Agility with Compassion

Coaching, Facilitation & Training

Modelling system quality

I see many people struggle with quality requirements, including experienced business analysts and requirements engineers. Worse still, sometimes people just ignore quality requirements because they don’t know how to specify them properly. While quality requirements may be a bit harder to specify than functional requirements, they are often crucial to the success of a product. On top of that, many of these quality requirements impact the architecture of the whole system; they can’t just be added on at the last minute. I’d say that makes it well worth investing some time to properly specify what quality levels are required. Here are some of my tips.

What are we talking about?

First let me define what I mean by “quality requirements”. My favourite definition is based on Tom Gilb’s excellent book Competitive Engineering:

A quality requirement expresses ‘how well’ a system will perform.

This is a simple and useful definition, provided you take a broad interpretation of “perform”. Perform in this context does not mean “performance” in the sense of throughput or response time. Instead it refers to any aspect you care about: how well a system performs with respect to usability, security, availability, performance, learnability, etc.

Quality requirements are sometimes called non-functional requirements, though in most cases non-functional requirements include other types of requirements too, such as constraints.

Getting started

When you start out on a new product or a major change, it is essential to find out which quality aspects matter most. Look for unique selling points as well as dissatisfiers (“Must-be quality” in Kano terminology). The company’s core values or market proposition may also provide clues about important quality aspects. Select the top three to five quality aspects, and focus on those.

Why focus on just a few? It is usually hard for a system to do well on many different quality aspects (fast response time, high security, high availability, many transactions per second, many concurrent users, etc.), so you really need to find out which are most important and get them right. Given a few important quality aspects, engineers can often design a suitable system.

Also, the system will still perform at some basic level on the quality aspects you don’t specify: if you omit user-friendliness requirements you don’t get zero user friendliness, but the result may not be the easiest system to use.

Selecting quality aspects

If your stakeholders can’t think of any quality aspects, there are many ways in which you can help them. I often ask “What targets do you have?” or similar questions to get people started. Alternatively, you can use quality models such as ISO 9126 or ISO 25010 as a checklist. Bear in mind that these models contain quite abstract terms; you will need to be more specific and make sure you know what your stakeholders mean. Abstract terms like “reliability” and “maintainability” can mean very different things to different stakeholders. Also, these models are aimed at the software engineering and systems engineering disciplines; other types of products may require very different quality aspects. For example, clothing may need to be comfortable and food may have to be nutritious or tasty.

Specifying the required quality levels

Once you have selected a few quality aspects, you can drill down into what each of them really means. Work with key stakeholders to define the “gist” or “essence” of what they mean. For example, starting with “maintainability” you may find that what the stakeholders essentially want is the ability to release new versions of the software without disrupting operations. This could then be your quality requirement.

The next step is to determine how to validate this requirement. Key considerations here are:

  • What level is required? In the above example you need to define what level of disruption is acceptable. Perhaps you also need to define types of releases or frequency of releases. The required level could be, for example: “Less than 5 minutes of disruption per release.” It could also be specified per month, or depend on the type of release, to name a few possible alternatives.
  • How, and how often, are you going to measure the quality level? The example level above implies measuring when the system is already operational. This is often the best match with what stakeholders specify. However, if the system doesn’t meet the requirements, it is a bit late to find out at this stage – and probably costly to fix too. For crucial quality requirements I suggest trying to find early-stage predictors that can be measured. In our example of releasing without disrupting operations, one possible design-time predictor could be the number of dependencies between system components: the higher the number, the more likely operations will be disrupted, because components that are being used are more likely to need a restart. The difficult part here is finding appropriate thresholds: at what number of dependencies should we start to worry? Of course, if you release often (as with Agile approaches), you have more chance of measuring both the predictor and the actual disruption. This makes it more feasible to work out the correlation between the two measures and then set target levels appropriately. A minimal sketch of such a dependency count follows this list.
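To make the predictor idea a little more concrete, here is a minimal sketch of what an automated dependency count could look like. The component names, the dependency list and the warning threshold are all illustrative assumptions; in practice you would extract the dependencies from your own architecture model or build tooling and calibrate the threshold against the disruption you actually measure per release.

```python
from collections import Counter

# (user, used) pairs: "web-frontend uses order-service", and so on.
# All component names are made up for illustration.
dependencies = [
    ("web-frontend", "order-service"),
    ("web-frontend", "customer-service"),
    ("order-service", "customer-service"),
    ("order-service", "payment-service"),
    ("reporting", "order-service"),
]

THRESHOLD = 2  # assumed warning level; calibrate against measured disruption

# Count how many other components depend on each component.
incoming = Counter(used for _, used in dependencies)

for component, count in incoming.most_common():
    warning = "  <-- restarting this is likely to disrupt its dependants" if count >= THRESHOLD else ""
    print(f"{component}: used by {count} component(s){warning}")
```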

What is the right quality level?

If you end up in a situation where you have no idea what the appropriate quality level should be, try one or more of the following:

  • Measure how well an existing system performs. Then determine how well the new system must do in relation to the reference system.
  • Specify a range by using multiple levels, for example a minimum level below which the product cannot be released and a desired level above which no more effort should go into improving this particular quality aspect. Note that for some quality aspects bigger numbers mean better performance (e.g. availability and throughput), while for others it is the reverse (e.g. response times, downtime).
  • Specify different levels for different parts of the system. For example, the response time for retrieving customer details may be quite different from the response time for printing customer details.
  • Examine the (estimated) cost of a few different quality levels. There are often “natural” thresholds to a system’s performance: thresholds caused by limitations of the architecture, for example. You may find that increasing availability from, say, 99.0% to 99.2% doubles the development cost. This may be enough to convince stakeholders to settle for a lower availability. The short calculation after this list shows what such figures mean in terms of downtime.
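When discussing such trade-offs it often helps to translate availability percentages into concrete downtime budgets. Below is a quick back-of-the-envelope conversion in Python; the 30-day month and the specific availability figures are assumptions chosen purely for illustration.

```python
# Convert availability percentages into allowed downtime per month,
# assuming a 30-day month. The percentages are illustrative only.
MINUTES_PER_MONTH = 30 * 24 * 60

for availability in (0.990, 0.992, 0.999):
    allowed_downtime = MINUTES_PER_MONTH * (1 - availability)
    print(f"{availability:.1%} availability -> about {allowed_downtime:.0f} minutes of downtime per month")
```

Seeing that 99.0% already allows roughly seven hours of downtime in a month often makes the cost discussion with stakeholders much more concrete.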

Just do it

I hope these tips will encourage you to give quality requirements sufficient attention. If you are still concerned that it is too difficult, remember that just finding out which quality aspects are most important may help the team to design the system in such a way that it can perform well in those areas. That alone may be the difference between success and failure.

Robert van Lieshout

Seeing what isn’t there

Some time ago I shaved off my moustache (yes, really). I’d had a moustache all my adult life, so it felt like a big change. My face seemed very strange whenever I looked into the mirror during the first few days. Yet very few people commented on it. Not even my mother noticed that I had removed my moustache, although she did notice my new shoes straight away.

This shouldn’t have surprised me. If I had been less self-centred I might have realised that:

  1. The human brain is very good at recognizing faces. Once a face has been recognized, there is usually no need to analyse its details (this is a simplification; at the very least, the face is probably checked regularly for changes in facial expression).
  2. It is more difficult to notice that something is missing than to notice something new that is visible. The absence of a moustache is less obvious than the presence of new shoes.

In our observations we are constantly working with models and assumptions. This is usually beneficial: we cannot possibly evaluate all the signals we receive, so we ignore most of them and keep only what we deem relevant. The parts we filter out are partially replaced with models and assumptions: our brain knows what a face looks like, and what the room looks like. It can fill them in from memory; it doesn’t need to ‘redraw’ them every second from what the eyes make out.

This works quite often, but it is not foolproof. There are many optical illusions that rely on the assumptions of our brains. You’ve probably come across some yourself.

The same mechanisms apply to other types of processing: listening and understanding, for example. How well did you really listen during that interview last week? Are you sure you attached the same meaning to each word as the person you interviewed? Just as important: did you hear what she did not say, i.e. which topics she (subconsciously) avoided?

As a business analyst I find it pays to be critical towards what is missing: which key requirements are missing, which subjects have we unconsciously skipped? Seeing those gaps is hard, though. I try to compensate with a structured approach using multiple viewpoints, and with frequent reflection on the completeness of the scope of my work. How do you compensate for your human shortcomings?

Robert van Lieshout

The mizuiro effect

Each language and each model has its strengths and limitations. A language can sensitize you to certain types of issues, but at the same time it may leave you with a blind spot for other types of issues. I call that the Mizuiro effect. A business analyst should be aware of the strengths and limitations of each language and each model (s)he uses. By applying at least two complementary languages or models, the business analyst can reduce the risk of omissions.

The linguistic relativity principle

In 1940 Benjamin Lee Whorf introduced the “linguistic relativity principle”:

“users of markedly different grammars are pointed by their grammars toward different types of observations and different evaluations of externally similar acts of observation, and hence are not equivalent as observers but must arrive at somewhat different views of the world”.

At first many people were sceptical about this principle. Nowadays there is a lot of scientific evidence to support a certain amount of influence of grammar on cognition. One example is the paper by Athanasopoulos et al.: “Representation of colour concepts in bilingual cognition: The case of Japanese blues”.

Japanese divides the blue region of colour space into a darker shade called ‘ao’ and a lighter shade called ‘mizuiro’. English does not have two distinct words (just ‘blue’, which can be modified to ‘dark blue’ or ‘light blue’). The paper shows that Japanese–English bilinguals who used English more frequently distinguished blue and light blue less well than those who used Japanese more frequently. The authors conclude that linguistic categories affect the way speakers of different languages evaluate objectively similar perceptual constructs.

When I first read this, it reminded me of the “Eskimo words for snow” claim. This is the (apparently not entirely correct) claim that Eskimos have an unusually large number of words for snow.

Though this particular claim may not be entirely correct, recent research like “The case of Japanese blues” does show that language affects our perception (and possibly vice versa), at least to some extent. It seems each language has its strengths and weaknesses. My guess is that the Eskimo-Aleut languages are strong at specifying different snowy conditions, but weak at distinguishing varieties of tropical hardwood trees.

Strengths and limitations of language

The strengths and limitations of language also impact my work as a business analyst, in many different ways. For example:

  • Natural language is inherently ambiguous.
  • Subject matter experts often have their own specialized vocabulary.

Models and many requirements specification techniques are languages of a sort. I see them as highly specialized languages designed for a particular purpose. Being specialized exaggerates the Mizuiro effect: a specialized language is great for analyzing or specifying the kinds of issues it was designed for, but often hopelessly inadequate for other issues. Take use cases, for example: they are great for identifying and specifying tasks to be performed by the system, but not so good for describing concepts and the relationships between concepts.
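To illustrate the kind of information a use case tends to leave implicit, here is a small concept-model sketch. The concepts and relationships (Customer, Order, OrderLine) are hypothetical examples rather than something from a real project, and the Python dataclass notation is just one convenient way to write such a model down.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OrderLine:
    product_code: str
    quantity: int

@dataclass
class Order:
    order_id: str
    lines: List[OrderLine] = field(default_factory=list)   # an Order consists of one or more OrderLines

@dataclass
class Customer:
    customer_id: str
    orders: List[Order] = field(default_factory=list)      # a Customer places zero or more Orders
```

A use case for “Place order” would describe the steps, but not that an order consists of order lines or that a customer can have many orders; the concept model captures exactly that.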

Complementary languages

If you are aware of the strengths and limitations of the languages, models and techniques you use (let’s just call them languages for simplicity), then you can apply those languages effectively. In most cases you will have to use different languages, and those languages must complement each other: the strengths of one language make up for the limitations of the other. In that context, Stephen Ferg’s analogy with chocolate is quite entertaining.

This is true regardless of the development approach being used: waterfall, agile or any other approach shouldn’t rely on a single language. (Yes, dear Scrum practitioners, this applies to you too. Using user stories to the exclusion of all else is risky. Why not throw in a data dictionary or the odd decision table, like the one sketched below?)
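As a taste of what such a complement adds, here is a hypothetical decision table for a discount rule that might sit behind a user story like “As a returning customer I want a discount on large orders”. The conditions, the threshold and the percentages are invented for illustration; the point is that the table forces every combination of conditions to be considered, which a story narrative rarely does.

```python
# Hypothetical decision table behind a discount user story.
# All conditions, the threshold and the percentages are made-up examples.
DISCOUNT_TABLE = {
    # (returning_customer, large_order): discount percentage
    (False, False): 0,
    (False, True): 5,
    (True, False): 5,
    (True, True): 10,
}

def discount(returning_customer: bool, order_total: float) -> int:
    """Look up the discount; every combination of conditions is covered explicitly."""
    large_order = order_total >= 100  # assumed threshold for a "large" order
    return DISCOUNT_TABLE[(returning_customer, large_order)]

print(discount(returning_customer=True, order_total=250))   # -> 10
print(discount(returning_customer=False, order_total=40))   # -> 0
```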

Further reading

The influence of the Mizuiro effect on business analysis & requirements specification was recognized a long time ago, and many approaches provide guidance on how to deal with it. A relatively old and very extensive example is the Zachman framework. My personal favourites on this topic are:

Ian Alexander. Ian’s book ‘Discovering Requirements’ (with Ljerka Beus-Dukic) is based around a matrix consisting of requirements elements (stakeholders, goals, context, scenarios, qualities and constraints, rationale, definitions, measurements, priorities) and discovery contexts (from individuals, from groups, from prototypes, from archeology, from standards & templates, from trade-offs).

Soren Lauesen. Soren’s book “Software Requirements – Styles and Techniques” groups techniques into categories such as data requirement styles, functional requirement styles, functional details, interfaces, and quality requirements. He lists the advantages and disadvantages of each technique.

Ellen Gottesdiener. Ellen is my favourite when it comes to this topic. The topic features in all her books, but I particularly recommend her brand new book Discover to Deliver (with Mary Gorman). The book introduces an ‘Options Board’ with 7 product dimensions: user, interface, action, data, control, environment, and quality attribute.

Don’t be blue

We are all affected by the Mizuiro effect, and our requirements models are too. I try to turn it to my advantage by combining multiple complementary languages. How do you deal with the Mizuiro effect?