Linky #21 - How Quality Emerges
On emergent quality, learning from mistakes, and why testers need to speak the language of decision makers.
This week I found myself returning to a simple but uncomfortable idea: most of the quality we experience in software is a side effect of how our organisations think, learn, and make decisions. Not the tests we write or the tools we use, but the behaviours we reinforce, the mistakes we learn from, and the language we speak when we raise issues.
Across the links this week there is a clear thread. They all touch on the conditions that cause quality to emerge, and the conditions that quietly erode it: blame over learning, speed without certainty, the paper cuts that push users away, and insights that are ignored because they are delivered in the wrong language.
Put together, they paint a picture of what quality engineering really is: shaping the structures, habits, and mindsets that help teams make better decisions under uncertainty.
Latest post from the Quality Engineering Newsletter
One thing I have been seeing a lot online recently is the idea that you cannot engineer quality, and therefore quality engineering is pointless. I agree that you cannot directly engineer quality into a system, but I disagree that the discipline is meaningless. What we can do is engineer the conditions that lead to more high-quality outcomes, and quality engineering is about learning how to do this at scale.
You need to enjoy the process
Results tend to accumulate to the person who enjoys the lifestyle that precedes the result.
Quality engineering and building quality in is difficult work. You are often working against inertia and the status quo. If you do not enjoy the process, it can slowly grind you down. I remind myself of this a lot and I keep a few small side projects that I know I will enjoy and that give me little wins - to help refill the tank. Via 3-2-1: How results accumulate, the secret to motivation, and living with a calm persistence - James Clear
The Quality Death Spiral
Here's what the Quality Death Spiral looks like (in the wild):
1. Endless bugs
2. Incidents every five minutes
3. Tech debt that could qualify as a heritage site
4. Testing that would break the will of most mortals
5. Chaos-led workflows
6. No time for learning or improvement
7. Everyone's fed up
8. And of course… hero culture
And here's the kicker:
It becomes a vicious cycle of problems creating more problems…
which create even more problems.
This is what happens when teams keep doing more of what they already do, instead of improving the system they are working in. It is a vicious cycle where each problem creates more problems. Continuous improvement exists to help teams escape this spiral by understanding how quality is being eroded and how to build it back in. This ties directly to my speed versus quality post. Via The Quality Death Spiral | Robert Meaney | LinkedIn
Human error is not a valid root cause
In continuous improvement, stopping at "human error" tells you nothing. It doesn't explain why the mistake was possible, why it wasn't caught, or why the system allowed it to happen in the first place.
It's about process, never people.
When you go deeper, you usually find unclear processes, poor layout, missing checks, conflicting priorities, weak communication, or tools that don't match the task. Fix those, and you prevent the problem. Stay at "human error", and it repeats.
In software it is far too easy to blame the person who triggered the event. But if you dig even slightly deeper, you find multiple contributing factors every time. (See my writeup of the CrowdStrike outage as an example.) If a post-incident review lists "human error" without explaining the conditions that made the mistake possible, it is a sign the analysis stopped once someone was blamed. As QEs, this is the pattern we should be most wary of. Via Why blaming "human error" is a mistake in problem-solving | David Dawson | LinkedIn
To move fast you need to have high quality
Every change to buggy code is painful (and time-consuming), particularly when you have no tests because you have "no time to write them". The best way to move fast is to increase quality and be serious about testing.
Speed and quality are linked, but not in the way people expect. Speed comes from certainty. If you know your system will hold up, you can change it quickly. If you are uncertain, you either test more or take a risky guess and let your users discover the failure. Both erode trust. Once trust is gone, people will avoid touching the system altogether. Via @allenholub.bsky.social | Bluesky
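To make the link between certainty and speed concrete, here is a minimal sketch in TypeScript. The applyDiscount function and its rules are invented for illustration; the point is that a few fast tests pin down the behaviour you rely on, so you can change the code quickly and a regression surfaces in seconds rather than in front of a user.

```typescript
// Runnable with Node's built-in test runner (node --test after compiling,
// or via tsx). The pricing rule below is a hypothetical example.
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical rule we want to be able to refactor with confidence.
function applyDiscount(price: number, loyaltyYears: number): number {
  if (price < 0) throw new RangeError("price must be non-negative");
  const rate = Math.min(loyaltyYears * 0.02, 0.2); // capped at 20%
  return Math.round(price * (1 - rate) * 100) / 100; // round to cents
}

// These tests document the behaviour we depend on. If a refactor breaks
// them, we learn immediately - not from a user discovering the failure.
test("discount grows with loyalty but is capped", () => {
  assert.equal(applyDiscount(100, 0), 100);
  assert.equal(applyDiscount(100, 5), 90);
  assert.equal(applyDiscount(100, 50), 80); // cap applies
});

test("invalid input fails loudly, not silently", () => {
  assert.throws(() => applyDiscount(-1, 3), RangeError);
});
```

With this safety net in place, rewriting applyDiscount is a quick, low-risk change; without it, every edit is a guess that your users get to verify.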
Errors of commission vs errors of omission
Ackoff posits that organizations, conditioned by educational systems, view mistakes as failures to be punished rather than learning opportunities.
He distinguishes between errors of commission (doing what should not be done) and errors of omission (failing to do what should be done). Because traditional accounting and management systems only track and punish errors of commission, managers are incentivized to do nothing or minimize change to avoid blame.
This "risk-averse" culture paralyzes innovation and rejects transformative frameworks like systems thinking. Ackoff proposes that organizations must record and monitor all decisions, including the decision to do nothing, to validate assumptions and foster authentic learning.
This distinction matters. Organisations often treat mistakes as failures to be punished, rather than opportunities to learn. Errors of omission, where something important was never done, often go unrecorded. That means the organisation learns nothing. I believe that learning from the loss of quality due to errors of omission and commission is one of the best ways to build quality in. Without this mindset, progress is always limited. Via Why Few Organizations Adopt Systems Thinking | John Pourdehnad | LinkedIn
Bugs as paper cuts
If you're building your product, don't sweat too much on error handling. Atlassian is a 23 year old company with $5B USD revenue, and they can't even bother to add permission checks and error handling 🤷🏻
If you want to remove a version from Jira and don't have the proper permissions, you will get a non-descriptive error, the version will be removed from the list (but returns after a page refresh) and the real error is hidden in the network response.
If it is good enough for Atlassian, it is probably good enough for your product
Small quality issues compound. This issue is not catastrophic, but it is annoying, confusing, and invisible to most users. It's like a paper cut: one is fine, but many cause real pain. The difference is that enterprise users often have no choice but to endure them. Consumer users do. They leave, and rarely come back - especially if they can get what they want elsewhere. This is where testers shine: a good tester would have spotted this early. Via If you're building your product, don't sweat too much on error handling | Remie Bolte | LinkedIn
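To show the contrast, here is a minimal sketch in TypeScript of the handling the quote says is missing. The endpoint, status codes, and messages are my own assumptions for illustration, not Atlassian's actual API: check the response, surface a descriptive permission error, and only update the UI after the server confirms the change.

```typescript
// Hypothetical client-side handler for removing a version. The endpoint
// and wording are illustrative assumptions, not a real Jira API.
async function removeVersion(versionId: string): Promise<void> {
  const res = await fetch(`/api/versions/${encodeURIComponent(versionId)}`, {
    method: "DELETE",
  });

  if (res.status === 403) {
    // Tell the user about the permission problem directly, instead of
    // leaving the real cause buried in the network tab.
    throw new Error(
      "You do not have permission to remove this version. Ask a project admin."
    );
  }
  if (!res.ok) {
    // Pass along the server's explanation when there is one.
    const detail = await res.text().catch(() => "");
    throw new Error(
      `Could not remove the version (HTTP ${res.status}). ${detail}`.trim()
    );
  }
  // Only now is it safe to remove the item from the on-screen list, so a
  // page refresh can never resurrect something the user believes is gone.
}
```

None of this is sophisticated, which is the point: a tester poking at the unhappy path would have pushed for exactly this before release.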
You need to speak stakeholder language
Testers do not get ignored because their insights are wrong. They get ignored because the language they use does not match the way stakeholders make decisions. You can raise a valid risk, explain the context, and show evidence, and it will still be brushed aside if it does not sound like a delivery or business problem.
This is one of the biggest lessons in QE. The insight is rarely the problem; the framing is. Learning the language your stakeholders use, and translating risks into delivery and business terms, is part of the job. Via Testers do not get ignored because their insights are wrong | Katja Obring | LinkedIn
Ban AI for all under 30s 🤯
Oh no. I'd ban them for under 30 year olds. The UK is not like China where the Ministry of Education identified critical thinking as a core skill back in 2018 and where the teaching of AI is under this critical thinking syllabus. Until we start taking critical thinking seriously, I would urge caution and give people a chance to develop the skills necessary.
Simon Wardley is not being serious (I think), but he does make a point. We do not teach critical thinking, yet it is the skill that determines how effectively someone will use new technologies like AI. The people who learn to think critically will leverage these tools far better than those who do not. Via X: Do you think social media should be banned for under 16 year olds? | Simon Wardley | LinkedIn and check out my guide on critical thinking at MoT.
Past Linky Posts
Linky #20 - Quality Isn't Magic
This week's links are all around the same idea: quality isn't magic, it's how we make choices. Like how we give feedback. How we talk about roles like tester and QE. How we build trust in our systems. And how we use AI without letting it quietly pile up future debt. Nothing here is flashy, it's the everyday behaviours that shape how our teams learn and build better software.
Linky #19 - From Shift-Up to Showing Up
In this week's linky, we look past the language of shifts and frameworks to focus on the work that matters: connecting with people, handling change safely, and practising what we learn every day. Real quality starts with how we show up for it.
Linky #18 - Seeing the System
This week's Linky is all about systems. Whether it's how a tiny bug caused a global outage, why metrics can mislead us, or how teams can better support neurodiverse colleagues. Every post is about the environments we create and how they shape the quality outcomes we care about.