This week’s Linky is all about the work that sits underneath good software delivery: communication, judgment, feedback loops, and the invisible effort that keeps teams moving.
🎧 No time to read? Check out the audio version, where I talk through this week’s links, connect the dots across them, and share practical takeaways you can use with your team.
Latest post from the Quality Engineering Newsletter
I’ve been getting into a lot of discussions recently about dropping the test column in teams. Interestingly, most of the pushback has not come from testers, but from developers. The main concerns seem to be around visibility of work, knowing where things are up to, and how to work asynchronously in hybrid teams.
I’ll be following up soon with some thoughts on how to tackle those concerns, but sharing the post on LinkedIn brought out some useful discussion.
One commenter suggested I was confusing quality with testing; they had not understood what I meant by queueing quality. What I mean by that is the queueing of feedback and decision-making that happens when work piles up in an In Test column. The impact is usually delayed learning, slower flow, and teams optimising for being busy rather than actually moving work forward.
Another comment pointed out that the restaurant analogy could encourage management to jump straight to automation as the answer. That is a real risk. But it can also be a useful entry point for talking about the limits of automation, and the broader system issues behind the queue: testability, ownership, collaboration, risk, and decision latency.
In that sense, the pushback was helpful. It highlighted where the idea needed more explanation, but also surfaced the deeper tensions teams are trying to work through.
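The cost of that queue can be made concrete with Little's Law, which says the average time an item spends waiting is the amount of work in the queue divided by the rate work leaves it. A minimal sketch, using made-up numbers for an imaginary In Test column:

```python
def average_wait_weeks(wip: float, throughput_per_week: float) -> float:
    """Little's Law: average time an item spends in a queue.

    wip: average number of items sitting in the queue (e.g. the In Test column)
    throughput_per_week: average number of items leaving the queue each week
    """
    return wip / throughput_per_week

# Hypothetical team: 12 tickets stacked in "In Test", 3 tested per week.
# On average, each ticket waits 4 weeks before anyone learns anything from it.
print(average_wait_weeks(wip=12, throughput_per_week=3))  # → 4.0
```

The numbers are invented, but the shape of the problem is not: every ticket added to the column stretches the feedback delay for all of them.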
Time allocation is not equal to capacity allocation
Etch this into your product brain: time allocation != capacity allocation. Points (or even “flow”) allocation != capacity allocation. Both comparisons feel logical, yet are deeply flawed. I run into this every day when chatting with customers and prospects at Dotwork. The intent is good... but there’s a fundamental gap in terms of how the system is being perceived.
This is the same tension I was highlighting in my post on queueing quality, but from a product planning perspective.
When you allocate one tester to a team for 14 hours, that is time allocation, not testing capacity.
It does not mean they can spend 14 hours testing tickets.
In reality, they will:
attend meetings
read and understand tickets
discuss edge cases and changes with engineers
share feedback
retest fixes
investigate unexpected behaviour
By the time you account for all of that, they might only have 4 to 5 hours of actual testing time. But teams often assume that time allocated equals testing capacity. Then we wonder why the team only managed to test one ticket instead of three.
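To make that gap visible, here is a rough sketch of how 14 allocated hours shrink once the surrounding work is counted. The overhead figures are illustrative, not from any real team's calendar:

```python
allocated_hours = 14.0  # one tester "allocated" to the team for the week

# Illustrative overheads, all invented for the example
overheads = {
    "meetings": 3.0,
    "reading and understanding tickets": 2.0,
    "discussing edge cases with engineers": 1.5,
    "sharing feedback": 1.0,
    "retesting fixes": 1.5,
    "investigating unexpected behaviour": 1.0,
}

testing_capacity = allocated_hours - sum(overheads.values())
print(f"Allocated: {allocated_hours}h, actual testing capacity: {testing_capacity}h")
# → Allocated: 14.0h, actual testing capacity: 4.0h
```

Swap in your own team's numbers and the point usually survives: the allocated figure and the testing figure are rarely close.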
As quality engineers, we need to think in terms of the wider socio-technical system. Testing does not happen in isolation. To get the work done, a lot of other things need to happen around it. Some of those are predictable, and some only become visible once the work starts.
That is one reason smaller batches are better. Smaller pieces of work are easier to understand, easier to test, and easier to finish. They reduce coordination overhead, limit interruption, and help teams focus on completing work instead of starting more and creating queues. Via Time vs Capacity Allocation: A Common Misconception | John Cutler | LinkedIn
Which skills are we worst at self-assessing?
it’s communication & influence.
We do a lot of telling in our work. Whether it’s risks, test plans, bugs, we seem to spend a lot of time telling people stuff. And maybe with all that telling, we’re prone to get stuck in broadcast mode.
The trouble is broadcasting is only one aspect of communication. So here’s a list of communication and influence skills I believe all Testers should seek to improve:
Rhetoric and argumentation
Bug advocacy
Storytelling
Influencing without authority
Positioning and sales
Making work visible
Active listening
Vernon then breaks each one down and explains why it matters.
The two that most surprised me were "positioning and sales" and "making work visible". I had not really thought of those as communication skills before, but he is right.
A lot of senior test and quality work depends on creating pull, not just providing answers. In practice, that often means helping teams understand the value of your skills, your approach, and the kind of support you can offer. In that sense, you are constantly positioning your work.
The same goes for making work visible. When you work across a department, a lot of what you do can easily disappear into conversations, coaching, nudges, and behind-the-scenes support. Helping people understand what you are doing, and how you are doing it, is not self-promotion for the sake of it. It is part of helping others learn, connecting the dots, and seeing where quality work is really happening. Via Boosting Testers’ Communication & Influence Skills | Vernon Richards | LinkedIn
We are going to need more software, not less, with AI
AI changes what we build and who builds it, but not how much needs to be built. We need vastly more software, not less.
Steven makes a really interesting point. With the rise of AI, we are going to hear from both the AI boomers and the AI doomers. That tends to happen in almost every hype cycle around new technology, and in truth we need both. We need the optimism to build the thing, and the caution to think about what the new thing might bring.
The reality, though, is that the future rarely looks like either group predicts. Instead, it becomes something no one could have fully anticipated. We live in a big, complex system, which makes accurate prediction extremely difficult.
For quality engineers, I think we need to take a more balanced view and help people avoid getting too caught up in either narrative. One point from Steven’s post that really stuck with me is that AI will probably result in more software, not less, as we start embedding it into things that have never had software before.
A lot of the doom around AI assumes it will take all the jobs, which would be a serious problem if the world were static. But with previous technological shifts, the market has often expanded and created new kinds of work rather than simply eliminating it.
That said, there is a genuine difference here. AI is automating both physical and cognitive work, and it is moving much faster than many past breakthroughs. So the doomers are not wrong to raise concerns.
And as for the boomers, things are rarely as good as they make out, but we need them too. They are often the ones pushing things forward and helping us discover what new possibilities are actually worth exploring.
All of that leaves me thinking that the work of quality engineers and testers will certainly evolve, but I do not think we are disappearing any time soon. Via 238. Death of Software. Nah.
AI hype still leaves a lot of quality work on the table
Anthropic is betting that the value shifts from people who can write code to people who can define what to build, sequence the work across parallel agents, and catch the 10% of failures that compound into production disasters. That’s the “builder” Cherny is describing.
This line caught my attention because it frames success around catching the 10% of bugs that cause disasters.
This obviously matters, but it still leaves a lot of quality problems on the table.
Users don’t judge quality only by whether your product avoids catastrophic failure. They also judge it by all the little frustrations that build up over time: awkward flows, confusing behaviour, rough edges, inconsistent responses, and things that technically work but do not feel good to use.
Those smaller issues may not bring production down, but they absolutely shape how people perceive your product. Enough of those paper cuts, and people start to think your software is poor. And if they think they can get a better experience elsewhere, they will leave.
That is one reason I think quality work goes beyond simply preventing disasters. Avoiding major failure is important, but so is shaping the overall experience people have with the product. Via Anthropic’s Cherny on the End of Software Engineering as We Know It | Maria R. | LinkedIn
AI can build and test, but can it judge well enough?
Let me give you an example so you can understand what this actually looks like in practice. I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect.
I am all for developers testing their own software, and everything that the AI system is described as doing is exactly what a good engineer should be doing.
But the phrase that sticks out to me here is: it is usually perfect.
That does not fill me with confidence.
The question is not just whether AI can generate code and run checks against it. The deeper question is whether the same system that built the thing can be trusted to judge whether it is genuinely good enough.
Experience tells me there will still be edge cases, awkward interactions, assumptions baked into the flow, and usability issues that affect how people experience the product. Not always dramatic bugs. Sometimes just the kind of issues that make software feel clumsy, frustrating, or slightly off. That is where an independent perspective is still important. Building and checking are not quite the same as judging. Who gets trusted to make that judgment is going to be interesting. Via Something Big Is Happening | Matt Shumer
AI is not coming for your job. At least not in the way people think
AI just doesn’t yield the imagined gains, at least not if you consider the entire product lifecycle and you want a reliable, secure, extensible, and scalable product that does what customers need. I’m not saying an LLM isn’t a useful tool, only that it’s not the panacea these execs imagine. The only way to speed up an entire system is to address the entire system, especially any bottlenecks. The bottleneck is only rarely the coding. Spotify, which has gone full AI to write its code, hasn’t laid anybody off because it understands these issues.
He goes on to argue that there is one place AI might take your job: if you are unwilling to learn how to work with it.
Allen then makes a point that feels particularly relevant:
You need to understand systems thinking and software architecture. You need strong written and verbal communication skills, as well as the social skills to make that communication effective. You need to talk to your customers and have empathy for their problems.
Those are all skills that quality engineers have been building for years while working with teams to help them build quality in.
That is why I think a lot of the conversation about AI replacing people is framed too narrowly. It assumes the valuable part of software work is mostly typing code. But when you look at the whole product lifecycle, the constraints are usually broader: understanding the problem, making trade-offs, coordinating people, interpreting signals, handling risk, and learning from feedback. In other words, a lot of those are people problems.
That work does not disappear just because code generation gets faster. If anything, the people problems become the ones most worth solving. Via AI Is Not Coming For Your (Developer) Job - by Allen Holub
Past Linkys
Linky #26 - Failure, feedback, and unintended outcomes
This week’s Linky is all about how quality is shaped by more than just code. It is shaped by incentives, boundaries, feedback loops, and how we respond when things go wrong.
Linky #25 - Making learning easy in complex systems
This week’s Linky is all about learning in complex systems, and what gets in the way. Because we keep reaching for quick fixes (mindset shifts, root causes, other companies’ playbooks) and hoping for the best.
Linky #24 - Principles, Practice and Reality
This week’s Linky is all about how teams define “good”, then build feedback loops to protect it. From Continuous Delivery principles, to quality strategy, to the messy reality of UI and human behaviour. Plus: I’m experimenting with an audio version of Linky, let me know if it’s useful.