The more I read about AI and software delivery, the more I keep coming back to the idea that speed is useful, but understanding is still the constraint.
This week’s Linky is about what becomes more important when software work speeds up: judgement, shared understanding, good work habits, and how teams build quality in.
🎧 No time to read? Check out the audio version, where I talk through this week’s links, connect the dots across them, and share practical takeaways you can use with your team.
Latest post from the Quality Engineering Newsletter
Continuing from my queueing quality post, I’ve been noticing some interesting pushback from teams trying to remove the in-test column. Much of that pushback is not really about the column itself, but the signals it gives the team about progress, risk, and where work is getting stuck.
So this week’s post looks at some of that pushback, where it comes from, and what teams can do instead. I’ve also created a companion guide for quality engineers that goes deeper into each alternative and what to watch out for.
I’m starting to notice a broader pattern. A lot of teams follow agile delivery practices, but only have a surface-level understanding of why those practices exist. That becomes a problem when they are asked to try something different, because from their perspective the current way of working seems fine. If the purpose of a practice is unclear, changing it just feels risky.
AI not only speeds you up, it can pull you into working longer
In an eight-month study of how generative AI changed work habits at a U.S.-based technology company with about 200 employees, we found that employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so. Importantly, the company did not mandate AI use (though it did offer enterprise subscriptions to commercially available AI tools). On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.
This is still early research, but it looks interesting. It suggests AI does not just help people do more work, it can also pull them into working longer and harder.
Instead of taking the natural breaks that often happen throughout the day, people fill that time with more prompting, more small experiments, and more iteration. Work that once felt hard to begin now feels easier to start, because the barrier is a prompt rather than the mental effort of figuring out where to begin.
That makes me think good work habits matter even more. Without boundaries, AI does not just increase output, it can also increase overwork. Via AI Doesn’t Reduce Work—It Intensifies It
The new silent killer of team velocity: cognitive debt
Cognitive debt tends not to announce itself through failing builds or subtle bugs after deployment, but rather shows up through a silent loss of shared theory.
The story of a team slowly losing the ability to make safe changes to their system feels very plausible. As AI accelerates everything, our speed is eventually constrained not by the AI, but by how well we understand how the system fits together and works.
That constraint shows up when each new change starts breaking something else. At first glance, that might look like poor architecture, and sometimes it is. But it can also be a sign that the people maintaining the system no longer understand how the parts relate to one another. A change looks simple enough, but has unintended effects on adjacent areas.
It makes me think the idea of a tiny team building and safely maintaining huge systems is probably unrealistic. The knowledge of how a system works and fits together has to exist somewhere, and if it does not exist in the minds of the people maintaining it, making safe changes becomes much harder. Understanding how quality is created, maintained, and lost within a system is going to matter even more in the future. Via How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt
AI will likely lead to more software, not less
...we’ve found that every increase in the efficiency of software programming - the move from lower-level to higher-level languages, the emergence of frameworks and libraries, the endless move away from bare metal as compute becomes ever-cheaper and more abundant - has resulted in a dramatic increase in demand for software engineering labor.
AI will most likely be no different. At the moment, it seems more likely to increase people’s productivity than replace them outright.
If AI is going to accelerate everything, then we need to get better at how we produce software. If your processes are already degrading quality, AI is just going to accelerate that too. And if the bottleneck remains human judgement, then building quality into our teams, habits, and decision-making becomes even more important. Via Why I’m not worried about AI job loss - David Oks
Speed and quality of execution may matter more than the code itself
When you look at what actually makes a tech org’s business model defensible, it’s usually elsewhere:
proprietary data
deep domain expertise
distribution and brand
network effects
speed and quality of execution
It’s that last point, speed and quality of execution, where quality engineers have real value to bring. Not by acting as a gate at the end, but by helping teams understand how to move quickly without degrading the system around them. Via Debunking the myth of code as a competitive advantage | Rob Bowley | LinkedIn
Good tools amplify testers, not replace them
Good tools don’t replace testers. Good tools amplify them. Instead of spending hours maintaining scripts, testers can focus on intent. On risk. On scenarios that matter. Instead of fighting the tools, they direct them.
I think this goes wider than testing. A lot of the rhetoric around AI is that it will replace people, but at the moment it looks much more likely to amplify them.
And if that is true, then the value of human judgement does not disappear. It becomes even more important. Via Testing at the Speed of AI | Alan Page
If the robots can do my job, what do I do?
I was reading Vernon’s post about how AI has left him feeling exhilarated, daunted, and as if he is having an existential and identity crisis all at the same time. I could relate to that. Not that long ago, I think I would have responded in a very similar way.
For me, what used to sit underneath that kind of reaction was uncertainty. I liked knowing what I was doing, where I was going, and how things were going to work out. When I did not have that, I would start overthinking, then worrying, and then spiralling.
But with AI, it has felt different. I hear the hype and the doom and gloom, but I’m not too worried. Not because I know what is going to happen - it could be amazing, terrible, or something in between. What feels different is that I trust myself more to adapt to whatever comes next.
One thing I’m trying to get better at is sitting with ambiguity for longer. Not rushing to resolve it. Not needing immediate certainty before I can move. That has been uncomfortable, but over time my tolerance for uncertainty has grown, and with it some of the overthinking has eased too.
A lot of that learning came from The Uncertainty Toolkit, which is worth checking out if uncertainty is something you struggle with too.
TDD: where did it all go wrong?
Ian Cooper’s talk “TDD, where did it all go wrong?” is still one of the best explanations of how TDD drifted from its original intent, why it got blamed when it became painful, and, importantly, what you should actually be doing instead.
Highly recommend watching this talk if you’ve not seen it. It covers what went wrong as teams tried to adopt TDD, where BDD came from, and why architecture matters so much when building systems this way.
This feels especially relevant right now. If teams are going to use TDD to help maintain AI-generated code, then we need to better understand what TDD was actually trying to achieve in the first place. Via TDD, Where Did It All Go Wrong (Ian Cooper) | Rob Bowley
Close
The thread running through all these posts is that while AI increases the speed of work, it does not reduce the need for judgement. It increases it, along with the need for shared understanding, good work habits, and strong feedback loops.
AI accelerates whatever a team is already doing. If their ways of working are helping them build quality in, then that’s a good thing. But if those ways of working are hiding risk, reinforcing handovers, or degrading quality, AI is likely to accelerate that too.
For quality engineers, the opportunity is not to be the people who slow things down, or the people who panic about the tools. It is to help teams move with more awareness. To bring better signals, clearer judgement, and a stronger understanding of how quality is created, maintained, and lost within the system.
It’s not about fighting the speed, but helping teams use it without losing understanding.