Linky #5 - From Learning Cultures to AI-Driven Development
A curated list of thought-provoking reads on learning cultures, AI-driven development patterns, and a £1M UI bug.
Welcome to the fifth edition of Linky (scroll to the bottom for past issues). In each issue, I'll highlight articles, essays, or books I've recently enjoyed, with some commentary on why they caught my attention and why I think they're worth your time.
This week's Linky No. 5 rounds up key insights shaping quality engineering, from the role of leaders in building learning cultures to the real-world cost of software bugs—like a £1M UI error. It explores AI development patterns, from "vibe coding" to AI-assisted test automation, and asks whether LLMs can truly uncover unknown unknowns. Plus, a look at why rigid frameworks can hinder team growth and how AI-enabled development is evolving.
Latest post from the Quality Engineering Newsletter
My three takeaways from PeersCon: the Quality Engineering Mindset, QE's links to Stoicism, and what to communicate when you've got nothing to communicate. Also, introducing a new type of post: my notes from the conference.
From the archive
Not all collaboration is equal. Some ways of working create friction, while others unlock real impact. This post shares a framework to assess how you collaborate and actionable steps to level up.
Leaders’ Critical Role in Building a Learning Culture | MIT Sloan
Found this via Emily Webber’s Awesome Folks Newsletter. It’s a paid MIT Sloan article, but the synopsis is really interesting:
Leaders embraced the role of learning facilitators and actively prioritized the development of employees’ systematic problem-solving abilities. By framing each problem as an opportunity for growth, they encouraged employees to focus on strengthening long-term skills rather than just resolving the issue at hand.
This ties directly to psychological safety. As leaders, we need to create environments where people can learn effectively with one another while doing the work—not just in classroom settings. By framing problems as growth opportunities, we make it easier for people to share, learn, and improve together.
As quality engineers, we play a key role in fostering learning cultures within our teams. We need to make it safer for people to admit what they don’t know, share what they don’t understand, and own up to mistakes. That’s how we get to the real issues and solve them together. How do you encourage learning in your teams? Read the paper
Paddy Power Had to Pay Out £1M Due to a UI Bug | BBC News
Found this via Five for Friday – Tooth of the Weasel.
After spinning the jackpot wheel, Mrs. Durber's iPad showed she had won the "Monster Jackpot" of £1,097,132.71. But the gambling giant only paid out £20,265, claiming she had actually won the smaller "Daily Jackpot", with the difference attributed to a UI bug.
The judge sided with the winner, stating:
“What you see is what you get.”
You can’t just say, "Sorry, that was a bug," when real money is on the line. Bugs can be expensive, and this is a great example of how quality failures have very real financial consequences. What’s the costliest bug you’ve ever seen? Read more at BBC News
Vibe Coding from Andrej Karpathy
There’s a new kind of coding Karpathy calls "vibe coding"—where you fully lean into the vibes, embrace exponentials, and forget that the code even exists.
Sometimes, he doesn’t even touch the keyboard—he just speaks to the LLM. I can see this being useful for quick, throwaway hacks to test something out. But what about long-term maintainability? A quarter of startups in YC’s current cohort have codebases that are almost entirely AI-generated (via TechCrunch). Feels like a fun experiment, but also a potential recipe for chaos. Read Andrej's tweet for more on vibe coding.
LLMs Are Only Good for Recombinatory Innovation? | Ethan Mollick's LinkedIn Post
If it turns out LLMs are only capable of recombinatory innovation (finding novel connections among existing knowledge), that would still be very useful. Most innovation is recombination and one of the big issues in science is that fields are too vast for scientists to bridge them to find connections.
This makes me wonder: if LLMs were used in testing, would they only be able to find issues in the Known Knowns and Known Unknowns they’ve been trained on? That’s probably still way more than any one person could ever know. But you’d still need humans to uncover the Unknown Unknowns. Read Ethan Mollick's full post, and check out my own post to learn more about the Testing Unknown Unknowns.
"I'm just being honest" is a poor excuse for being rude | Adam Grant on Substack's notes
"I'm just being honest" is a poor excuse for being rude.
Candor is being forthcoming in what you say. Respect is being considerate in how you say it.
Being direct with the content of your feedback doesn't prevent you from being thoughtful about the best way to deliver it.
I frequently hear "I'm just being honest" in our industry, and I think Adam identifies what many people overlook: the importance of respecting the other person and being considerate in how you express your thoughts. The Radical Candor framework and the concept of psychological safety both aim to address this. While we need to voice our opinions, it’s the way we say it that truly determines whether we are coming from an altruistic or egotistical standpoint. Read Adam's note on Substack.
Work is Changing | One Useful Thing
Work is changing, and we're only beginning to understand how. What's clear from these experiments is that the relationship between human expertise and AI capabilities isn't fixed. Sometimes I found myself acting as a creative director, other times as a troubleshooter, and yet other times as a domain expert validating results.
One thing that seems certain with AI tools is acceleration—but how you use them depends on (1) how well you can prompt them, (2) how capable they are at what you’ve asked for, and (3) how effectively you can assess and refine their output.
It’s more than just prompt engineering—it’s knowing what good looks like and iterating until you get there. How do you see AI changing the way we work? Read Ethan Mollick's post for more.
The Best Framework is No Framework | Allen Holub
Frameworks are all the equivalent of taking a process that worked well for one team and forcing it down the throats of everybody else, and the results are the same ones that GM witnessed. Every team is unique. Every team needs to develop their own way of working. That has to be done from a position of knowledge, of course. Random flailing around is like 1,000 monkeys typing Shakespeare. Learning, then, has to be a normal work activity actively supported by the organization.
I’ve seen this firsthand when teams adopt Scrum. Without the autonomy to adapt, they stick rigidly to the process, even when it no longer serves them. That’s why I always encourage teams to focus on making their work environments healthier rather than blindly copying another team’s approach. How does your team evolve its ways of working? Read Allen's post.
Four Patterns of AI-Enabled Development | Patrick Debois
Patrick Debois shared four key patterns for working with AI:
✅ Producer → Manager – Supervising AI-generated code, reviewing and refining output.
✅ Implementation → Intent – "Vibe coding" with AI, focusing on what you want rather than how to implement it.
✅ Delivery → Discovery – Using visual mockups to instruct AI, letting product managers define exactly what to build.
✅ Content → Knowledge – Turning your codebase into a knowledge base for faster onboarding and knowledge sharing.
I attended a recent AI-enabled software development meetup, and Patrick (of DevOps Handbook fame) framed these patterns really well. I can see myself coming back to them when thinking about how to describe AI’s role in software development. Used in the right contexts, they can seriously accelerate product development. Which of these patterns have you seen in action? Read my LinkedIn post
A Glimpse into the Future of Automated UI Testing?
At the same meetup:
The second talk, by Adrian Rapan, was a great example of the Producer → Manager pattern in action. He used an LLM (Claude, via the Cursor AI editor) to spin up a Playwright UI automation framework in minutes—a task that would normally take days. More importantly, this freed him up to focus on the real value: building tests with his dev team.
The setup took just over a week, but once done, it only required minor tweaks. He structured it through three files (a rough sketch follows the list):
🔹 A test file describing behaviour (Given, When, Then)
🔹 A page object layer defining UI structure
🔹 A user journey layer outlining interactions
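To make that structure concrete, here's a minimal sketch of how those three layers might fit together in Playwright/TypeScript. The file names, selectors, and the jackpot scenario (a nod to the Paddy Power story above) are my own illustrative assumptions, not Adrian's actual code:

```ts
// --- pages/jackpot.page.ts: page object layer, defines where things are in the UI ---
import { type Locator, type Page } from '@playwright/test';

export class JackpotPage {
  readonly spinButton: Locator;
  readonly resultBanner: Locator;

  constructor(readonly page: Page) {
    this.spinButton = page.getByRole('button', { name: 'Spin' });
    this.resultBanner = page.getByTestId('jackpot-result');
  }
}

// --- journeys/spin-the-wheel.ts: user journey layer, outlines what a user does ---
import { type Page } from '@playwright/test';
import { JackpotPage } from '../pages/jackpot.page';

export async function spinTheWheel(page: Page): Promise<JackpotPage> {
  const jackpot = new JackpotPage(page);
  await jackpot.spinButton.click();
  return jackpot;
}

// --- tests/jackpot.spec.ts: test file describing behaviour (Given / When / Then) ---
import { test, expect } from '@playwright/test';
import { spinTheWheel } from '../journeys/spin-the-wheel';

test('spinning the wheel shows the prize the player actually won', async ({ page }) => {
  // Given a player on the jackpot game (assumes baseURL is set in playwright.config.ts)
  await page.goto('/games/jackpot');
  // When they spin the wheel
  const jackpot = await spinTheWheel(page);
  // Then the result banner displays their prize
  await expect(jackpot.resultBanner).toBeVisible();
});
```

The appeal of the split, as I understand it, is that each layer is a small, separate target for the LLM to generate against, while the test file itself stays readable as plain Given/When/Then behaviour.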
With that, the LLM built out the structure and initial tests in just 10 minutes. After that, Adrian took over—validating the output and refining as needed. The most interesting part? He wouldn’t actually use AI to build the tests. The real value, he said, is in the collaboration—working with the dev team to ensure testing doesn’t become another silo. Spot on. The future of SEITs isn’t just AI-enabled—it’s about understanding where the value is. Read my LinkedIn post
That concludes the fifth Linky. What do you think? Is this format appealing to you? Would you prefer more content like this or something different? Please share your feedback in the comments or reply to this email.