Linky #8 - Better Thinking, Better Prompts, Better Teams
From prompt engineering to team retros, this edition unpacks how we build quality into the way we think, talk, and work.
Welcome to the eighth edition of Linky (scroll to the bottom for past issues). In each issue, I highlight articles, essays, or books I've recently enjoyed, with some commentary on why they caught my attention and why I think they're worth your time.
This week's edition explores how we spot - or miss - signs of quality: from AI prompts to oil rig disasters, it covers where risk hides, how conversations shape culture, and why critical thinking still matters in a world increasingly mediated by machines.
Latest post from the Quality Engineering Newsletter
Quality engineering is not just a set of practices but something that can shape culture. It’s about how philosophy, frameworks, and tools work together to build quality into people, processes, and products.
If you’ve ever found yourself trying to explain quality engineering or how it relates to real impact, this one might help.
"Shift-left" is a great slogan and encapsulates a lot of what we're trying to do with Quality Engineering, but the problem with slogans like this is that we begin to assume everyone interprets them the same way. We're at a point now where we need to stop and check that we all understand. Otherwise, it risks becoming just another fad.
💥 Deepwater Horizon: Why “It’s Fine” Is Never Enough
At 9:45 PM CDT on April 20, 2010, the Deepwater Horizon—an ultra-deepwater drilling rig owned and operated by Transocean and contracted by BP—was drilling in the Macondo Prospect about 41 miles southeast of the Louisiana coast when a catastrophic blowout occurred.
The explosion and fire killed 11 crew members, injured 17 others, and led to the rig’s sinking two days later.
The same blowout triggered the largest accidental marine oil spill in history, resulting in an environmental disaster that lasted nearly three months, releasing an estimated 4.9 million barrels of oil into the Gulf.
But quality isn’t just about what you see; it’s about what people say. When no one is incentivised to speak up, issues stay hidden. If people don’t feel safe raising concerns, you won’t know where to build quality in until something breaks. And by then, it might be too late. “If it’s not broken, don’t fix it” is not a quality strategy. Via Jason Kunz on LinkedIn.
❓ Should Startups Hire QAs?
Callum Fullarton spoke with the CEO of a startup a few weeks back about his growth plans and headcount:
.... the key takeaway I took from the call was his opinion on hiring quality engineers and how he felt they cause more harm than good when you consider cost vs benefits/success that they bring to a team. He went on to talk about how if you’re hiring a strong developer/engineer with a quality mindset, the need for a specific quality engineer isn’t needed.
It’s a fair point. In the early “Pioneer” phase, product-market fit is everything. But if you succeed? You’ll need to scale. That’s when you enter the “Settler” and “Town Planner” phases - when quality engineering becomes critical to maintaining momentum, scaling safely, and building a long-term culture of quality. Learn more at Pioneers, Settlers, Town Planners – How Innovation Works
💬 Building Quality into Team Interactions
The quality of your conversations shapes the quality of your systems. This table of conversation types from John Cutler is a powerful tool for assessing how your team communicates and collaborates. Use it in your next retro to look at:
What kinds of conversations do you have the most?
Are they fit for the kind of work you’re doing?
Where do you want to be?
A useful lens to build quality into your people and your process.
📸 Via TBM 353: Conversation Quality and Scale
🧑💻 Every Day Is Training Day
Because every interaction you have with me is a small rehearsal - for how you interact with the rest of the world.
So be kind, be clear, and be thoughtful. Your relationship with AI may just reflect (and shape) your real-world communication patterns. Via I asked GPT to respond to Sam Altman’s recent comments | Stephen Klein
🎯 Knowing when to take risks and when to play it safe
From James Clear's 3-2-1 newsletter:
"In some areas of life, your reputation is defined by your wins.
- Creative pursuits. One bestseller or hit song can erase the memory of the ones that didn't work.
- Entrepreneurship. People rarely remember your failed business ideas, only your winning ones.

In areas like these, your mistakes fade away and the breakthroughs last. But in other areas of life, your reputation is defined by your losses.
- Crime. You can drive safely 364 days a year, but one DUI changes everything.
- Ethics and morals. One unethical decision can ruin a reputation and destroy trust.

In areas like these, your mistakes linger and consistency is rewarded.
Some areas of life reward your best day. Others punish your worst day. Know which situation you're in, and you can better decide when to be risky and when to play it safe."
This got me thinking: when are we in these situations at work? The first that came to mind was how you treat others. There, you always want to play it safe: your reputation sticks, and if people don’t like the way you treat them, they’ll tell others. But with projects (better yet, experiments), you can probably take risks, as the successes will stick in people’s minds, not the failures. Via 3-2-1: On how to handle idiots, pushing toward growth, and two types of choices in life - James Clear.
🤖 How to Fart Around with AI Effectively
From eleganthack.com via Emily Webber’s Awesome Folks newsletter:
Provide as much context to the AI as possible. Upload relevant documents—company mission, vision, OKRs, research. The more background AI has, the better its output. Use “Projects” in ChatGPT/Claude to save critical context.
Give AI a role. Tell it what kind of expert it is—a senior copywriter, a SaaS marketing strategist, a data scientist. This improves relevance.
Outline before generating. Have AI create an outline first, then refine it before it writes. Try it both ways—you’ll see the difference.
Ask AI to critique itself. It’s surprisingly good at pointing out flaws in its own responses.
Tweak the tone. If AI’s output sounds annoyingly perky or boringly generic, tell it to adjust. I often use “informal yet professional” as a tone guide.
Close-read everything. AI-generated text looks polished but can fall apart under scrutiny.
Ask for citations—and verify them. AI is better at sourcing than it was in the early days, but it still hallucinates. Always check citations.
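The first three tips above (context, role, outline-first) can be sketched as a small prompt-assembly helper. This is my own illustration, not code from the article: a plain Python function that composes the prompt string, with no actual model call, so the names (`build_prompt`, `context_docs`) are assumptions for the sake of the example.

```python
# A minimal sketch of the prompting tips above: give the model a role,
# supply background documents as context, set a tone, and ask for an
# outline before any prose. No API call is made; the returned string
# could be pasted into any chat model. All names are illustrative.

def build_prompt(role: str, context_docs: list[str], task: str,
                 tone: str = "informal yet professional") -> str:
    """Assemble a single prompt from role, context, task, and tone."""
    context = "\n\n".join(
        f"--- Context document ---\n{doc}" for doc in context_docs
    )
    return (
        f"You are {role}.\n\n"          # role: tells the model what expert it is
        f"{context}\n\n"                # context: mission, OKRs, research, etc.
        f"Task: {task}\n"
        f"Tone: {tone}\n"
        # Outline-first: ask for structure, refine it, then request the draft.
        "First produce an outline only. "
        "Wait for my feedback before writing the full draft."
    )

prompt = build_prompt(
    role="a senior copywriter",
    context_docs=["Company mission: build quality in, everywhere."],
    task="Draft a launch announcement for a quality engineering newsletter",
)
print(prompt)
```

The point of the sketch is that each tip maps to one explicit part of the prompt, which makes it easier to see which ingredient is missing when the output disappoints.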
This made me think: AI is most useful when you already know the task well. Experts know what good looks like. They can check and adapt. That might widen the gap between beginners and pros - and that's why critical thinking is going to be essential. Via AI Won’t Take Your Job—But Someone Who Uses It Better Than You Will – Eleganthack
🧠 A Beginner’s Guide to Critical Thinking
I originally wrote this for the Ministry of Testing’s critical thinking series:
Critical thinking is thinking about your thinking while you’re thinking in order to make your thinking better.
This doesn’t just happen with regular everyday thoughts, but when you purposely think with a goal in mind. Therefore, for me, critical thinking is all about building quality into your thinking.
It’s about reasoning with evidence, communicating clearly, listening actively, leading yourself, and reflecting with intention. These are the same capabilities we need to build quality into our work. Read the full post at A Beginner’s Guide To Critical Thinking | Ministry of Testing | By Jitesh Gosai
Which one resonated most with you this week? Hit reply or leave a comment - I’d love to hear your thoughts.