The 2025 Quality Engineering Index: 100 Insights
100 actionable quality engineering insights from 2025, grouped into 10 themes to help you diagnose problems and choose your next move.
If this is your first edition, Quality Engineering Newsletter (QEN) is where I try to make sense of what’s happening in quality engineering and why it matters for the teams building real products.
This year, I started a new QEN format called Linky. Every two weeks, I shared what I’d been reading and a short note on why it mattered for quality engineering.
Below are 100 actionable or diagnostic insights from those reads, grouped into 10 themes. Scan the themes, then click what matches the problem you’re dealing with right now.
If you enjoy this format, I post Linky every two weeks with new reads and practical quality engineering commentary. Subscribe to get the next one in your inbox.
Themes
1. Define quality and ownership
Define quality in stakeholder terms and make ownership explicit enough to act on.
2. Speed up feedback and testability
Design feedback loops and testability so teams learn earlier and ship with confidence.
3. Automation that pays off
Build automation that reduces risk and cost of change, instead of creating maintenance overhead.
4. Make better decisions under uncertainty
Turn uncertainty into small experiments, using constraints and heuristics to guide decisions.
5. Learn from incidents and build resilience
Surface hidden dependencies and redesign the conditions that make failures likely.
6. Build a quality culture people can actually work in
Make it normal to speak up through psychological safety, respectful challenge, inclusion, and deliberate collaboration.
7. Influence and communicate so quality work leads
Speak stakeholder language, align to outcomes, and build influence without authority.
8. Metrics and signals that guide
Pick decision-supporting signals, design incentives deliberately, and keep data anchored in context.
9. Use AI safely without losing judgement
Use constraints and verification so AI supports drafting and exploration without outsourcing judgement.
10. Make improvement stick through habits and practice
Use small steps, routines, and reflection loops to make progress sustainable.
100 Quality Engineering Insights
1) Define quality and ownership
Teams get stuck when “quality” means different things to different people, and ownership is implied rather than explicit. You notice it when conversations spiral into last-minute testing, rework, or blame.
Shared Ownership: Quality improves when responsibility is shared and explicit, rather than diluted behind slogans that leave nobody accountable. Link
Start With Values: Defining quality starts with stakeholder values, because even excellent engineering will disappoint when those values are misaligned. Link
Attribute Starting Point: Developer priorities for quality attributes provide a practical starting point for defining what quality means in a team. Link
Map Stakeholder Quality: Building quality in starts by mapping stakeholder quality attributes and treating quality as an engineering concern rather than a testing phase. Link
Quality Over Testing: Quality engineering focuses teams on improving quality outcomes, not increasing testing activity. Link
Confidence as Goal: Framing QA as building confidence shifts the conversation from counting bugs to reducing uncertainty. Link
Collaborative Testing Frame: Treating testing as confirming shippability supports collaboration and delivery flow more than adversarial bug hunting. Link
Tester Versus QE: Testing centres on risk discovery and information, while quality engineering centres on changing how teams build and sustain quality. Link
Capability Not Silo: Quality engineering works best as an organisational capability practised by teams, supported by specialists rather than gatekeepers. Link
Quality Is Changeability: A system’s quality is largely defined by how safely and confidently it can be changed. Link
2) Speed up feedback and testability
Most quality pain is feedback arriving too late. You notice it when defects are discovered in acceptance testing, regressions appear after release, or manual checks become the bottleneck.
Feedback Loops First: GenAI speed-ups require fast feedback and layered testing; otherwise manual testing becomes the constraint that limits throughput. Link
Testability Enables AI: Test-driven development and testability provide the fast feedback needed to use AI safely when outputs drift or ignore constraints. Link
Unit Tests Matter: Skipping unit tests might work temporarily, but it becomes increasingly risky as products scale and change accelerates. Link
Tests Make Change Safe: Small unit tests plus exploratory testing make change safer and reduce the long-term cost of low-quality code. Link
UI Test Layers: Separating visual, UI, and end-to-end tests clarifies purpose and enables smarter scheduling and pipeline gating. Link
Steel Threads First: Selective investment in quality along critical steel-thread journeys helps balance speed with durability. Link
Shadow Test: Shadow testing in production, comparing old and new behaviour on real data, can reveal correctness gaps that pre-release testing misses (see the sketch after this list). Link
Tests As Tools: Well-designed tests act as small, composable tools that interrogate specific behaviours and strengthen development feedback. Link
Map Feedback Loops: Quality engineering reduces uncertainty by deliberately designing feedback loops across code, systems, and product decisions. Link
Feedback Rings: Visualising feedback loops helps teams choose the right signals to reduce uncertainty and guide quality decisions. Link
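To make the shadow-testing idea above concrete, here is a minimal sketch in Python. Everything in it is illustrative: the primary and shadow callables stand in for your old and new code paths. The point is that users only ever get the old behaviour, while divergences on real traffic are logged for offline analysis.

```python
import logging

logger = logging.getLogger("shadow")

def handle_with_shadow(request, primary, shadow):
    """Serve the primary (old) path; run the shadow (new) path for comparison only."""
    primary_result = primary(request)
    try:
        shadow_result = shadow(request)
        if shadow_result != primary_result:
            # Record the divergence for offline analysis; never fail the live request.
            logger.warning("shadow mismatch for %r: %r != %r",
                           request, primary_result, shadow_result)
    except Exception:
        # A crashing shadow path is itself a correctness signal worth capturing.
        logger.exception("shadow path raised for %r", request)
    return primary_result
```

In a real system you would sample traffic and compare asynchronously rather than inline, but the shape is the same: the old behaviour stays authoritative until the evidence says otherwise.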
3) Automation that pays off
Automation fails when it grows without intent, strategy, or ownership. You notice it when tests are flaky, slow, hard to interpret, or expensive to change.
Speed Needs Strategy: AI tooling can accelerate test framework creation, but teams still need clear strategy, maintainability, and ownership to keep automation healthy. Link
Avoid Test Bloat: AI-generated tests can create false confidence and raise the cost of change through trivial, brittle, and implementation-coupled checks (see the sketch after this list). Link
Agentic Testing Risks: As AI-assisted development accelerates output, teams need deliberate strategies for safe automation and controlling flaky tests. Link
Test Outcomes: AI-driven UI automation works best when tests express user outcomes, while teams still design checks for failures that do not block the happy path. Link
Enable Automation Capability: Test automation scales when teams build stable delivery foundations and enable others, rather than relying on a separate automation factory. Link
Make Systems Testable: Building reliable automation agents requires understanding system context and sometimes reshaping the product to be easier to test. Link
Start Small Testing: Starting with lightweight, pragmatic testing can unlock momentum towards stronger practices like TDD when teams would otherwise avoid testing entirely. Link
Continuous Refactoring: Agentic automation can help keep codebases tidy by refactoring and reducing cruft as systems evolve. Link
Risk Limits Automation: Knowledge automation scales information, but risk and coordination still demand skilled judgement in real-world systems. Link
No Straight Swap: AI-based testing can accelerate certain tasks, but it does not replace the need for clear testing intent, risk thinking, and human judgement. Link
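A small sketch of the implementation-coupling point above, using a hypothetical apply_discount function. Both tests pass today, but the first is pinned to an internal helper and breaks on any refactor, which is exactly the brittleness that raises the cost of change; the second asserts the outcome users actually see.

```python
from unittest import mock

def _lookup_rate(customer):              # internal helper, free to change
    return {"vip": 0.10}.get(customer, 0.0)

def apply_discount(price, customer):     # the behaviour users depend on
    return price * (1 - _lookup_rate(customer))

# Implementation-coupled: pins the test to an internal helper, so renaming
# or inlining _lookup_rate fails the test even though prices are unchanged.
def test_apply_discount_calls_lookup_rate():
    with mock.patch(f"{__name__}._lookup_rate", return_value=0.10) as helper:
        apply_discount(100, "vip")
        helper.assert_called_once_with("vip")

# Outcome-focused: asserts what the customer sees, and survives refactors.
def test_vip_customers_get_ten_percent_off():
    assert apply_discount(100, "vip") == 90.0
```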
4) Make better decisions under uncertainty
Teams rarely lack opinions, but they do lack decision hygiene under uncertainty. You notice it when “let’s just…” fixes repeat, plans continue despite rising risk, or confidence is based on vibes rather than evidence.
Reframe Uncertainty: Reframing uncertainty and taking small practical steps to reduce fear can help engineers build confidence and make better quality decisions. Link
Decision Tools: Improving quality requires better decision-making tools, not just more automation. Link
Experiment To Learn: Experiments turn uncertain decisions into learning cycles, so teams can adjust quickly and judge outcomes based on evidence. Link
Use Heuristics: Simple heuristics help teams move forward under uncertainty, as long as they revisit them when new information emerges. Link
Understand Before Judging: Better quality decisions come from understanding the system before judging it, including being willing to say there is not enough information yet. Link
Think Out Loud: Writing out reasoning step by step improves problem solving and helps teams catch flawed assumptions before they ship. Link
Write to Understand: Writing forces clarity about what is and is not understood, which makes improvement work more deliberate. Link
Find The Constraint: Improving flow starts with identifying the current constraint in the value stream and optimising around it before tackling the next bottleneck. Link
Stop The Plan: Teams should watch for plan continuation bias and create safe ways to stop or pivot when evidence shows rising risk or low value. Link
Respect Constraints: Technology strategy fails when platforms are forced beyond their design constraints, so quality decisions must respect architectural reality. Link
5) Learn from incidents and build resilience
Incidents are more likely to recur when reviews chase certainty instead of learning. You notice it when “human error” becomes the explanation, root causes are too neat, or the same class of failure returns in a new form.
Systems Over Blame: Incidents rarely have a single root cause, so focus on understanding the whole system and its conditions. Link
Beyond Root Causes: For incidents involving people, blameless learning and sensemaking beat root-cause certainty and quick fixes. Link
Context Over Blame: Incident analysis should focus on how system design and context shaped actions rather than labelling outcomes as human error. Link
Beyond Human Error: Post-incident reviews should treat ‘human error’ as a prompt to redesign systems and working conditions, not as a final cause. Link
Hidden Dependencies: Resilience work must surface hidden dependencies and rehearse failure because complex cloud systems are hard to test end-to-end. Link
Study Past Incidents: Studying past incidents builds shared understanding and reduces the chances of repeating the same failures in new forms. Link
Map Causal Systems: Complex incidents need system-mapping approaches that explore interacting causes rather than linear ‘5 Whys’ stories. Link
Systemic Outage Lessons: Large outages often come from organisational and process breakdowns that turn small defects into system-wide failure. Link
Omission Matters: Quality declines from both active mistakes and missed actions, so teams must look for errors of omission as well as commission. Link
Design for Human Error: Human error is inevitable in socio-technical systems, so design work and controls to anticipate mistakes rather than blame individuals. Link
6) Build a quality culture people can actually work in
Quality work stalls when people do not feel safe to raise concerns or challenge decisions. You notice it when risks are shared privately rather than publicly, or when candour gives way to conflict avoidance.
Safety Speeds Learning: Psychological safety enables faster issue discovery and correction, which is a core driver of high-performing quality outcomes. Link
Speak Up Early: Quality risks stay hidden when people do not feel safe to speak up, so psychological safety is a core quality control. Link
Safe to Dissent: Psychological safety is the ability to speak up in discomfort, including dissent and disagreement, without fear. Link
Candour With Respect: Effective feedback pairs candour with respect, so people can hear the truth without losing psychological safety. Link
Respectful Challenge: Effective quality challenge combines directness with care, so tough feedback lands as collaboration rather than conflict. Link
Design For Inclusion: Psychological safety increases when teams design working practices that reduce friction for neurodivergent colleagues. Link
Assume Good Intent: Extending trust early and assuming good intent helps quality engineers build the relationships needed to surface issues and collaborate on fixes. Link
Collaborate Deliberately: Choosing the right level of collaboration reduces friction and increases the chance that quality practices stick. Link
Culture Shapes Rituals: Rituals only work when they reflect the culture a team wants and reinforce the behaviours that create quality. Link
Normalise Not Knowing: Normalising “I don’t know” and “I was wrong” makes it easier to surface problems early and improve quality quickly. Link
7) Influence and communicate so quality work leads
Good quality ideas stall when they are framed as extra work instead of as enablers of outcomes. You notice it when stakeholders nod and then do nothing, or when quality is only discussed after failures.
Persuasion by Alignment: Persuasion works best when quality improvements are framed as supporting other teams’ goals rather than adding work. Link
Speak Their Outcomes: Quality messages land when framed in the outcomes each stakeholder cares about, such as revenue protection, productivity, or developer ease. Link
Translate For Decisions: Quality engineering succeeds when risks and trade-offs are explained in stakeholder language that supports real decisions. Link
Frame The Work: How work is framed shapes care and effort, so connecting tasks to meaningful quality outcomes reduces half-baked delivery. Link
Practical Transparency: Transparent leadership strengthens quality culture when it shares the right information with the right intent, without pretending openness means sharing everything. Link
Good Office Politics: Building quality often requires positive organisational politics such as relationship-building and stakeholder alignment. Link
Build Influence: Quality engineers increase influence by speaking up, teaching, supporting others, and building allies through visible strategic work. Link
Talk It Through: Open discussions on quality engineering help teams align on roles, trade-offs, and practical ways to build quality in. Link
Conversation Quality: Improving the quality of team conversations is a direct lever for improving the quality of the systems they build. Link
Check Engagement Assumptions: Perceived disengagement can be misread, so teams should test assumptions about how people participate and learn. Link
8) Metrics and signals that guide
Metrics often create the behaviour they measure, including unhelpful behaviour. You notice it when teams chase numbers, optimise locally, or treat dashboards as reality.
Watch Leading Signals: Quality culture change feels slow because results lag, so tracking leading indicators helps teams stay motivated and adjust early. Link
Metrics Create Games: Every metric creates incentives, so quality measures should be designed with likely behavioural side effects in mind. Link
Metrics Shape Systems: Useful metrics connect system signals to business outcomes and help create conditions where quality is the likely result. Link
Measure Environment: Team measurement should account for organisational context and prioritise improving environmental factors over ranking individuals. Link
Data Needs Context: Data improves decisions only when teams understand what their measures can and cannot represent instead of treating numbers as reality. Link
Define Healthy Signals: Even with self-testing systems, teams still need to define what healthy looks like and interpret signals in context. Link
Diagnose Environment: Balanced engineering metrics frameworks help leaders diagnose whether the environment is enabling sustainable delivery or drying into a desert. Link
Listen For Signals: Aggregating community feedback and usage patterns helps reveal what quality engineers value and where capability gaps sit. Link
Bugs Cost Money: Quality failures can have direct financial and legal consequences, so teams must treat user-facing defects as business risks. Link
Paper Cuts Compound: Minor defects compound into user frustration and churn, so paper cuts deserve attention before they become reputation damage. Link
9) Use AI safely without losing judgement
AI can accelerate output while quietly increasing risk if judgement is outsourced. You notice it when outputs sound convincing but are wrong, tests multiply without intent, or teams stop thinking in systems.
Augment Not Replace: LLMs augment quality work when used as collaborators, but engineers still need domain knowledge to evaluate outputs and apply judgement. Link
Protect Thinking: As AI speeds up work, teams must protect critical thinking and use domain expertise to validate outputs rather than outsourcing judgement. Link
Verify With Expertise: AI delivers the most value when used in domains where expertise exists to verify results and spot weak reasoning. Link
Verify Model Reasoning: LLM explanations can sound convincing while being wrong, so outputs should be validated with evidence and tests. Link
Draft, Then Verify: LLMs are most useful as drafting assistants, but quality work requires verification because persuasive output can blur fact and fiction (see the sketch after this list). Link
Context Before Prompts: Clear context, boundaries, and shared testing intent help teams get useful LLM support without cargo-culting practices. Link
Constrain Clearly: When guiding AI tools, pair constraints with acceptable alternatives to avoid brittle interpretations and poor outputs. Link
Coach the AI: Treating AI like a teammate by asking what it needs improves outcomes because it creates a feedback-driven loop rather than one-way prompting. Link
AI Persuasion Risk: Because LLM outputs can be tuned to persuade, quality decisions supported by AI need safeguards against bias and introduced inaccuracies. Link
Guard Alignment: When teams fine-tune LLMs for quality tasks, they must treat model behaviour as fragile and guard against misalignment and abuse. Link
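The draft-then-verify pattern above is easy to wire into a loop. A minimal sketch, assuming hypothetical generate_patch, apply_patch, and revert_patch callables for whatever drafting tool and repo workflow you use: the model proposes, but the test suite decides.

```python
import subprocess

def draft_then_verify(generate_patch, apply_patch, revert_patch,
                      test_command=("pytest", "-q"), max_attempts=3):
    """Accept an AI-drafted change only when the checks agree.

    The three callables are placeholders: the model proposes a patch, we
    apply it to a working copy, and we roll it back if verification fails.
    """
    for attempt in range(max_attempts):
        patch = generate_patch(attempt)           # LLM drafts a candidate change
        apply_patch(patch)                        # working copy, never trunk
        result = subprocess.run(list(test_command),
                                capture_output=True, text=True)
        if result.returncode == 0:                # evidence, not eloquence
            return patch
        revert_patch(patch)                       # discard the unverified draft
    return None                                   # out of attempts: escalate to a human
```

The suite only catches what it checks, so this gates against regressions, not against every kind of wrongness; human review still owns the judgement call.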
10) Make improvement stick through habits and practice
Most improvement fails because it is treated as a one-off initiative instead of a practice. You notice it when new approaches fade after a few weeks, or when learning does not turn into changed behaviour.
Work Smart Loops: Teams improve quality faster when they iterate between effort and reflection to learn how to work smarter, not only harder. Link
Habits Compound: Small, repeatable improvements compound into better quality outcomes when teams make good habits easier to remember and practise. Link
Protect Deep Work: Protecting important-but-not-urgent time enables the deep thinking needed to prevent quality problems before they become urgent. Link
Smallest Next Step: Starting with the smallest workable improvement makes quality practices more likely to stick and spread. Link
Question Evidence Develop: The QED cycle turns quality improvement into low-risk experiments grounded in questions and measurable evidence. Link
Study to Improve: Using PDSA emphasises studying work as it happens so process improvements are evidence-led rather than assumption-led. Link
Daily Improvement Lens: Small daily improvements compound in complex systems when they are aligned to a clear direction. Link
Always Practise: Treating quality work as ongoing practice encourages continuous improvement without chasing a finish line. Link
Enjoy The Process: Sustained quality work needs habits and rituals that make the process enjoyable and replenish energy. Link
Slow Builds Speed: Sustainable speed comes from investing in detail and foundations early so teams can move fast later without recklessness. Link
If this index helped, you’ll probably like Linky. Every two weeks I share what I’m reading and the quality engineering angle, plus occasional deeper posts on building quality into systems and teams. Subscribe if you want the next edition in your inbox.
In the new year: five Quality Engineering challenges from 2025, and practical 2026 moves.


