Three Quality Engineering challenges of 2025, and what to do about them in 2026
Three recurring problems, three practical approaches each, and a tiny experiment to kick things off
If this is your first time reading, QEN exists to help people make sense of quality engineering beyond slogans, silver bullets, and surface-level debates.
At the start of a new year, it’s tempting to wipe the slate clean - new year, new you. The problem is that new initiatives start with the best intentions, then fizzle out as delivery pressure kicks in.
This year I wanted to give teams something grounded in what we learned last year: why each challenge matters, three practical approaches to keep things moving, and a tiny low-effort experiment to kick things off.
Each of these challenges is grounded in my 2025 Quality Engineering Index, with links back to it so you can dig deeper for more ideas.
I hope these can be the nudges we need to shift our socio-technical systems toward better quality, rather than the status quo of moving fast and hoping for the best.
Challenge 1) Quality is still a slogan, not a shared understanding
What showed up in 2025: Many teams still do not share a definition of quality, so it ends up meaning different things to different people. The result is that ownership stays implied and quality becomes an afterthought rather than something built in from the start. If this is your pain, see the theme: Define quality and ownership.
How to address it in 2026:
Technical approach: Agree 3-5 quality attributes for one product area and translate them into engineering focus areas such as testability, resilience, observability, and performance. (See insights 2 - Start with Values and 4 - Map Stakeholder Quality)
Social approach: Make ownership explicit - who decides, who builds, who validates, and who monitors - so that “shared ownership” means everyone’s responsibility rather than no one’s. (See insights 1 - Shared Ownership and 9 - Capability Not Silo)
Measurement approach: Define what good looks like for those attributes, including the signal you will watch and what would trigger a rollback. (See insights 6 - Confidence as a Goal and 10 - Quality is Changeability)
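If it helps to make that measurement approach concrete, here is a minimal sketch of “what good looks like” as a checked-in definition. Everything here - the attribute names, signals, thresholds, and owners - is illustrative, not prescriptive; agree your own with the team:

```python
# A minimal sketch of "what good looks like" as a checked-in definition.
# Attribute names, signals, thresholds, and owners are illustrative only.

QUALITY_ATTRIBUTES = {
    "performance": {
        "signal": "p95 checkout latency (ms), from production dashboards",
        "good_looks_like": lambda p95_ms: p95_ms < 800,
        "rollback_trigger": lambda p95_ms: p95_ms > 1500,
        "owners": {"decides": "product", "builds": "team",
                   "validates": "team", "monitors": "on-call"},
    },
    "resilience": {
        "signal": "checkout error rate (%) over a 15-minute window",
        "good_looks_like": lambda err_pct: err_pct < 0.5,
        "rollback_trigger": lambda err_pct: err_pct > 2.0,
        "owners": {"decides": "team lead", "builds": "team",
                   "validates": "team", "monitors": "on-call"},
    },
}

def should_roll_back(attribute: str, observed_value: float) -> bool:
    """Return True if the observed signal crosses the agreed rollback trigger."""
    return QUALITY_ATTRIBUTES[attribute]["rollback_trigger"](observed_value)

# e.g. should_roll_back("performance", 1700.0) -> True
```

The point is not the Python - it is that the signal, the threshold, and the owner live somewhere everyone can see, not in someone’s head.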
Tiny experiment:
“Can we spend 30 mins this week agreeing on the three quality attributes that matter most for [your product] right now, and writing down who decides, who builds, who validates, and who monitors for each?”
Challenge 2) Output is accelerating, but feedback is not
What showed up in 2025: Under the constant pressure of better, faster, cheaper, we’re delivering more and more, but many teams still lack fast, reliable feedback, so they fall back to manual checking, late discovery, and long hours to make the date. If this is your pain, see the theme Speed up feedback and testability.
How to address it in 2026:
Technical approach: Pick 2-3 critical journeys and build steel-thread feedback paths end-to-end, including fast local checks integrated into CI and safe in-production learning where appropriate. (See insights 16 - Steel Threads and 19 - Map Feedback Loops)
Social approach: Agree what “shippable” means for that journey up front, so late-stage testing is not the default negotiation point. That way, when an issue is found, you know immediately whether you’re shipping or not. (See insights 19 - Map Feedback Loops and 20 - Feedback Rings)
Measurement approach: Map the feedback loop and track one lead-time metric for learning (time from code change to reliable in-production signal), not just time to deploy. (See insights 11 - Feedback Loops First and 20 - Feedback Rings)
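As a sketch of what that one metric could look like in practice - assuming you can pull a merge timestamp from your VCS and the time of the first production signal the team would actually act on from your observability stack (the records below are made up for illustration):

```python
# A minimal sketch of "lead time for learning": time from a code change
# landing to the first trusted in-production signal for it. Where the
# timestamps come from is up to you; these records are illustrative.
from datetime import datetime, timedelta
from statistics import median

def learning_lead_times(changes: list[dict]) -> list[timedelta]:
    """Each change records when it merged and when the team first saw a
    production signal it would actually act on."""
    return [c["first_trusted_signal_at"] - c["merged_at"] for c in changes]

changes = [
    {"merged_at": datetime(2026, 1, 5, 10, 0),
     "first_trusted_signal_at": datetime(2026, 1, 5, 16, 30)},
    {"merged_at": datetime(2026, 1, 6, 9, 15),
     "first_trusted_signal_at": datetime(2026, 1, 8, 11, 0)},
]

print("median lead time for learning:", median(learning_lead_times(changes)))
```

Even a spreadsheet version of this works; what matters is that you are timing the learning, not just the deploy.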
Tiny experiment:
“In planning today, can we pick one key user journey and answer:
Where do we first learn that a change broke the journey (the first three places, e.g. local dev checks, CI pipeline, post-deploy signals)?
Which of those takes the longest to give us a trusted signal (one we’d act on)?
Then let’s pick one small change this week to make that signal faster.”
Challenge 3) Automation is growing, but the payoff is slowing
What showed up in 2025: Test automation bloat is still a real problem: suites grow beyond the point where they actually help and become a new burden to manage and maintain. Without strategy and ownership, that means flakiness, slow pipelines, and high maintenance costs. If this is your pain, see the theme Automation that pays off.
How to address it in 2026:
Technical approach: Rewrite or replace one brittle test with an outcome-focused test at the lowest level possible (unit/API over UI); see the sketch after this list. (See insights 24 - Test Outcomes and 21 - Speed Needs Strategy).
Social approach: Assign explicit ownership for suite health within the delivery team (not outsourced), and make triaging flakiness, limiting bloat, and keeping test intent clear part of everyday work - all with a view to treating tests as a product. (See insights 25 - Enable Automation Capability and 21 - Speed Needs Strategy).
Measurement approach: Track test suite health signals such as flakiness rate, run time, and triage time, and set thresholds that trigger cleanup work. (See insight 22 - Avoid Test Bloat).
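To show what “outcome-focused at a lower level” means, here is a hedged pytest sketch. The `place_order` function and its behaviour are hypothetical stand-ins - substitute your own domain logic - but the shape of the assertion is the point: we check the outcome the user cares about, not the page structure a UI test would have coupled itself to:

```python
# A sketch of an outcome-focused test at the unit/API level, replacing a
# brittle UI test that asserted on selectors and banners. `place_order`
# is a hypothetical stand-in for the code under test. Run with pytest.

def place_order(items: list[dict], stock: dict[str, int]) -> dict:
    """Hypothetical stand-in for the code under test."""
    for item in items:
        if stock.get(item["sku"], 0) < item["qty"]:
            return {"status": "rejected", "reason": "insufficient_stock"}
    return {"status": "accepted",
            "total": sum(i["price"] * i["qty"] for i in items)}

def test_order_rejected_when_stock_insufficient():
    # The outcome we care about: rejected, for the right reason --
    # not which error banner the UI rendered.
    result = place_order([{"sku": "ABC", "qty": 3, "price": 10.0}],
                         stock={"ABC": 1})
    assert result["status"] == "rejected"
    assert result["reason"] == "insufficient_stock"

def test_order_accepted_and_totalled():
    result = place_order([{"sku": "ABC", "qty": 2, "price": 10.0}],
                         stock={"ABC": 5})
    assert result == {"status": "accepted", "total": 20.0}
```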
Tiny experiment:
“Can we pick our single flakiest automated test, quarantine it, replace it with one outcome-based check at a better layer, and track whether pipeline time and rerun counts improve?”
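If you run this experiment, the “track whether it improves” step needs a number. Here is a minimal sketch of one way to compute a flakiness rate from run history - the record format and the 5% threshold are illustrative, and a pass that needed a rerun is used as the flake signature:

```python
# A minimal sketch of a flakiness-rate signal: flag any test whose
# rerun-to-pass rate crosses an agreed threshold for quarantine/cleanup.
# The run-history format and the 5% threshold are illustrative.
from collections import defaultdict

FLAKINESS_THRESHOLD = 0.05  # agree your own threshold with the team

def flakiness_rates(runs: list[dict]) -> dict[str, float]:
    """runs: one record per test execution, e.g.
    {"test": "test_checkout", "passed": True, "was_rerun": False}"""
    totals, flaky = defaultdict(int), defaultdict(int)
    for r in runs:
        totals[r["test"]] += 1
        # A pass that needed a rerun is the classic flake signature.
        if r["passed"] and r["was_rerun"]:
            flaky[r["test"]] += 1
    return {t: flaky[t] / totals[t] for t in totals}

runs = [
    {"test": "test_checkout", "passed": True, "was_rerun": True},
    {"test": "test_checkout", "passed": True, "was_rerun": False},
    {"test": "test_search", "passed": True, "was_rerun": False},
]

for test, rate in flakiness_rates(runs).items():
    if rate > FLAKINESS_THRESHOLD:
        print(f"{test}: flakiness {rate:.0%} -- quarantine and schedule cleanup")
```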
Close
There are many more challenges than the three I’ve highlighted, but these are the ones I see repeatedly. If you do not make time for quality, it will make time for you by slowing delivery, increasing rework, and eroding stakeholder trust.
Pick one challenge, choose one approach, run one tiny experiment, and you will create the conditions for quality to emerge by default more often than not.
Which challenge is the biggest for your team, and what experiment will you run in January? Reply to this email or let me know in the comments.


