The Testing Unknown-Unknowns
Testing is a feedback tool for reducing uncertainty. By categorising tests with the testing unknown-unknowns matrix, teams can better understand system behaviour and address gaps in their knowledge.
Testing is often reduced to binary categories such as manual versus automated or pass versus fail. However, this framing misses a more profound purpose: reducing our uncertainty about system behaviour. Shifting from this binary view toward seeing testing as feedback changes how we think about quality and risk within software systems.
So, how can we look at testing as feedback to reduce our uncertainty about how our systems behave?
The testing unknown-unknowns matrix
Taking inspiration from then United States Secretary of Defense Donald Rumsfeld's 2002 statement that "there are unknown unknowns", we can classify testing and other feedback techniques into four main groups:
Known-knowns
Known-unknowns
Unknown-knowns
Unknown-unknowns
Each quadrant of the matrix requires different feedback techniques, shifting the focus from confirming existing knowledge to discovering new insights.
Known-knowns
Known-knowns are things we know for a fact about the behaviour of our systems. They are behaviours we are highly certain about and that can be confirmed by testing regularly and repeatedly.
Feedback methods
Any form of automated code testing
For example, unit testing, contract testing, tests created through TDD, automated UI testing, etc., are all based on what you want the test to confirm (a minimal sketch follows at the end of this section).
Most types of scripted testing
For example, scripted feature testing usually encourages people running the script to stick to the steps described.
Any kind of alerting system
Usually provided by whatever live monitoring tools you use, but alerts are all based on what you want to be told about the system. You can’t be alerted to things you didn’t know could happen.
The feedback techniques typically used here are all based on what we already know and want to confirm about our systems. They usually follow the "happy path" (using the system as intended) and are pretty poor at telling you about anything that deviates from the intent of the test or alert, unless it somehow affects the path of that test or alert.
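To make that concrete, here is a minimal sketch in Python of what a known-known check looks like. The calculate_vat function and its 20% rate are hypothetical; the point is that the test encodes behaviour we are already certain about, so it can only confirm (or flag a regression in) that expectation.

def calculate_vat(net_amount: float, rate: float = 0.20) -> float:
    """Hypothetical domain function: add VAT at a rate we already know."""
    return round(net_amount * (1 + rate), 2)

def test_vat_is_added_at_the_known_rate():
    # We already "know" the answer; running the test simply re-confirms it.
    assert calculate_vat(100.00) == 120.00

def test_zero_amount_has_zero_vat():
    # The test will never reveal behaviour we didn't think to assert on.
    assert calculate_vat(0.00) == 0.00

Alert rules behave the same way: they only fire for the conditions someone has already thought to define.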
Unknown-knowns
Unknown-knowns are areas of your system that are being used in ways you did not intend. These are not necessarily issues; people are simply using your software to achieve goals other than the ones you originally intended. These use cases could be features you might want other users to be aware of to increase usage, or even behaviours you might want to stop, as they could leave your organisation liable for your users' actions.
Feedback methods
Observability tools that can help you explore how the system is being used by your users in real time (see the sketch at the end of this section).
User feedback - through official and unofficial channels, e.g. social media or news reports.
End user interviews - proactively speaking to your users and asking how your system helps them achieve their jobs to be done.
To name a few.
The feedback techniques typically used here are open-ended, and you can't predict the response or outcome until the event or action occurs.
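As a rough illustration of why this feedback is open-ended, here is a minimal sketch (using Python's standard logging module; the event name and fields are hypothetical) of emitting a wide, structured usage event. Because every field can be queried later, you can ask questions of the data you never planned for and spot use cases you did not design, which is what surfaces unknown-knowns.

import json
import logging

logger = logging.getLogger("usage")

def record_report_export(user_id: str, row_count: int, file_format: str, schedule: str) -> None:
    # Emit a wide, structured event rather than a narrow pass/fail signal.
    # Slicing these events later might reveal, say, users scheduling hourly
    # CSV exports as a makeshift integration API - a use case nobody intended.
    logger.info(json.dumps({
        "event": "report_exported",
        "user_id": user_id,
        "row_count": row_count,
        "file_format": file_format,
        "schedule": schedule,
    }))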
Known-unknowns
Known-unknowns are areas of our systems whose behaviour we are uncertain about. We think these areas could have issues, but we will not know until we exercise that behaviour.
Feedback methods
Some types of scripted testing
For example, regression and smoke testing are usually based on what you know to be true about the system's behaviour, and you want proof that the behaviour has not regressed.
Risk-based testing typically focuses manual testing on the areas of the system that have changed. This type of testing can be scripted or more open-ended and exploratory (see exploratory testing below).
Exploratory testing is an open-ended exploration of your systems based on your tester’s knowledge and domain experience. It is often quicker at finding issues within software systems than scripted testing, as less preparation is needed to design scripts.
Phased system rollouts, such as feature flags, blue/green deploys, canary releases, etc., allow teams to slowly roll out new features to users and check that they operate as intended (a minimal sketch follows at the end of this section).
Monitoring tools can provide information on the behaviour of the live system and are often used in conjunction with phased rollout techniques.
Any type of experimentation.
To name a few.
The feedback techniques for known-unknowns cover scenarios we can predict but whose outcomes we are unsure of. In most cases, they tend to confirm what we assumed, but there are times when they detect new issues with the system's behaviour.
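To illustrate the phased-rollout idea, here is a minimal sketch of a percentage-based feature flag in plain Python. The flag name, rollout percentage, and hashing scheme are hypothetical rather than any particular feature-flag product's API; the point is that an uncertain change is exercised by a small, stable slice of users first, so feedback arrives before everyone is exposed to it.

import hashlib

def is_enabled(flag_name: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into or out of a partial rollout."""
    # Hash the flag and user together so each user lands in a stable
    # bucket from 0 to 99 for this flag.
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_percent

# Expose the uncertain change to roughly 5% of users, watch the monitoring
# dashboards, then widen the rollout as confidence grows.
if is_enabled("new-checkout-flow", user_id="user-42", rollout_percent=5):
    ...  # serve the new behaviour
else:
    ...  # serve the existing, well-understood behaviour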
Unknown-unknowns
The unknown-unknowns are the latent issues within our systems: problems we didn't even know could be problems, or that only manifest when conditions line up in just the right way.
Feedback methods
Observability - tools that allow teams to explore how the live system behaves.
User feedback - through official and unofficial channels, e.g. social media or news reports.
Exploratory testing - see known-unknowns.
Defects found in production.
To name a few.
The feedback techniques for unknown-unknowns often look quite similar to those for known-unknowns. The critical difference is that their varied nature allows the person conducting the testing to use their judgment and explore different avenues within the system as they interact with it, whereas known-unknown techniques usually focus on a predetermined test outcome.
How I use the testing unknown-unknowns
I like visualising the model not as a 2x2 matrix but linearly, with just three categories: Known-knowns, Known-unknowns, and Unknown-unknowns. I tend to leave off Unknown-knowns to keep things simple; some teams also fold them into the Unknown-unknowns.
When you visualise the model linearly, you can see how each category relates to the others. I find the linear visual helps you connect how feedback moves from the unknown (uncertain) spaces to the known (certain) ones, and also why you need multiple forms of feedback to uncover different information about your system. Relying on only a few types of feedback (usually automated tests and customer feedback) is likely to expose you to a greater risk of quality issues slipping out, leaving your engineering team completely unaware of a problem until someone reports it or, worse, until it is exploited by bad actors.
In summary
Testing unknown-unknowns helps people see that testing is about more than confirming whether something passes or fails; it is one of many feedback techniques. Feedback helps reduce uncertainty about the behaviour of your software systems, and framing testing as uncertainty reduction gets people thinking about what type of uncertainty each technique reduces and why that matters.
Categorising uncertainty into the four quadrants, from known-knowns through to unknown-unknowns, helps people see that they will have gaps in their knowledge about their software system. Each quadrant is a nudge for teams to think about other ways to increase their awareness of their systems and whether they are taking full advantage of what they already have available. Testing unknown-unknowns helps teams adopt practices that make their system of work healthier, producing more of the outcomes they want and fewer of the ones they don't.
Another aspect of testing unknown-unknowns is that it can also help people see what can be automated and, more importantly, what can't, but I'll save that for a future post.