In what is my first blog post for quite a while I’m going to look at the notion that “testers prevent defects”. I see this claim made by non-testers talking about testing (yes “agile” I’m looking at you as well as your coaches), professional testers and test consultancies. It must be incredibly enticing to issue claims that as a tester you prevent an unwanted outcome. That’s powerful, right? As a marketing tool, either for a company or a person, it’s a bold selling point.
The “prevent” claim raises a significant question for me. Is the statement credible and representative of what testing and testers can provide?
Let’s start by looking at the meaning of “prevent”.
From the Collins English Dictionary:
To prevent something means to ensure that it does not happen.
And from the Merriam-Webster Dictionary:
To keep from happening or existing
The use of “ensure” (to make sure or certain) within Collins’ definition is interesting. If you care to look at other dictionaries you’ll find that “prevent” has definitions consistent with the selections above.
Broadly speaking, software defects have two states when we consider observation:
1. they exist and have been observed
2. they exist but have not been observed
Within state 1 we know there is an issue because it has been observed. We have a record of it happening. Perhaps we have identified the specific conditions required to reproduce the problem and are able to analyse the issue. We might even agree that the outcomes are undesirable (a threat to value) and fix the defect. Of course we might also make a decision to not make any changes (a different topic).
State 2 is the “great unknown”. Issues are sitting in the product just waiting for somebody to stumble across them. To the extent that they remain “hidden” and do not threaten value, these are often ignored. Until they change into state 1.
For the purposes of this discussion let’s move on from state 1. Clearly there was no prevention because the problem has been observed (either pre- or post-release).
Before I venture further let’s consider a few places we might observe issues within software development while wearing a testing hat:
- Documentation – specifications, help guides, product claims
- Discussion – ideas, thoughts, queries about the software, specific to a set of changes or the product more generally
- Software – investigation of the product either in part or whole
As a tester, I:
- engage with issues by helping to solve them with other people. The issue might be that we need new, additional functionality to keep customers happy or that some part of current functionality is not working in desirable ways (ways that threaten value)
- provide evidence-based observations of what I have done during testing. What I have observed, my view of risks in the software. I’m likely to comment on things such as (but not limited to) ease of use, consistency in the application, issues I have found and how much of a threat they might be. My communications around testing can cover a lot of different considerations. Key to these observations, the “consistent thread”, is that I can back up my observations with evidence. If I’m asked to provide details related to my testing my response will not be “just feels like it” or similar. It will be backed by specific evidence.
If I claim that I “prevent issues”, how do I provide evidence that I prevented a thing that never existed? If my Manager (or anybody else) asks me to evidence the “issues I prevented” how would I do this? At best I could point to a trend of declining issues in production (which is an excellent outcome) but correlation does not imply causation. I get that it’s nice to think in this way but I actually want to see the link because that’s important feedback in improvement loops. How do you know you are preventing anything? Even small software companies have a myriad of changes happening in parallel. So which ones are working well? That’s a matter of evidence linking changes to outcomes. Good luck with that when you have no evidence (remember that the issue never existed).
It seems to me that a re-frame is in order. Let’s do that by visiting those places I listed earlier where we might find issues: documentation, discussion, software.
You’re reading through a specification and you find an error in a statement regarding functionality. To fix this you consult with the specification author and a change that corrects the problem is added. Cool, you prevented an issue… except you didn’t. What you did was find something in the document that did not make sense to you. You detected a signal that there might be an issue here. When you discuss this with the document author, and they concur, they will update the document to add clarity. But, and this is important, they may not agree and the document might not be changed. Regardless of whether the change is made, this is early detection, not prevention.
You’re in a project group discussion. The basic information flows of the project are being mapped out, along with how data will be entered and interacted with by your customers. You notice a large inconsistency with a similar feature elsewhere in the software. This inconsistency would reduce usability and increase confusion, so you point this out. Awesome, you prevented an issue… except you didn’t. Again you detected a signal that there might be an issue, you raised this with your colleagues, and further discussion and investigation is likely to follow. Perhaps this inconsistency, while not initially known, is now considered to be an important aspect of the project and will be retained. Again, this is early detection of an issue, not prevention.
You’re running a test session pairing with a developer. During your exploration you observe that, for a given set of input values, you receive different results each time you enter them. Incredible, you prevented an issue… except in this scenario that’s not a claim you’re likely to make. Why? It’s really no different from my first two examples. The inconsistent output is a signal that there is an issue. We have helped identify that further investigation is required so we can reconcile actual behaviour with desired behaviour.
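That pairing-session observation can be sketched as a small repeatability check. Everything here is hypothetical: `calculate_quote` is a stand-in for whatever the product computes, with a contrived hidden-state bug so the differing outputs are visible. The check surfaces the signal; it does not fix anything.

```python
def calculate_quote(amount, rate, _state={"calls": 0}):
    # Hypothetical stand-in with a deliberate hidden-state bug: the result
    # depends on how many times the function has been called, not just on
    # its inputs, so identical inputs can yield different outputs.
    _state["calls"] += 1
    return round(amount * rate, 2) + _state["calls"] % 2

def is_repeatable(fn, args, runs=5):
    """Run fn with the same args several times; report whether results agree."""
    results = [fn(*args) for _ in range(runs)]
    return len(set(results)) == 1, results

stable, observed = is_repeatable(calculate_quote, (100.0, 0.15))
print(stable)    # False: the differing outputs are the signal, not the fix
print(observed)
```

Note what the check does and doesn’t do: it detects that something is wrong and gives us evidence to hand to whoever makes the change. Removing the hidden state is the developer’s adjustment, which is exactly the detection-versus-prevention distinction being argued here.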
When I see claims of testers, or testing, preventing bugs, it seems to me that testing is being set up for failure by representing goals and outcomes it can never own. It is a confusion of what powers testers and testing possess. Imagine I’m a surgeon and you’re a doctor heading to the South Pole as part of a team, where it’s a requirement that your appendix be removed before departure. As the surgeon I could, in this context, assert quite positively that, by removing your appendix, I have prevented you from suffering an episode of appendicitis. Testing isn’t like that.
Testing is like this. You’re a passenger in a car, driving down a road that has a variety of speed limit signs. The car has a speedometer which you can see and you glance at it occasionally to check the car’s speed. Does the speedometer reading prevent the driver from driving over the speed limit (which is an issue)? It doesn’t. The speedometer provides you with a signal which you can either ignore or act upon. You might say to the driver “gee, the speed limits change a lot around here, we just moved from an 80 km/h zone into a 60 km/h zone”. The driver can choose to listen to you or ignore you. They might increase speed, decrease speed or stay at the same speed. Changing speed requires a direct input on the accelerator and it is the sole responsibility of the driver to make that adjustment.
As a tester you have a focus on the speedometer (and other conditions that are part of the context, such as the weather, the road conditions, etc.). You are providing feedback, perhaps even encouraging slowing the car to a more appropriate speed. You are an observer of what is happening, not the driver who has control. Your feedback can be acted upon, but you’re not the person making the adjustments.
As I noted at the opening of this post, I’m very unclear why people really want to make the claim that they, or testing, “prevent issues”. Not only is that claim beyond the remit of testers and testing, it is damaging to testing. It denies the value and usefulness of detection, something that good testers bring to the table with each test assignment and discussion. My advice is to use your detection skills, scrutinise, explore, question, propose ideas, challenge and advocate. When you’ve done these things you can actually demonstrate how you have influenced product quality by talking about all those issues you have brought to light. That feels a lot like being an advocate for better quality in an authentic way.
A big thank you to Lee Hawkins (@therockertester) for his endless patience and quality feedback.
7 thoughts on “Testing and Prevention – The Illusion”
Is there a ‘right’ or ‘wrong’ answer here, or, as I suspect, a bit of both?
Is ‘prevent’ being defined correctly?
Is the meaning of ‘prevent’ being applied correctly, and consistently?
Does the car analogy hold any truth?
Can ‘prevention’ ever be proved?
What do ‘managers’ know that testers don’t? Why?
A thoughtful article!
I think when people speak of testers “preventing” problems, especially in a shift left situation where testers point out issues with specifications or work programmes, we’re looking at diverting effort down the road not travelled. Testers are blocking off choices or pathways that lead to bugs in some alternate reality. It’s a little like the butterfly effect or the science fictional conceit of each of our decisions throwing off whole parallel universes where you turned left getting off the bus one morning instead of turning right. The effect of this test activity is ultimately unknowable – so as an advertising claim, it’s very attractive, because it can never be disproved!
I agree with you on some level, but not completely. I am one of the folks that say “testers can prevent defects” and I usually complete that sentence by saying “prevent defects from showing up in code”, although not always. If I am asking questions, and helping to uncover hidden assumptions, and I hear a developer …or someone say “I didn’t realize that”, we may have prevented a defect in the code. Or sometimes it takes the form of an “Ahhhhh…”. I try to call those instances out to show the value of having the same understanding.
One very specific instance of ‘preventing a defect in code’ was when I gave my tests to the developer before they started coding. He said – these tests won’t pass. We went to the PO and had a discussion. He would have coded it wrong. There would have been a defect in the code. And it would have caused extra rework. To me, that is an example of ‘preventing defects in code’.
“If I am asking questions, and helping to uncover hidden assumptions, and I hear a developer …or someone say “I didn’t realize that”, we may have prevented a defect in the code”. I’d suggest that what testing has done is flag that “there might be a problem here”. That signal is something we can use to alert those who do make changes (say, the developer) to take preventative action. Even if we change this scenario to one where you are both the developer and the tester: testing your code didn’t prevent, it alerted. You still have to make the changes in the code to prevent the problem. When you’re doing that, you’re a developer, not a tester.
Again, in your second example, I see this as you having initiated a conversation that clearly outlined what you believed system behaviour should be. The difference between your understanding and the developer’s signalled that there might be a problem to be fixed. Testing alerted but didn’t prevent. The change of code, by the developer, was the prevention.
I’m by no means telling testers that they can’t claim they “prevent defects”, though in my view it’s not an accurate claim about what testing, or a tester, can actually do. “Testers provide information that can help others prevent defects from showing up in code” is how I would reframe your statement. Although with a little more thinking time I’d probably smooth that statement a little by being more specific about “others”.