As I launch into the second post of the series, a hat tip to Angela Bishop, who raised the topic of testers and intuition. Angela initially raised the following on LinkedIn (LI). As I re-read Angela’s response, it seems to me that rather than just highlighting intuition she is creating a small list of what we might term “desirable attributes for testers”.
Without it, being a Professional Tester is fruitless.
Following your gut when a red flag goes up, and having the gumption to pursue it.
Not accepting the status quo and pushing back when you get the brush-off.
Standing firm and holding the line of quality assurance, even when everyone on the project is against it.
These qualities are what separate the Formula 1 Testers from the Tyre Kickers.
Angela and I had a bit of a discussion in the LI thread which you can find here if you wish to read it in full.
Intuition is an interesting topic. I strongly suspect that people who are good at their job have some level of intuition. But what is intuition? I thought it worth seeking out some definitions.
Intuition is a form of knowledge that appears in consciousness without obvious deliberation. It is not magical but rather a faculty in which hunches are generated by the unconscious mind rapidly sifting through past experience and cumulative knowledge.
Intuition is the result of your brain putting together everything you have learned from past experiences in your life to help you form a quick conclusion.
“Without it, being a Professional Tester is fruitless”
That’s a strong statement – “fruitless”. You put in effort to plough the field, plant the trees, water them, fertilise and keep pests at bay, and then harvest time arrives and there is no fruit on the trees. The term goes back to Middle English and literally means “without fruit”. So, is a tester without intuition spending effort for no reward? Does it make a difference if we remove “professional” from the statement?
I’m struggling to agree that without intuition a tester can’t provide value. “Tester” can represent a role that somebody steps into for the purpose of discovering some information about the software (which can include confirming a behaviour). A little while back I wrote a blog exploring the “not everybody can test” argument, and I think it has relevance here. If a tester is running a series of checks, comparing outcomes against predesignated (acceptable) outcomes, and only that, then there is an argument that by executing the work they have been tasked with they are providing the requested service and the required value. That would mean, in this context, the tester’s work is not fruitless. However, I struggle to reconcile the above as being the limit of what a professional tester would do within a given test mission.
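To make the idea of a “check” concrete, here is a minimal sketch in Python. The function name and expected value are hypothetical, purely for illustration: the point is that a check simply compares an observed outcome against a predesignated acceptable outcome and nothing more – it answers only the question it was scripted to ask.

```python
# A minimal sketch of a scripted "check" (hypothetical names).
# A check applies a predesignated decision rule to an observed outcome;
# it involves no judgement, curiosity or exploration.

def check_login_redirect(observed_url: str) -> bool:
    """Return True if the observed outcome matches the
    predesignated acceptable outcome."""
    expected_url = "/dashboard"  # the predesignated acceptable outcome
    return observed_url == expected_url

print(check_login_redirect("/dashboard"))        # matches: check passes
print(check_login_redirect("/error?code=500"))   # differs: check fails
```

Anything beyond that comparison – noticing that the error page looks odd and digging into why – is where checking ends and testing begins.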
In the context of being a professional tester I expect that, even when required to check behaviours, observation, analysis and curiosity will result in some level of exploration when an unusual or unexpected outcome is observed or suspected (and remember, if you follow the Rapid Software Testing (RST) approach, checking is a tactic of testing – think of it as a sub-activity of testing). Observing an unusual outcome doesn’t necessarily display intuition, but following up on a suspected difference does. These are the times where, as a tester, you observe a behaviour and decide that something is possibly “not quite right”. These are the “gut feeling” moments where sometimes we are wrong but often we are right and there is a problem lurking in the shadows. For me this sometimes results in bugs where, when the developer asks “how did you find that?”, I tend to respond with “not exactly sure, I picked up a scent and followed the trail”.
There is a concept I learnt about in RST – focus and defocus. In focus mode you are “in the forest”: you are up close, examining small details. When you defocus you are above the forest, looking down. The minute details have gone but you can see the entire forest, all of the small details joined up into the “big picture”. For my own purposes I also think of defocus as those times when I completely switch off from some specific analysis or problem decomposition. I mention this concept because, in my experience, these moments are important to intuition. The time and space to allow your mind to create connections is a key ingredient. I used to carry a notebook with me, as some of my best “aha” moments around bugs or test ideas came in the ten-minute walk to the train after work. I have those moments while at work as well, but I’m fairly comfortable that having regular defocus time is an input to those moments.
Does a tester need to have intuition? I’m inclined to say “no”. However, if you are advertising yourself as a professional tester then I’m inclined to say “yes”. I lean towards “yes” because a great tester learns from previous experiences, both good and bad. They take note of how parts of the system work. They acquire knowledge through interacting with the software, developers, analysts, designers and whomever else is available to help build knowledge and context. The more you do in this “knowledge accumulation space”, the more likely it is, I think, that testers will be assisted by intuition. It reminds me of Obliquity. It is (very likely) pointless setting out to develop intuition directly. Instead you develop it by attending to all the “small things” that will enable some level of intuition. Writing this has also made me think of something I say when explaining my testing to non-testers and getting the conversation beyond finding bugs. I have, more than once, stated “I don’t go hunting for bugs. I go hunting for information about the system using an approach that makes finding important bugs inevitable”. For me, that’s an important distinction and maybe it contributes to intuition.
“Following your gut when a red flag goes up, and having the gumption to pursue it”
I think this is a quality of a great tester more than intuition. I call this “being brave”, “being honest”, “being principled” or simply “caring a lot”. It can be challenging when there is a push to get code out the door to customers. There can be enormous pressure to tell people what they want to hear. People can be reluctant to put their hand up and say “hey, we have a (nasty looking) problem here”. I have been in software development long enough to be able to relate stories about times serious issues were ignored and not reported (pretended not to have been discovered) rather than the tester becoming a “target”. In my mind this is quite a simple process. If I find a problem, I report it to the people I need to keep informed, and I back the report with as much evidence as I can. Weirdly, I’ve worked in places where this is not welcome behaviour. Actually, it’s not so weird because, in these environments, it forces Managers to make uncomfortable decisions about releasing software. I’ve even been questioned about why I didn’t find a particular blocker a few days earlier. If you ever want to cut this type of Q&A short, just point out that the breaking change was made only an hour before. In more recent times, I’ve never encountered an issue with disclosing “bad news” or finding it close to a planned release, as the company had much better information systems internally and externally, coupled with a “no blame” culture.
“Not accepting the status quo and pushing back when you get the brush-off”
I’m interpreting this as not accepting responses such as “no user would do that”, “that’s not a real bug”, “users aren’t supposed to do that” and similar (by the way, I’ve had all of these phrases thrown at me, and more). I was once described by a Manager, in a performance review, as a “terrier with a rug in its mouth” when it came to chasing down problems. Apparently I had a reputation for advocating for bugs or problems until they were either fixed or I was happy with the explanation for why the problem would not be fixed. I think this is an important aspect of being a good tester. I suspect that, almost 20 years after that observation from my Manager, I go about this with a slightly different approach and with more people skills, but the underlying principle of advocating for good outcomes still holds strong in my approach. However, there comes a time where you have provided a robust argument for change but you are unable to persuade those making the decisions. In these circumstances you have to learn to either drop the advocacy (“I put up a good fight but I understand the reasons for not making changes”) or find other ways, and present new evidence, to make a more compelling case for the change. Which of these two options you follow is up to you.
“Standing firm and holding the line of quality assurance, even when everyone on the project is against it”
I see this as a possible continuation of the point above. I’m not entirely sure what “holding the line of quality assurance” really means. I don’t believe testers are in the business of quality assurance. I do believe the work of testers can contribute to improved quality, but that’s not quality assurance. The scenario I see here is one where somebody within the team (could be a tester) points out that there is a problem with some aspect of quality but the others in the project disagree. In my experience that’s not an entirely uncommon scenario, and I’ve been the “lone voice” before. “Standing firm” has its limits, especially when you are the only person in the group who perceives a problem. You can argue your point, present as much evidence as you can, and try to influence others with reasoning, BUT remember you are presenting an opinion, not a “rock solid, it can only be this way, written in stone” directive. If you had a rock solid directive there would be no debate. We can be wrong, we can fail to process other significant points, and we can be very closely welded to our own thoughts, our biases clouding our capacity for a change of focus. My take in these situations is that if I can’t influence the “others” to change, then maybe they are right, at this time. Maybe I’m missing some information I need to make a better case, and maybe I’ve just got a bad take on it and the problem is not as big as I thought. I don’t have to agree with everybody else, but I do need to respect their opinions and go with the decision. This is part of being a team, and accepting a different opinion to your own is not a flaw in your testing or in you as a tester.
“These qualities are what separate the Formula 1 Testers from the Tyre Kickers”
A “tyre kicker” is, essentially, a time waster: somebody who will engage the time of a car salesman with no intent of ever purchasing the vehicle. “Formula 1 tester” is a new term to me, but I guess, in this context, it refers to professional testers who care about their craft and how they go about providing value to those who engage them in test missions and other activities. If you don’t want to be seen as a “tyre kicker” then that’s in your hands to control. You need to build your credibility by showing people you can test well. You need to find important risks and collaborate in their resolution. You need to show you care by getting deep into new projects, learning and bringing new perspectives (which can include perspectives based on competitors’ products). You need to spend time explaining what you do: target your colleagues, Managers, customers, anybody who is interested in how you go about executing good testing. Don’t be one of those testers who likes to represent testing as “special magic”. Be open, be clear and collaborate, as it pays big dividends. You need to do the things that not only show you are “the real deal” but just might provide you with moments of intuition, moments that take you to discoveries that make other development teams want you involved in their projects.