Automation – what bugs me

When did we start to believe that automation in testing is intelligent? When did we start to believe that we can automate away human thinking within software testing? When did testers, many of whom rally to a cry of “we are misunderstood and not appreciated”, decide it would be a good idea to promote the idea that automation in testing has superpowers? Recent interactions on LinkedIn have had me pondering those questions.

You might notice that I use the term “automation in testing”. I use this term because it has resonated with me since I first heard it from Richard Bradshaw (Twitter handle @FriendlyTester). Automation in testing refers to automation that supports testing. It is a mindset and an approach that has a meaningful and useful focus (you can read more here – https://automationintesting.com).

Let’s start with the claim that “automation in testing finds bugs”. I have no idea why it is such a steep hill to climb to suggest this statement is untrue. Here is automation in testing in a nutshell: a human codes algorithmic checks because (I hope) a decision has been made that there is value in knowing if “desired state A” changes to a different state (“state B”). There appears to be a reasonably widely held belief that if the automated check fails, because we no longer have “desired state A” but instead “state B”, then the automation has found a bug. This thinking shifts much of the focus away from tester abilities and gives automation power it has no right to claim.

That the desired state does not equal the current actual state is a difference, a variance. It’s not a bug; it is a deviation from an expected outcome, and a human is being invited to investigate the disagreement. As a tester, if you choose to tell people that the automated checks found a bug, then you might also be removing from the story the fact that a tester, a human, was required before anything meaningful and valuable could come of it.
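To make that concrete, here is a minimal sketch of what an algorithmic check actually does. It is an illustration only: `fetch_current_state` and the state values are hypothetical placeholders, not anyone’s real framework or API.

```python
# Minimal sketch of an algorithmic check. fetch_current_state() and the
# state values are hypothetical placeholders for whatever your system
# exposes (an API response, a database row, a UI element, ...).

EXPECTED_STATE = "desired state A"  # a human decided this state is worth watching


def fetch_current_state() -> str:
    # Hypothetical stand-in for the query that returns the system's actual state.
    return "state B"


def check_state() -> None:
    actual = fetch_current_state()
    # All the check can do is report that expected and actual differ.
    # It cannot say whether the difference is a bug; a human has to
    # investigate and make that call.
    assert actual == EXPECTED_STATE, (
        f"Difference detected: expected {EXPECTED_STATE!r}, got {actual!r}"
    )


if __name__ == "__main__":
    check_state()
```

Run it and all you get is an assertion failure naming the two states. Everything after that, from triage to the “bug or not a bug” decision, sits with a person.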

The automated check tells us there is a difference. Do we simply accept this and say “it’s a bug, needs to be fixed”? I don’t believe a tester worth their place in a software development company would even consider this an option. The first step is to dig in and discover where the difference occurred and the circumstances around it. Even something as simple as discovering the last time the check was executed can help us narrow down the possible changes that led us here. We will likely need to dig into the data and the new outcome, and probably ask a bunch of questions to discover what the changed outcome means. Your automation code, the computers you are running the automated checks on, the tooling that executes the automation – none of these can do the investigation you are now running. You are looking for clues and evidence, using heuristics to help you make decisions.

Sometimes the investigation and collaboration lead us to conclude that we do indeed have an unwanted outcome. Who makes the decision that the difference is a bug? Likely it will be a collaborative effort. Tester, developer, business analyst and subject matter expert are just a few of those who might collaborate to find a solution. At no point has the automation made a decision that there is a bug. It is incapable of doing any more than pointing out that it expected “state A” but got “state B”. Equally, after investigating the evidence you might discover that “state A” is wrong. It might be wrong because there have been code changes that legitimately change “state A”, so we need to update our expected results. It might even be that a change in code leads us to discover that “state A” has never been correct, or hasn’t been correct for some time (I’ve seen this more than once). Please note carefully that the automation cannot decide between “bug” and “not a bug”; a human (or humans) does this.

What else might happen as an outcome of the above? It’s not unusual for our investigations to uncover scenarios that are not covered by automated checks but that would be valuable to cover. We might find other checks that are out of date or poorly constructed (that is, they will never give us any valuable information). We might find other scenarios that need changing because of this one difference. We might even spot signs of unwanted duplication. It’s pretty amazing what comes to light when you get into these investigations. There are a myriad of possibilities.

The one thing I really want to emphasise in this blog is that the computer, the automation, however you wish to refer to it, did not find a bug. It found a difference, and that difference surfaced only because a human wrote code saying an outcome should be checked. Without the intervention of a human to analyse and investigate, this difference would have no meaning, no valuable outcomes. So if you want to elevate the standing of testers in the software community, it might be a good idea to take credit for your skills and contributions and not unthinkingly hand that credit over to a non-thinking entity.

Automated checks have value because of the information they can provide to humans. Consider that for your next conversation around automation.


7 thoughts on “Automation – what bugs me”

  1. Interesting article. I agree that automated checks/tests are there so that when a variance does occur, it is flagged to the team to investigate.

    At my previous organisation we used a tool called ReportPortal.IO. This uses machine learning to automatically classify failed automated tests, labelling them as “product bug”, “automation bug”, “system bug”, etc. It will label the failed tests as “to investigate” to begin with. Over time, as you start labelling these failed tests, it will begin to “learn” the different types of failures and automatically label them for you with the type of bug. We found it extremely useful.


  2. Interesting article, but sometimes the deviation is called a bug (agreed, humans still have to review it to term it a bug), and teams are more interested in finding those deviations, whether they are called bugs or deviations and whether they came from automation or manual testing. But I agree automation cannot classify a deviation as a bug or not; it can only state the deviation, and humans still have to make the decision about whether it is a bug.


  3. I don’t think the evaluation of automation is a concern. Failures in automation are always evaluated by the engineer/tester. The issue is the creation of the automated scripts. The reason ‘Automation in Testing’ (Richard and Mark) is appealing is that it doesn’t use automation to create scripts which confirm requirements. It’s refreshing for someone to outright state the purpose of their automated scripts. I haven’t found a single person/blog/article/book which explains ‘why automation’. When questioned, they always handwave: ‘doesn’t everyone use critical thinking?’, ‘of course, testing is important’, ‘it’s just words’.

    In the case of Richard’s AIT, it isn’t just a posture. He and Mark come across as having some good thinking on the subject. It’s surprising that they don’t have more takers, especially among test automation thought leaders.

