I was recently asked if I could talk to some of my work colleagues about bugs. More specifically, I was asked to explain what a good bug report looks like and what information it might contain. The audience here was the Halaxy Service team. I admire people who work in customer support; it can be a tough gig, and there is a fair amount of pressure when you face difficult customers or situations. I’ve been in this role (with another company) and it can be demanding. For all that, the Halaxy Service people are something special. They have a rapport with Halaxy customers that is beyond anything else I’ve seen or heard.
I had a 10 to 15 minute slot for my talk. That’s not a lot of time to really dig in, and the people I was talking to were not specialist testers. My goal was to deliver some key messages in easy-to-understand terms that could be taken away and used. I decided that going over the difference between fault, error and failure wouldn’t form part of the discussion. Similarly, heuristics as a concept were not explained, although I did briefly talk about consistency as a consideration. What we wanted to get to was a place where Service team people could raise bugs with sufficient detail to establish context and allow further exploration and solutioning by others in development (this includes everyone in the development team, not just developers).
My framework became three key considerations when logging a bug based on discussion with customers. Three simple questions to consider when logging bug details: What happened? How did it happen? What should have happened? Let’s consider each of these in turn.
What happened – in short, what went wrong? Something happened that our customer didn’t desire. This is a good time to capture whether the customer has attempted the function that “misfired” before, or whether this was a first attempt. At this point we know that something perceived as undesirable has impacted a customer, and we have a picture of the “what”, but that’s not enough. Like a car crash, we could simply say “the car hit the tree”. The problem is that this gives us too little information about how it happened, and about how to prevent future occurrences.
How did it happen – in this part of the process we really want to get an appreciation of what was going on when outcomes went in an undesired direction. Browser information is useful (especially if it’s an older browser). We could ask how the customer was interacting with the software, the specific data involved, or anything they can recall about what they had done up to this point. There’s a lot of information that could be relevant here depending on the context of the problem. The “how” is the information that is going to help us see the “what”.
What should have happened – it’s helpful to know not only what the problem is but why our customer believes it is a problem. This has several purposes. Firstly, it gives us an insight into what our customer desires. This could be as simple as “I’d like it to run like it did when I did the same thing two days ago”. It could also be a discussion along the lines of “I want X and I’m getting Y”. In both examples, whether the customer’s feedback is based on an unintended change to an outcome or a perceived one (our customer is mistaken, or supporting customer documentation is ambiguous), we have an insight into how our customer is viewing one part of our system. This is important for investigation and solution purposes, as well as for helping to manage customer expectations should we need to later explain why the difference between “desired” and “actual” represents how the functionality should execute at a business level.
On reflection, after my short presentation, it occurred to me that I could have included a fourth question – what’s the impact? This is useful information to help us determine how quickly we want to deal with an issue and how we deal with it. I know that when something with serious impact comes through our Service team, it gets communicated quickly and the development team (again, this includes the testers) swarms around the problem and potential solutions. However, it’s useful to capture the business impact as part of the bug detail regardless of whether the impact is large, small or somewhere in between.
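For those who like to see structure made concrete, the four questions above could even be captured as a lightweight template. This is a purely illustrative sketch of my own; the field names and sample values are invented for the example and are not an actual Halaxy form or tool:

```python
from dataclasses import dataclass


@dataclass
class BugReport:
    """A minimal bug report built around the four questions.

    Field names and example values are illustrative only.
    """
    what_happened: str        # what went wrong, from the customer's point of view
    how_it_happened: str      # browser, data, and any steps the customer recalls
    expected_behaviour: str   # what the customer believes should have happened
    impact: str               # business impact: large, small, or in between

    def summary(self) -> str:
        """Render the report as the four questions, one per line."""
        return (
            f"What happened: {self.what_happened}\n"
            f"How it happened: {self.how_it_happened}\n"
            f"What should have happened: {self.expected_behaviour}\n"
            f"Impact: {self.impact}"
        )


# Example usage with made-up details
report = BugReport(
    what_happened="Invoice total displayed as $0.00",
    how_it_happened="Chrome 120; applied a discount code before adding items",
    expected_behaviour="Total should reflect the items plus the discount",
    impact="Blocks billing for one practice; manual workaround exists",
)
print(report.summary())
```

The point isn’t the code itself but the shape: if every report answers all four questions, the development team rarely has to go back and ask for the basics.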
So, that’s my story. No big technical terms, no diving into a glossary and insisting on specific terms but, hopefully, keeping it relevant and useful through simplicity. It hasn’t been long enough since the event to see if my efforts have helped people or how I could have been more effective. However, I thought this small story, about keeping communication simple, was worth sharing. This is my simple story.
Thank you to Janet Gregory and Lee Hawkins for their review of this blog and feedback.