Note: This is another blog that has been sitting in my drafts folder for well over 12 months. I honestly don’t know why, maybe I just forgot it was there. I’m publishing this in the “as I found it” state with the exception of a couple of grammatical changes. I can still remember the people and interactions that prompted me to write this blog. I hope you find something useful in my writing.
It is a source of wonder to me that humans can attach a whole variety of meanings to words or concepts. In some ways it is a beautiful attribute of being human. At times it is quite a journey when you swap questions and answers and then realise that what you thought was a common reference isn’t. I like these moments: occasions when you know that potential misunderstandings have been avoided through establishing deep, rather than shallow, understanding. I don’t have statistics to back me on this, but I’d wager that the majority of our software disappointments fall into the shallow understanding “bucket”: conversations that we thought were precise and clear but where we were actually talking past one another. I’ve heard plenty of this, and I’m sure I’m not alone.

Occasionally I get into trouble for focusing on words (people can get a tad impatient with me). People who work with me for a while get to understand why (I’m always happy to explain). Often I’m not the only one querying the word, phrase or statement; I just happen to be the one who will initiate chasing clarity sooner rather than later. Experience is a great teacher.
The common reference I have in mind for this blog is Product Backlog refinement (PBR) or Grooming.
Product Backlog refinement is the act of adding detail, estimates, and order to items in the Product Backlog. This is an ongoing process in which the Product Owner and the Development Team collaborate on the details of Product Backlog items.
— The 2017 Scrum Guide (2017-Scrum-Guide-US.pdf)
I’m somewhat surprised by the number of chats I have around PBR that, when it comes to the role of the tester and the value of PBR sessions, include the notion that testers should walk out of these sessions knowing exactly what they are going to test, and that any gaps or problems should be identified in this session. I struggle with this idea for a number of reasons.
- It doesn’t align with the PBR description in the Scrum guide
- It doesn’t align with any PBR, or grooming, description I have read
- It doesn’t align with the idea that we learn as we go
- It doesn’t align with the idea that user stories are a starting point, a placeholder for conversations
- It places responsibility on dedicated testers to find “gaps” and assigns a quasi “Quality Police” tag to them in what is a team responsibility
- It is about knowing everything upfront. That’s a waterfall mindset and antithetical to an agile-based approach.
- It’s an unrealistic and unfair expectation
Personally, I sometimes go into PBR sessions and encounter an area about which I have little knowledge. I contribute where I can; often that’s looking for ambiguities, clarifying terms or challenging assumptions (you don’t need deep understanding of an area to pick up on assumptions). I’ll also use this as a heads-up on things I need to learn more about, investigations to be had (although I prefer to think of it as playtime, and it is often a good way of finding existing bugs).
Some good questions to ask in this discussion:
- How will you test it?
- Why is it important?
- Who will use it?
- How will we know when it’s done?
- What assumptions are we making?
- What can we leave out?
- How could we break this into smaller pieces?
I borrowed the above from a Growing Agile article on grooming. I think they represent excellent questions in a grooming session. One thing I have found across teams I have worked with is that testing can be a “forgotten cousin” when it comes to getting stories ready for actual development. It’s not that the other people in the team can’t contribute in the testing space, or don’t want to; it’s simply not habit to do so. It’s a habit I like to cultivate in teams. It’s quite interesting how quickly team members jump on board. In my previous blog I mentioned Mike Cohn’s conditions of satisfaction. I think they fit very nicely as a tactic within good PBR discussions.
My hope is that if you are reading this, and if you are a dedicated tester within a scrum team, you are not identifying with the demand to be “completely across” a story during PBR. If you do identify, then it would be a good retrospective item to raise. It would be good for the team to understand the load this expectation places on you. It would be even better for the team to acknowledge that the attitude is counterproductive and adjust external communications (i.e. to stakeholders outside the immediate team) accordingly. If you really want to kill off the “know it all in grooming” expectation, work with your team so that every grooming session has everyone thinking about and contributing to testing thoughts. Actively discuss testing and capture thoughts and ideas. Show that testing is being considered, and considered deeply. It doesn’t show that you have “covered it all” (and nor should it), but it does show thought and commitment to each story. The reality is, you can defend that approach (if required) and, as a team, reduce unrealistic expectations. As a team you’ll also be far more aware of stories when testing is an embedded consideration in the PBR meetings.
As my final thought for this blog: in my opinion, and experience, there is a sure-fire sign that the team is getting across joint ownership of testing. When you are sitting in a grooming session and others start asking questions about testing or testability before you, the dedicated tester, do, you are on the right track. Punch the air (even if it is only in your mind) and congratulate the team for their help. A better journey has started.
Cheers
Paul
What we’ve done with our testing may be unique, but might be useful for others.
The current Scrum program is for a public agency, subject to State and Federal regulations around health insurance. We have interfaces to CMS and the IRS as well as State databases. It’s also customer-facing – to the public. So “testing” is a critical success factor. You probably remember the debacle with the AC site? The core development team is the same firm.
So our approach is to separate the actual testing processes from the Scrum development processes while maintaining an overall “agile” paradigm. Working code from a Sprint is placed in a Kanban queue. This code has gone through Unit Testing by Dev in accordance with the exit criteria defined in the Story in Gherkin.
Then the Testing staff starts the Integration Testing process: taking that working code and assuring, first of all, that it doesn’t break existing code that is being used in production. Then, of course, verifying that the code does what it is supposed to do in the “System”, as well as implementing the Story and supporting the Feature, and then up to the Capability.
Any fallout from that process goes back to the PB as a defect.
The next tier of testing is assuring all the external processes – accesses to external systems at the State and Federal level – work as specified. Any fallout there goes to the PB as a Defect Story.
This way ALL work is captured as Stories, including defect Stories, so we have direct visibility to ALL work on the project.
Now to the point. PBR considers ALL work, development work and defect repair work. Testers and equivalent “developers” (since they’re writing test automation code) sit in the PBR meeting along with Dev and POs.
Thanks for the post, as it’s confirmation that Testing is “work” and, at the same time, uniquely different from development.
Those questions are good questions to ask whether you are part of a Scrum team, a Kanban team, or any team practicing agile and getting your stories ready to take into the iteration. For myself, I call these story readiness workshops. The idea (for me) is to try to eliminate dependencies and to get a shared understanding of the story, which includes the scope of the story (high-level acceptance tests). I completely agree that we cannot know everything during this workshop. We likely won’t even know everything by the time we finish coding… Learning is so much a part of what we do.
A good post. Thank you.
Another great blog! Totally agree.
I prefer to take the view that a more appropriate question to ask in a refinement session is “Can we test this?”.
If you take it back to grassroots, no discipline is recognised within a Scrum team. We really should be asking whether we, as a team, believe a requirement can be tested (but seeing as the “how” we will do something isn’t decided until Sprint Planning, it’s impossible to plan all testing before then).
It isn’t fair or realistic to place the responsibility onto an individual in the team. As a testing specialist you bring some great techniques and experience to the table but likewise, Business Analysts and Developers bring additional knowledge & perspective, as does the Product Owner.
Accountability *should* lie with the team so if fingers are being pointed then something has gone wrong!