In my most recent blog, Value through simplicity, I mentioned how it is possible to take small steps to influence quality and/or reduce complexity. In an “ideal world” you would be able to propose trying an idea, run the experiment and feel safe the entire journey. Not all employers are an “ideal world”, so sometimes you need to do things “on the quiet”. I have occasionally thought about blogging on some of the “cheats” I used across my testing journey. So this is a blog about some of those. It’s possibly also a story about impatience, single-mindedness or tunnel vision (or all of those and more).
I mentioned in Value through simplicity that in my very early days as a tester I realised the value of communicating early with both the business analyst and developers involved on a project (communicating with our customers would have been nice but, for most people in this company, they were off limits). This was, as far as I can remember, the first time I moved away from established test team processes and started questioning a bunch of “(best) practices”.
I’d been in testing for maybe two years and was getting to the point where the mandate to create detailed test requirements (with traceability to the specification) was driving me nuts. This is the only time in my testing journey that I almost walked away from testing. I didn’t become a tester to spend so much time on documentation. Testing is a big jigsaw puzzle, and you don’t solve jigsaw puzzles by writing about possibilities. You solve them by interacting, by being hands-on.
On top of writing detailed test cases, they then had to be maintained. When the specification changed (and it always did, multiple times) the test cases had to be updated along with the linkages to the requirements. I can still remember the damned formula used to split the testing effort for a project. The total test time estimate was based on the number of test requirements gathered and their perceived complexity. That total was then split 40% test case writing, 20% data setup and 40% execution of test cases. Years later I’m still astonished. The same amount of time writing as executing. That’s badly broken.
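For the curious, here is a rough sketch of how that style of estimation plays out. Only the 40/20/40 split comes from the process described above; the hours per requirement by complexity are made-up numbers, purely for illustration.

# Rough sketch of that style of estimation formula. Only the 40/20/40
# split is from the process above; the per-requirement hours are invented.

# Hypothetical effort per test requirement (hours), keyed by perceived complexity.
HOURS_BY_COMPLEXITY = {"low": 1.0, "medium": 2.0, "high": 4.0}

# The split applied to the total: writing 40%, data setup 20%, execution 40%.
SPLIT = {"test case writing": 0.40, "data setup": 0.20, "execution": 0.40}

def estimate(requirement_complexities):
    """Return total hours and a per-phase breakdown for a list of complexity labels."""
    total = sum(HOURS_BY_COMPLEXITY[c] for c in requirement_complexities)
    return total, {phase: round(total * share, 1) for phase, share in SPLIT.items()}

total, breakdown = estimate(["low"] * 10 + ["medium"] * 5 + ["high"] * 3)
print(total, breakdown)
# 32.0 {'test case writing': 12.8, 'data setup': 6.4, 'execution': 12.8}

However you weight the requirements, the shape is the same: as much time budgeted for writing test cases as for actually running them.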
My response was to cheat. I did the initial mapping to requirements, but I wrote my requirements at a higher level. That meant less writing (weirdly, looking back, they were tracking more toward test ideas than the test requirements we were supposed to be writing). I still wrote test cases, but with far less detail. I knew the extra detail didn’t help. I’d actually experimented with writing test cases that met the “anybody can execute them” standard. After writing the first one I was asked, three weeks later, to return to the project and execute the testing. I was back to square one. Stuff that had made a lot of sense, in context, in the midst of the project was now lost. Most of that additional detail related to knowledge that was, for want of a better phrase, “in the moment”, and three weeks is a long break from the “moment”.
When it came to changes (the inevitable specification changes) I no longer updated the test cases; I simply dumped the maintenance. I made notes about the changes and where they would have an impact. When it came time to execute the tests and record outcomes, I made brief notes on what was actually executed, why it differed from the written instruction set, and the actual result in light of the changes. I was a lot happier with a reduced overhead on what I thought was fairly pointless documentation, and I spent more time playing with the software, learning and discovering. That nobody ever questioned my different approach says a lot about the value those documents provided to stakeholders (hint – there was no value).
It wasn’t too much further down the road that I decided that, once my test cases were written, I would make them available for the developers to review. I was a bit nervous about this because, back then, bug counts influenced whether you were viewed as a good tester (sadly I hear that in some companies this is still a thing). I was hoping the developers would look at what I had written and we might find variations in what we thought the software was supposed to do. I liked working with many of the developers as we would pick through code together (I learnt a hell of a lot about COBOL, and about reading it, through what we would now call “pairing”).
The above wasn’t exactly a runaway success, though it had a few wins. None of the other testers were interested in giving it a try (it was still a bit “developer v tester” for many). One day a developer came over and informed me that he had run all my test cases on the project he had coded, so the code would come to me clean. I’d never seriously considered this would happen. There was a moment (maybe a little longer) of “what the hell do I do now?”. Rerunning tests that had already been executed seemed a bit pointless, but… I might have to pretend, go through the motions.
And that is how I started to learn about exploring over scripting. It dawned on me that, not all that long ago, through a chat with a colleague, I had realised that a lot of my more important defect discoveries were found “off script”. They came from ideas I generated while testing, things that I saw or learnt while interacting with the software. Ideas that tend not to spring out from documentation alone. So my plan was to execute a few of the key tests, develop themes around them and explore those ideas (and, of course, backfill the test cases with information so it all looked “legit”). As it turned out, the first few key tests I decided to run demonstrated a myriad of horrible problems and placed testing on hold for a week or so (which prompted a polite query about which project test cases the developer had executed). When I got back to retesting the project I selectively chose a few additional tests from my test cases to execute (beyond those I’d initially chosen). Once I completed those checks I then explored related themes and test ideas. Things became progressively more exploratory for me.
One of the benefits of being at the same company for (quite) a while is that you can get away with things on “trust” (although in the environment I was in that might not be the right word; perhaps “tolerated” is closer). I had a number of practices that were considered “not for the rest of the testing team” (go figure, because my ability to test well was never called into question – so maybe I had things that could be shared). For all the things I could get away with, the company still had a policy that required test cases (even during its moments of “agility”). So I created a process where I would capture test ideas. No formal mapping back to specification sections, no “test requirements” in the way our other testers were writing them. The test cases would evolve as I executed testing. I would add, delete and modify my test ideas as I learned more while testing. To keep everything “cool in the hen house” my test notes were captured in the form of completed test cases. I wasn’t ecstatic about this, but I could live with it and it was at least in the neighbourhood of how I wanted to test. Much of this went unnoticed until… a new test manager arrived.
Which could have been a disaster, but I knew Mr Test Manager from a previous workplace where we had socialised and discussed the meaning of testing (in case you are wondering, neither of us thought it was “42”). Interestingly, we had vastly different views of testing. Mr Test Manager was all about numbers and predictability, but he valued that I thought differently to him and would critically analyse thoughts and ideas rather than just being a yes/no/no comment person. One day he approached me and said “I’m testing my test predictor spreadsheet, I need to know how many test cases you have for this project”. I looked at him and said “no idea”. I got a quizzical look (or at least that’s my interpretation of it) and “you’re testing, you must know”. I followed up with “if you want to count test cases then come ask me when the project is put to bed”. A meeting and explanation, complete with rationale, followed. Fortunately all was cool, but I was told to keep my process to myself (I guess to avoid an uprising, or anarchy) and I was never again asked for a test case count.
This has been an interesting blog to write. I can see pieces of my testing philosophy being stitched together over a number of years. I’m lucky in that everything I do now is “above board” and I can openly discuss and experiment with ideas. That’s a privilege I hope never to take for granted. I suspect, though, that having to do some of this by “stealth” made me think a lot about risk and reward. Probably useful lessons.