Testing isn’t about writing test cases. Testers aim to make sense of the product status in ways that help clients make decisions about it. This requires you to systematically gather evidence about the product. Companies like Tricentis provide tools to help you do that. And I’m helping them look for better ways to do it—to get closer to the essence of testing and make better companions for talented testers.
I became a tester (at Apple, in 1987) because I liked complaining and I liked discovering things. It was more social than coding was, with more variety in the schedule. At least, that’s always been my story. But as the years have passed, another factor has come into focus that I now think mattered more than the others: I felt needed.
I was hired as a test manager, not having had any previous experience as a manager or tester. This proves that a high school dropout can achieve anything if he never lets go of his dreams, and also that our industry is a shambles that will let anyone have a go. In my first days on the job, I asked about testing methodology and culture. The answers were shrugs or vague hyperbolic slogans (“Testers break stuff!”, “We stand for quality!”). Testers at Apple were offered no training. Nor were there any company-specific guides or manuals. Since no one else had figured out how to be a testing craftsman, I decided to take on that job myself.
Apple had a well-stocked corporate library, with several books about testing. They also had an academic journal article delivery service. I soon amassed a couple hundred papers related to testing. But my reading led to frustration. Computer science people wrote about testing as if it were a math puzzle. To cite an example of one sweeping approach:
“Automatic test case generation can be invoked once a set of evaluation criteria has been defined. Depending on the strength (and cost) selection criteria specified by the user, the generator produces a set of test cases that meet the evaluation criteria within the indicated cost parameters.” (DeMillo 1991a)
Oh? How does the generator work?
“This process is carried out by modeling the evaluation criteria as systems of algebraic constraints and applying increasingly sophisticated algebraic system solvers until either the criteria are satisfied or the cost parameters are exceeded.” (DeMillo 1991b)
Uh huh. You can go read the paper if you want. I did. To me, it does not look helpful for testing on real projects — nor have I heard about anyone in the industry who uses it.
Articles by the people actually working in the industry and creating products for customers — who, with few exceptions, seemed not to read Computer Science papers — were not much more helpful. They generally wrote as if testing were a matter of observing certain forms and rituals taught at Miss Minchin’s Boarding School for Testers. Follow the IEEE 829 standard, children! Annex C, Section 5.4.3.(2) says “Generate system test design.” But how? Well, read on to section 5.4.3.(2).b, which says “Verify that the test design complies with this standard in purpose, format, and content.” That’s all there is to it! (Like, literally. There is no further content on that topic in the standard.)
The forms and standards didn’t work for us. My team in the Development Systems Group did a study of fourteen projects in our department. In our report from 1989 we wrote, “In 11 of 14 test suites for which new tests are being developed or old tests are being updated, a test plan is not being used for guidance.” Instead, testers were improvising the testing each day, in consultation with the team.
At the time, I assumed that Apple (and later, all of Silicon Valley) was operating in a special context where the forms and rituals written in the textbooks did not apply. After decades as a traveling consultant, I learned that the prevailing advice in the textbooks of the eighties and nineties doesn’t apply anywhere. It was basically Frederick Taylor fan fiction.
There were several key moments in the “how I found testing” story, but today I want to tell you about the great “Aha!” of 1995. It came when I was creating my first testing class. I had made slides for the preliminary generalities of testing. I had a slide about unit testing vs. integration vs. system testing, for instance. Every testing class at the time had such slides.
Then I came to the part of the class where I had to tell students how to actually test. What does one do to design tests? What is that process, precisely?
I had nothing to say!
I had been in the testing world for eight years by that point, so this was a bit horrifying. As I analyzed my difficulty, I realized the root of the problem: I had been a test manager, not a tester, for most of my career. Test managers do a lot of things, and we work with the concepts and forms of testing. We do not, however, spend a lot of time looking for bugs ourselves.
In that moment, I was presented with a choice. A tempting thought emerged from the darker coils of my mind: copy folklore about test design from some testing textbook. That was (and is) a popular practice. I’m so glad I didn’t go that way. Instead, in what became a defining moment of my career, I went into the test lab, as a tester, and paid careful attention to my own methods as they emerged from my spontaneous practice. I was adopting what I now know to be a social science approach (ethnomethodology, more or less).
So I tested and tested and tested. I watched myself test. I wrote down as much as I could about each action and decision and mistake that I made. I stopped myself repeatedly to ask, “How did I know to do THIS instead of THAT?” I also watched other testers.
Over the course of a couple of months, I came to discover, for instance, that all deliberate testing seems to be based on mental models. All test techniques are organized around covering the product according to some model. The process of testing is mostly the process of exploring the product, forming theories about it, and gathering evidence to corroborate or refute those theories. Testing is almost exactly the process of being a scientist.
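To make that loop concrete, here is a purely illustrative sketch in Python. Everything in it is invented for the example (the “product under test,” `buggy_sort`, and all names are assumptions, not anything from the original story): state a theory about the product, explore it with varied inputs, and let a counterexample refute the theory.

```python
import random

def buggy_sort(xs):
    # Hypothetical product under test: a "sort" that silently drops duplicates.
    return sorted(set(xs))

def theory_holds(xs):
    # Theory (a model of correct behavior): sorting preserves length
    # and produces the same result as a trusted reference sort.
    out = buggy_sort(xs)
    return len(out) == len(xs) and out == sorted(xs)

def gather_evidence(trials=200, seed=1):
    # Explore: generate varied inputs, looking for evidence that
    # refutes the theory rather than merely confirming it.
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(0, 5) for _ in range(rng.randint(0, 8))]
        if not theory_holds(xs):
            return xs  # counterexample found: the theory is refuted
    return None  # theory corroborated (never proven) for these trials

counterexample = gather_evidence()
```

This is the same scientific posture that property-based testing tools (Hypothesis, QuickCheck, and the like) automate: the tool generates the evidence, but a human still has to invent the theory worth testing.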
I was able to return to my class notes and create an original approach to teaching testing. I also threw away that material about “integration testing” vs. “system testing.” I replaced it with a simple dictum: if it exists, test it. Do you have units? Test them. Sub-systems? Test them. You don’t need to learn integration testing. You need to learn to test and then you can apply that to an integrated system as needed.
I can now tell you why testing is so hard to pin down: it’s a tacit form of knowledge. Testing skill does not live in words and pictures, but rather in our embodied humanity. Like walking and talking, we do not learn the basics of testing through explicit instruction. We are, in a sense, born testers (Schulz 2015, Schulz 2007), yet we can develop sophistication as skilled critical thinkers through training and deliberate practice (Chow 2015, Lehtinen 2017). And, of course, that applies to testing.
I did find books that helped me understand testing, but they were not “testing books.” They were books about organizational learning, systems thinking, critical thinking, and the nature of scientific thought and practice.
Your job, as a tester, is not to “write test cases.” Your job is to make sense of the status of the product in ways that serve your clients’ need to make decisions (such as release decisions) about it. This requires you to systematically gather evidence about the product. Companies like Tricentis provide tools to help you do that.
My job, at Tricentis, is to look for better ways to do it. I want us to get closer to the essence of testing; to enable testers to gather richer forms of evidence that might not be capturable in standardized ways; to create tools that make better companions for talented testers.
I feel needed.
James Bach is a consulting software tester and Technical Fellow at Tricentis. He is also the founder and CEO of Satisfice, Inc., a software testing consultancy. James has been in the tech field as a developer, tester, test manager, and consultant for 38 years. He is a founder of the Context-Driven school of testing, a charter member of the Association for Software Testing, and the creator of the Rapid Software Testing methodology and Session-Based Test Management. He is also the author of two books: Lessons Learned in Software Testing and Secrets of a Buccaneer-Scholar: How Self-Education and the Pursuit of Passion Can Lead to a Lifetime of Success. For more about his work and online courses see https://www.satisfice.com/.