“Make sure that you’re there as a service to the product, and not as an obstacle or somebody who’s waiting to have stuff delivered to them.”
– Michael Bolton
Most members of the software testing community are familiar with Michael Bolton. Whether through his Twitter account, his DevelopSense blog, or his lengthy list of speaking engagements, Michael’s message around the importance of understanding what software testing is—and what it isn’t—has been shared in every medium imaginable over the last two decades.
We recently invited Michael to sit down for an episode of our Continuous Testing Live podcast, where Ingo Philipp and I took the opportunity to ask Michael for his thoughts on a number of questions we had, including:
- Why does he propose that “all testing is exploratory?”
- Who should testers consult with to learn more information about a release candidate?
- What sorts of external influences can help diversify a tester’s skill set?
- What does it mean to “successfully stumble” into new findings when exploring a product?
- What should a sense of confusion tell a tester, and how can they use confusion to their advantage?
Without further ado, we hope you enjoy this week’s episode of Continuous Testing Live.
Noel: One thing you brought up this week, Michael, was the set of things that testers often view as blockers to their ability to test. This was something that I wrote a lot about in my last job. We talked about having test environments ready and built correctly so you can “begin” your testing, and we always talked about how, when you don’t have those environments, testers are just sitting idly by. But you showed a whole list of other types of testing that testers can do even if the system keeps crashing, or they don’t have the environment they need, or any of the other things that testers have experienced and then said, “Well, I can’t test right now.” Could you share some of those other testing activities that testers can be doing instead of waiting around?
Michael: Yeah, sure. The problem starts, of course, with a kind of limited view of what testing “is.” A lot of the time, testing is seen as direct interaction with the product; banging on keys, looking at outputs and outcomes, so on and so forth. But testing is so much more than that. There are so many other activities that are involved in testing because testing is, as we say, evaluating a product by learning about it through exploration and experimentation, which includes a lot of stuff. Questioning, modeling, studying, negotiating for resources, developing tools, performing research, gathering information, there’s an enormous list of things that we could do.
At the same time, we could also pull on the thread labeled “product” and recognize that a product is anything that anybody has produced; by definition, that’s what a product is, something somebody has produced. So it might turn out that it’s Wednesday and those rotten developers were supposed to have something done on Wednesday—poor developers, they’re working on the same kind of premises that the testers are. They don’t know what they’re going to run into; they don’t know what they’re going to encounter as they try to develop something. So, it’s common for programmers to make discoveries, and stumble on obstacles, and to be late because stuff is never quite as simple as we want, so we have to be sympathetic to the developers.
But, while we’re waiting, there are other things that we could study. There are other things that we could experiment with. There are other tasks that we could perform. If you ask a tester, “So, are you just sitting there waiting for stuff to show up? You’ve got nothing else to do?” most testers will tell you pretty quickly that there’s lots of other stuff that they could do. So, we’re never exactly blocked on the whole project. We might be blocked on a particular thread. If there’s something I anticipate that I’m going to be asked to test in the sense that I was talking about, the sort of old-fashioned notion of testing as banging on keys, there’s still lots of stuff that I can do to prepare for the moment when that shows up.
Ingo: I mean, for me, the killer argument, and you stressed this, is that testing is “closing the gap between what we know and what we don’t know.” And you’re right, just banging on keys and having the product in front of you is not the only way to try to close the gap. So, a concrete example, I would say, is to go out and talk to the developers. Go out and try to talk to the product owners to learn about it.
Michael: Offer them help. Right? That’s a huge thing. A lot of the time when a programmer’s stuck on something, some research could be helpful, so offer your help with that. Make sure that you’re there as a service to the product and not as an obstacle or somebody who’s waiting to have stuff delivered to them.
Ingo: Yeah, sure.
And I think what testers often need, that’s my personal perspective, is also to try to understand the business perspective behind it. The reason why we build the product. Especially in an Agile environment, when it is set up properly, there is a product owner who should know why we’re building the product. What’s the ultimate purpose of doing that? I think that dramatically helps us improve the way we approach the product through testing.
Michael: Yeah. It also really helps, I think, to diversify the ways you gather that kind of information. For instance, as somebody who was working on memory management and multitasking software in the 1990s, it was fascinating to me that our best developers hung out online, on CompuServe. CompuServe is what we used to call the internet, by the way.
CompuServe was the place where we did a lot of online support, and our developers hung out there. It was really funny because I was at a trade show one time in the early 90s, and I was just talking to customers as a person who, at the time, was in tech support. They said, “Yeah, we see a lot of helpful stuff on tech support. Stafford Williamson…” who was the tech support person responsible for monitoring it, “…he’s good. Dan Spears seems to really know his stuff.” Dan Spears wrote the damn program, right?
Of course, the customer really didn’t know that, so that was a developer participating in that way. Testers can absolutely do that, and they can learn an enormous amount by doing it. Plus, they can find bugs in things like tech notes and support information. Testers can find bugs there too. So we may not feel like we have a lot of time on our hands, but sometimes I feel like we could learn stuff more quickly if we diversified our approaches to learning about it.
Ingo: You mentioned tech support? There’s also marketing out there, the whole body of marketing material; you can test that. You can go to presales, to sales, to see how they pitch the product at the end of the day. Maybe you can learn something from it. That’s my point.
Michael: I was very impressed with the participation of marketing people in this class this week. You guys have done great. You brought a lot of value and a lot of insight into the exercises and to the conversations and to the approaches we’re using to solve problems.
Noel: So, another piece of advice you gave this week that really hit home for me here at Tricentis is this message that I know I’ve preached about our own tool, and you have as well, Ingo. We always say that it frees up more time for exploratory testing, or as you say, “testing,” since all testing is exploratory. And I took away that you advised that that selling point is fine as long as you’re not just talking about doing testing, or exploratory testing, at the end of a release or development cycle, and that it’s very valuable to do that kind of testing at the beginning—or at any stage possible. I took away that you need to make sure you’re not just freeing up time for exploratory testing at the end of a release cycle. You need to make sure you’ve tested far earlier as well.
Michael: Yeah, I guess we could do some substitutions in what you just said and it would be entirely consistent with the point you’re making, which is, your tool frees up time for learning. Right?
Your tool, being a tool, being a medium, as McLuhan would say, extends, enhances, accelerates, intensifies, enables learning something about the product. Well, even thinking about using the tool does that, too.
We did not, as explicitly as we sometimes do, go into a deep dive about strategy. That was a thread that kind of ran like a baseline through the entire class this week. We didn’t sort of single that out this particular time. But, even thinking about how you’re going to approach something, how you’re going to use a tool, how the tool might change the way you think about it, about what you’re testing, that’s learning, too. And that can happen and does happen at any moment in the process of development.
So, I think to a certain degree, the way people think about testing is far too focused on the evaluation and not enough on the learning. They’re of a piece; they feed off each other. All the way through the project we’re learning stuff, and even if we’re just thinking about how we might do something, how we might use something, how we plan to use something, how we can foresee it, anticipate it—that’s an exploratory process, and that’s happening all the time. Explorers, when they get on a ship, are still exploring even when they’re having dinner.
Ingo: So, what freaks me out is the question itself. That people even ask the question, “Ah, should we be doing exploratory testing after we’ve created our test cases?” It happens all the time. I really like the way Cem Kaner puts it, how he sees testing. And please correct me if I’m wrong, but he once stated that “a test is nothing more than a question you ask of the program, and the purpose of running a test is to gain information.” So, the purpose of testing, for me, is to search for information. And I think you have to search for information before you’re able to put that knowledge, what you have learned, into a test case.
Regarding the tools, and I really love the separation you introduced into the testing community about ten years ago, the separation between testing and checking. What I see tools doing, and how they help us in our daily work, is managing those artifacts. Managing the checks. They don’t do the testing for you. They can help you and support you in doing that, but they don’t do it for you. It’s more mechanical work. So, a tool helps me manage all those artifacts, which we call test cases, requirements, how the mapping is established, and stuff like that. Therefore, I really like the separation you introduced.
Michael: Well, if I’m remembered for anything, I kind of hope it’s for identifying that distinction. In a way, I guess it’s kind of like distinguishing between forests and trees. Or, my current favorite is “biting and eating.” But, there’s a distinction between “toasting and cooking,” too. You, Noel, are a cook, and toasting is part of cooking. Toasting is something that can be handled by a machine. Toasting is something that can be done algorithmically. I put this thing in the toaster, go away, come back. But, deciding what kind of bread you’re going to put in the toaster, deciding whether you want it to be a rich dark brown, golden brown, or just barely… that’s all stuff that goes into the mind of the chef.
So, those kinds of distinctions, I think, are really important. Specifically, they help us understand what we’re talking about, and understand what machines can do powerfully and helpfully while humans remain in charge. Humans use tools. If you are an incompetent cook, the toaster is not going to help you a whole lot.
Another thing that comes from our reading of McLuhan, when I think about the ways tools are used, and also from Harry Collins, is the recognition that tools do, as I said earlier, extend, enhance, accelerate, intensify, enable. But they’re agnostic about our own capabilities. They do those things; they extend human capability in some way. But if we’re bad testers, tools can help us do bad testing even faster and even worse than we’ve done it before. If we’re glorious testers, then tools will help us do more glorious testing more quickly.
So, that’s a hugely important thing to me, and there were certain problems that cropped up when I started talking about it. I didn’t get it perfect at the very beginning. It took quite a few years for us to get it right, but by that time, the damage was already done. People who designed checks, executed checks, focused on doing the checking, were terribly offended by this. When I said “testing versus checking,” they took from that a suggestion that the two were different. And it’s not entirely clear to me the degree to which I comprehended this from the start, but in the long run, I didn’t mean it like testing versus checking, Real Madrid versus Manchester United. They’re not at odds with each other; they’re inside each other, like forests versus trees. But a lot of people took the “versus” part as meaning “in opposition,” which is certainly not what we mean now, and we probably didn’t mean it then. But it was easy for everybody to get confused, because ideas like this take time to bake, as it were.
There’s another element to this, which is that a lot of people who are involved in testing, I think, are fascinated by programming. Some of them take great and justifiable pride in their programming skills, and we’re not dismissing that, far from it. Another comparison that we use is “compiling versus programming.” What the compiler does is absolutely automatable; it’s what a compiler is. It’s something that takes human source code and translates it into machine code. The compiler writers are among the most sophisticated and skilled programmers there are—some of them. And that’s true of people who program checks, and who are creating checking frameworks, and who are creating tools that support checking. We’re not dishonoring those people at all. What we’re saying is the exact equivalent of saying “there’s more to programming than just compiling.” Well, that seems to me entirely uncontroversial; nobody who knew the least little bit about programming, or compiling for that matter, would disagree with that.
So, we’re not dishonoring the role of checking by any means, we’re just saying that there’s way more to testing than that, so we want to keep the skills of the tester central to the discussion and not what the machinery is doing.
Ingo: I think the problem, the reason why your language, per se, is not being adopted, the distinction between checking and testing (although I fully agree, and I fully buy into it), is that testers don’t live in that world. They don’t speak that language in their daily business. Because, you know, when you’re in a project, people talk about manual testing, people talk about automated testing; they don’t talk about human checking and machine checking in testing. I think most testers know there is a distinction out there, but they simply don’t speak that language, and so it gets forgotten, I would say.
Michael: I think that is probably true. You know, it absolutely helps to have that stuff around, and they kind of know what they mean. But if you look at the history of medicine… it used to be that they would say, “He’s possessed by a demon.” And people would invoke magic and gods and so forth when they felt like they didn’t have control over something. It’s a really interesting lesson from anthropology. Well, as time went by, we realized that there were natural explanations, not supernatural, but natural explanations, for disease.
What was interesting was, for a long time, people knew that it was safer to drink water that had a little bit of wine in it. They didn’t know why, they just made this sort of empirical observation that people who poured a little bit of wine into their water didn’t get sick the way other people did. And it’s interesting because we think of science as leading discovery, it’s the leading edge of discovery, but so much of the time science actually trails what people have observed empirically.
And I think testing and checking is a little bit like that; I mean, the distinction between testing and checking is a little bit like that. It took us, what, fifty years, roughly, of writing about testing, of thinking about testing, of talking about testing, to notice the significance of something that people were saying much earlier. Dan McCracken, in a 1957 book on computer programming, refers to “desk checking.” Now, I wasn’t aware of that when I cast my distinction out there in 2009. Go back a little further and you can find Alan Turing talking about ways programs must be checked.
So, it’s really funny to recognize that a lot of this stuff, all of agile development, seems to me to be a rediscovery of things people knew pretty well in the fifties and the early sixties. Jerry Weinberg will tell you right away that what they were doing back then looked a lot like agile development. In the early nineties, in our company, we were moving very quickly, we were doing very rapid experimentation, and our programmers were working in close collaboration. The ones who were really good were, for all intents and purposes, doing pair programming all the time, and it was this really intensive form of review. Programmers were sitting right next to each other, looking at each other’s screens, and critiquing the work as it came up. That was fully ten years before the Agile Manifesto came out.
So much of what we’re doing is a matter of rediscovering things, and then, having rediscovered them, trying to name them in ways that allow us to make sharper distinctions that we can then use to recognize important things—but that takes years. And it takes exposure to ideas from outside and a lot of testers really do have their heads down for perfectly reasonable reasons. They’re busy, they’ve got lots of stuff to do.
That said, I do wish more people would pop their heads up a little bit more and think of two kinds of wider worlds. One is that there’s a huge worldwide testing community, and it always blows my mind how few testers know about it. The other dimension is that there are so many things useful to a testing mindset that lie outside the field of computer programming and software engineering and software development altogether, and that’s where many of the best ideas come from: outside a discipline.
Noel: You’re talking about discovering things empirically, and there was an example of that in our class this week. I was working alongside a tester, and we had an assignment that you’d given us where we had to test a calendar entry system, with a pretty specific couple of tasks that we were supposed to test, with some assumption of “These things will work fine; we just need to confirm that.”
He was testing something that was in the test cases he was “supposed” to be performing, and he accidentally hit a wrong key. He meant to hit an “8” but hit a “4.” This test was supposed to pass, but since he accidentally hit a 4, and then “enter,” it brought up an error that was completely unexpected. We weren’t even testing for that, because we had already been told “this works fine, just test for these kinds of things,” and it was by that accident that we discovered something that we hadn’t intended to discover. We didn’t think we were going to test those numbers; there wasn’t even a plan of “Let’s try different numbers.” It was an accident. It wasn’t even something he was “smart” enough to try to test for; it was purely accidental. You brought up “the difference between intentional probing and successful stumbling,” and I saw this as a perfect example of successful stumbling.
Michael: That’s exactly what I was going to point you to, and I’ve got a blog post about that, but there’s a wonderful passage in a book by two social science researchers, and the book comes with the catchy title Reliability and Validity in Qualitative Research. Now, it’s weird, because I don’t see that book at the airport. You’d think it would be flying off the shelves, right?
But let me just bring that up because it’s so terrific. It’s such a wonderful thing, which is also about, by the way, measurement and metrics.
This is from Reliability and Validity in Qualitative Research, by Kirk and Miller. Here’s what they say. They say, “Most of the technology of confirmatory, non-qualitative research in both the social and natural sciences…” and, I would add, testing, by the way, “…is aimed at preventing discovery. When confirmatory research…” testing, “…goes smoothly, everything comes out precisely as expected. Received theory is supported by one more example of its usefulness and requires no change. As in everyday social life, confirmation is exactly the absence of insight. In science, as in life, dramatic new discoveries must almost by definition be accidental or serendipitous. Indeed, they occur only in consequence of some mistake.”
That’s exactly what I think you’re talking about Noel, and that’s been my experience, too. That’s why I sort of coined this little phrase “successful stumbling.” That’s what we’re doing an awful lot of in testing and I think it’s important to remain open to opportunities to stumble successfully.
Noel: Something else I wanted to ask about: I have fourteen pages of notes that I took this week, but I think my favorite line of yours was around “confusion.”
You get a new piece of software, a new product, a new feature, and you don’t know where to start. You didn’t get great requirements or a mission, and, as a tester, there may be confusion, and that confusion can leave you feeling uncomfortable. You said that confusion can definitely feel uncomfortable, and that that is not only normal, but that to be a good tester you need the ability to be okay with confusion, and then understand that it can be a really powerful tool for you. And that probably requires some convincing for some, because not everyone is comfortable being uncomfortable, or comfortable being confused. It creates panic, or fear, or worry, or that sort of thing, rather than “Great! I’m confused! What an awesome opportunity for me to do something really great here.”
Michael: Yeah, James and John Bach are both figures in a wonderful story. John was being trained by James to be a tester, and John had not been a tester before, he’d been a journalist. James hired him—at least in part—because James believed that John’s writing skill and his ability to construct a story were really valuable, but John hadn’t been involved in, or immersed in, technical stuff before. The way I remember John telling this story is, he came into James’ office one day after a couple of weeks of training and he just slumped down in a comfy chair and he said, “I don’t know.” And James said, “What?” John says, “I don’t know.” James says, “I don’t know ‘what’, John? What are you talking about?” John says, “I don’t know if this tester thing is going to work out.” And James says, “Wait, what are you talking about? You’re doing great, I think. What’s the problem?” And John says, “Well you say that, but you gave me this piece of software to test and I’m just so confused.” And James says, “When you’re confused, that’s a sign there’s something confusing going on.”
The confusion may be about you, for sure, but you may have a confusing product on your hands. The confusion is a signal to you. Your emotions and your feelings are signals that give you hints about the world. Now, from my own perspective, I observe that feelings don’t come with return addresses on the envelope. They don’t tell you where they’re coming from; we have to work that out. We have to work out, “Why am I confused? What am I confused about? How might I resolve that confusion?”
To have the confidence and the self-assurance that confusion will go away, and the recognition that confusion is a hint that there’s something to be learned, something to be investigated, something to be resolved—that’s really powerful. I play music for fun. Playing traditional Irish music is my principal hobby. You hear this in just about all music, but there’s an alternation between suspense and resolution, suspense and resolution. Pieces of music are most exciting when you think they’re going somewhere, and you’re trying to anticipate where they’re going. And then the composer of the piece ends up resolving it in some way that gets back down to the root, the core of the tune.
So that sort of tension makes music exciting. It makes learning exciting. It makes software testing exciting, and it makes software engineering exciting because we’ve got a problem to solve and we’re not quite sure how we’re going to solve it. But, let’s give it a go. The fact that we’re momentarily unhappy with the state of things is actually how humans make progress. Completely happy people aren’t going to do anything.
Michael: They’re just going to sit there and be happy. Well, that’s great for them, but every innovation that we come up with, every solution to a problem, starts with there being a problem. It starts with things not being quite what we want, and that set of ideas comes, for me at least, from a book I read in English class in grade thirteen called The Educated Imagination, by Northrop Frye, a Canadian literary critic. It’s a set of lectures that he did for a radio audience in the 1960s on why we have literature. It’s the same damn thing. It’s the idea that “We’re in one place and we can see another place.” That’s what imagination and metaphor do for us: they take us from one place to another place.
So, it might be interesting to notice by the way how, in this conversation, I’ve cited anthropology, history, music, literature, and medicine, and probably a bunch of other stuff that I’ve forgotten. That for me, that kind of interdisciplinary approach, the recognition that there are consistent patterns in all these fields that we can take advantage of in wherever we’re up to, including testing—that makes things very exciting for me. I wish that for other people, too. Go out and engage with the world and bring back whatever you find to our little craft.