There’s a balancing act. There are stakeholders that we serve. We serve the organization. We serve the project stakeholders and their decisions that they need to make. So if we’re not doing that, we might be a little off-kilter in where we’re angling.

– Dawn Haynes

Even with all of the various “flavors” of agile out there, plenty of testing organizations still aren’t actually doing agile in a traditional sense. But as we learned in a recent conversation with software testing expert Dawn Haynes, non-agile testers can still be very agile in their work, and in the ways they interact with customers and other stakeholders.

Dawn is our guest on this week’s episode of Continuous Testing Live. We invite you to listen to this insightful discussion, or to read the full transcript of our chat below.

Be the first to hear each episode of Continuous Testing Live by subscribing to the show on iTunes, Google Play, or SoundCloud!

Noel:  So, your talk this morning was called “Being More Agile Without Doing Agile.” One thing I always like to do to set a base for everything, like defining certain words or understanding titles better, is to ask speakers what inspired the talk. What did you either see out there, or not see out there, that told you this could be helpful information for people in your session?

Dawn Haynes, CEO & Testing Yogini at PerfTestPlus, Inc.

Dawn:  I think that’s a really good question, because agile has been the flavor of the month for quite a long time. Initially, when the agile, I’d say, “marketing machine” got rolling, it just made me roll my eyes. It’s like, “Oh! New lipstick on the pig,” for what some organizations were already doing, including organizations that I worked at. I didn’t really understand what the ripple would be.

And so, it’s been out there for over 10 years now. People have been struggling with it. And I teach a variety of classes, and I go to conferences and I speak, and I get the question over and over and over again: “How do we test in agile?” I remember the first time I heard the question, I was just kind of confused. I’m like, “You do whatever you need to do,” right? What are they building? When are they building it? That’s what you test, right?

It was unclear to me what the ambiguity was. And I think as things have transitioned, we’ve gotten a little more structure, and there’s a lot more help for teams in times of transition. And we understand the space a little better. But even people who understand the ceremonies and understand the outcomes of agile are still struggling with letting go of what they envision as a tester’s “role.” And so, out of that type of question—and very persistent question—on the part of testers and test managers alike, I thought it was important to address the issue of this “agile mindset” because we’ve been talking about it for a long time but people aren’t shifting.

So what I wanted to draw out was, “What has people blocked?” What are the anchors that are keeping them in their ways of thinking and their habits of how they approach their work? And so, going through this process has made me think pretty deeply about it. And that’s how I put the presentation together.

Noel:  That’s cool. I speak with a number of people at conferences like these as well, and you find people who aren’t “doing” agile in an organizational way. But they are doing things that make them agile as individuals, or as testers—without any kind of agile, top-down initiative that they’ve been given.

One of the lines that you shared this morning, I was taking notes so fast that I didn’t get a chance to fully grasp it, but it was talking about “building tests that are more flexible as the pace of development accelerates.” With increased agile adoption and DevOps and all these things, all we hear about is accelerated release cycles and shorter windows of time for testing. So what makes tests flexible, easier to maintain, and more usable in those smaller windows?

“Here’s a true confession that will freak a lot of people out. I’ve never written a structured, step-by-step test case in my entire life. I’ve never needed it.”

Dawn:  So for me, in my career, I think back to what I’ve done in various projects, and here’s a true confession that will freak a lot of people out. I’ve never written a structured, step-by-step test case in my entire life. I’ve never needed it. No one ever showed me one. No one ever said you had to do testing this way. I’ve done exploratory testing based on my experience with the product, but also with users and usage, because I drifted into QA—a formal QA role—from tech support.

So I felt like I had a real bead on what customers were trying to do and what our products were for. And I didn’t think I needed a lot of guidance about that. But I still needed to structure my own work, even though I was “a giant test team of one.” So I built out spreadsheets. And in the spreadsheets, I would list out the features, kind of in priority order, and then I created my own test design technique, “Dawn’s Rule of Three.” For everything I wanted to evaluate, I wanted at least one positive test, at least one negative test, and at least one crazy test. As long as I covered those bases, I felt like I had shaken out what was relevant in the feature or the function, and, if those tests passed, I had some level of knowledge and confidence about it.

So for me, the “negative test” is the contrary test: putting in invalid input and seeing if I get an error message or something like that. I want to make sure it can handle things. And a “crazy test” is more like an “out-of-bounds stress test”—pushing something to the limits. Those things won’t happen every day, but I believe they will happen in production, and they tell me something about the robustness, the reliability, the stability of this thing, which gives me another way to evaluate it. I couldn’t necessarily say it was good or bad. I just say I had knowledge about it. Because on that particular project, we had no requirements, no project manager, no status meetings. It was truly run as XP and sales-driven development. So we would ship when we thought it was good enough for the customer who wanted it. That’s kind of how it went.
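(For readers who want to see the shape of this, here’s a minimal pytest sketch of Dawn’s Rule of Three. The `withdraw` function, its signature, and its error behavior are invented for illustration; they aren’t from the conversation or any real product.)

```python
# A minimal sketch of "Dawn's Rule of Three": one positive, one negative,
# and one "crazy" test per thing under evaluation. The withdraw() function
# below is a hypothetical stand-in so the tests are runnable.
import pytest


def withdraw(balance: float, amount: float) -> float:
    """Toy implementation, assumed behavior: reject non-positive or
    over-balance amounts, otherwise return the new balance."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal amount")
    return balance - amount


def test_positive_valid_withdrawal():
    # Positive test: a normal, expected use of the feature.
    assert withdraw(100.0, 40.0) == 60.0


def test_negative_invalid_amount_is_rejected():
    # Negative test: invalid input should produce an error, not silence.
    with pytest.raises(ValueError):
        withdraw(100.0, -5.0)


def test_crazy_out_of_bounds_amount():
    # "Crazy" test: an absurd, out-of-bounds request that should still
    # fail safely rather than corrupt the balance.
    with pytest.raises(ValueError):
        withdraw(100.0, 10**12)
```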

And out of that, I’ve kind of transitioned my little shorthand for tests into mind maps. So now, I would build a set of test conditions or a test inventory using a mind map instead of writing step-by-step test cases. Let’s say we’re testing a withdrawal of cash. What would be important to test regarding the withdraw-cash functionality? The inputs, the withdrawal amount that you’re requesting, but probably the most important thing is the outputs. What’s the money coming out? What about the receipt? What about the posting to the account on the bank system, right? What about limits? Daily limits, transaction limits, insufficient funds, right?

So if I start making this list, I get this roadmap of things to evaluate and I can engage my stakeholders and say, “All right, is this list what we need to evaluate? Yes or no? Is there anything missing?” I put performance up there because I’m a performance aficionado and some team member could shoot me down and say, “Nope, we’re not evaluating performance this time. It takes time, it takes money, it takes effort, and we’re not committing to that right now.” Excellent. It’s off my plate. Now I don’t need to worry about it. But if I didn’t do that, if I buried that stuff in test cases and then asked people to evaluate them, I wouldn’t get that feedback.

“Let’s record or capture the last test we run, and if we need to put it in step-by-step fashion in a test tool or a document, then let’s do that as part of our exit criteria. And I think those things can be accomplished and still enable our agility throughout the sprint.”

Then, the last thing I want to do, once we’re sure the list is vetted and we’re comfortable and committed to what’s there, is prioritize it, so I can figure out what intensity of testing makes sense. That’s how I’d roll in just about any and every project I could, and I think it’s very appropriate for agile. It’s lightweight and flexible. I can make changes really quickly. And look, say I’m in a regulated space and need test case documents for compliance purposes. Let’s do exploratory testing right up until the very end. Let’s record or capture the last test we run, and if we need to put it in step-by-step fashion in a test tool or a document, then let’s do that as part of our exit criteria. And I think those things can be accomplished and still enable our agility throughout the sprint.
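(As an aside for readers: one lightweight way to capture this kind of mind-map inventory is as plain data the team can vet, prune, and prioritize together. The areas, conditions, and priority numbers below are illustrative assumptions, not Dawn’s actual inventory.)

```python
# A mind-map-style test inventory for "withdraw cash" kept as plain data,
# so stakeholders can review it, strike whole branches, and prioritize.
# All areas, conditions, and priorities here are made up for illustration.
withdraw_cash_inventory = {
    "outputs": ["cash dispensed", "receipt printed", "posting to the account"],
    "inputs": ["requested withdrawal amount", "card and PIN validation"],
    "limits": ["daily limit", "per-transaction limit", "insufficient funds"],
    "performance": ["response time under load"],  # easy to strike if the team declines
}

# Priorities assigned once the list is vetted; testing intensity follows from these.
priority = {"outputs": 1, "limits": 2, "inputs": 3, "performance": 4}

for area in sorted(withdraw_cash_inventory, key=priority.get):
    conditions = ", ".join(withdraw_cash_inventory[area])
    print(f"P{priority[area]} {area}: {conditions}")
```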

Noel:  The mind map reminds me, we had Michael Bolton come out and teach us his Rapid Software Testing class, and the first task we got was to create a mind map for the testing of a wine glass. We had a set amount of time to do it, and I remember thinking, “It’s not going to take us 30 minutes to write down all the tests for a wine glass. It holds wine or it doesn’t. Does it fall over? How easily does it break?” That kind of thing. And when we finished, we had come up with so many more tests than I really thought we would have at the end. Our group actually had more than any of the others in the room. But as every group went by, everyone had something the other groups had totally forgotten about needing to include.

And then, Michael showed his mind map, and the zooming out that he had to do to even have it all fit on a movie screen was enormous. And obviously, he should be better at this than we were. But it was a really great way, like you just said, to have all those things visible, and then to be able to prioritize them, or cross out entire branches if they’re not something we’re testing at this time. “We’re not making the glasses out of wood this time around, so we don’t need to do all those tests on wood.” Or, “We’re not shipping more than six at a time, so we don’t need to test a box that holds 50 glasses,” that kind of thing. But it was really interesting to see it in mind map format, as opposed to a list of test cases that you then spend time just figuring out how to read.

Dawn:  Makes sense.

Noel:  And then to have to try to figure out what they’re actually going to accomplish.

So in your session this morning, you also stressed the need for testers to focus on really understanding what is needed by the customer. I know a lot of testers, developers, marketers, whoever, are being told that what you need to focus on are the business goals—which, in the tech world, often amounts to simply selling more software. But I think those are kind of one and the same. You’ll sell more software by focusing on the customer. By making sure you’re actually making what they want, that you’ve tested it, and that it does what they think it’s supposed to. So, for testers, it’s really a shared goal, where I feel like in the past people viewed it as “you’re either focused on the customer or you’re focused on sales.” But it seems to me, again, like these can be one and the same.

Dawn:  That’s a really important challenge for us to deal with. I know on my first project where I had an official title of Quality Assurance Engineer (I’d done lots of testing before that), I believed my commitment was to the customer. And then, we hired a project manager. Now, he was a shiny new MBA just out of school, never worked on a project before. He bounces into the test lab one day, says hi and introduces himself. And I look at him slyly, like, “What do you want?” Because anybody that was in my lab was looking longingly at my Unix boxes, because they wanted some computing power. And so, I was pretty protective of that stuff.

He said, “I want to know how it’s going and when you might be done.” And, shock, fear, panic, everything just came into my sphere, and I was like, “Dude, I’ve got a gazillion configurations to test. I computed it a few years ago: conservatively, over 144,000 configurations to test.” And I was a test team of one, and we were getting a build a day. We internationalized. We translated our product user interface into French, German, Italian, and Spanish. And we did OCR into 17 languages. Now, I didn’t have to test the 17 languages. That was tested before it came to me. But I needed to test all the different language supports, scanners, and windowing systems like OPEN LOOK and Motif, on different Unix platforms. And that ends up being a lot of stuff.

So I just looked at him and said, “Never.” And then I asked him to go away, because he was making me feel uncomfortable. I couldn’t answer his question, dammit. And I was really frustrated about that. So the next day he comes in and asks the same question. I’m like, “Dude, did you forget the conversation we had yesterday? I don’t have a better answer. It still sucks in here. And by the way, I’ll be here until the sun explodes or you pry this thing out of my cold, dead hands. So I’ll be doing testing. We’ll be finding bugs. We’ll be fixing things.” But I didn’t have a clear vision of “done.” That was missing for me. Because I thought the job was “quality” and I thought the job was “excellence.” We were trying to hit that imaginary bar. And if we did, we’d never ship software. But back then, I didn’t have that vision of tying those two things together.

So the third day, he came into the lab, and I just pointed and said, “You, out. And you’re banned. You can never come back.” And so, we didn’t find a way to work well together. He still talks to me. I don’t know why. So look, neither one of us could figure it out in that moment. We weren’t supporting each other. We weren’t serving each other. But here’s the deal. We don’t test for testing’s sake. We test for the project’s sake. So if we’re not aware of the project goals, we could be running the wrong tests. We could be acting like a testing boat anchor, holding the project back when there’s no reason to hold it back.

And in retrospect, I was that boat anchor on that project. Now, if I’d just gotten my head out of my butt long enough to think about “What are we really trying to achieve here? Who is our customer? What do they want? And what are we trying to achieve?” I already knew the answer. I came from tech support. So think about it. Who the heck on planet Earth uses OCR software on Unix? That’s not really a commercial product. I mean, we sold it as a commercial product, but no one bought it. I mean, very, very few customers bought it. But our biggest customer was the US government. The NSA, the DoD, and all the other three-letter agencies that shall not be named. And I supported those people, so I knew exactly what they were doing. They worked in lockdown facilities underground, no windows, no hard lines, no phones.

So when they had a problem with the software, they had to go home. We didn’t even have cell phones back then. They had to go home and call us for tech support. And if we needed to send them an updated version of software, we needed to send it to someone’s house. So I knew what it was like to support them. And I also knew what they were doing. They were technically spying. They’re OCRing lots of documents in other languages because I remember asking one time. We were troubleshooting a problem and I said, “Can you send me a document,” and they’re like, “Of course not.” And I’m like, “What do you mean? How else am I going to troubleshoot this problem?” And they’re like, “Oh, no. It’s classified.” Really? We can’t get anywhere here. And in my entire time there, five years I worked at that organization, I got exactly one document from them that was about 98% redacted. Big, black marks through everything. But it was helpful because I saw the letter “A” and the number “1” and at least I knew what font we were dealing with.

So, if I’d just thought about that customer, and their needs, and their constraints, and our revenue goals…for example, we were just waiting to realize revenue from them. If I had thought about that for one minute, the first time the new project manager dude walked into the room and asked me, “So when’s it going to be done?” Hands up in the air: “Woo-hoo! Take it. It’s good enough. I don’t need to keep digging for bugs. It’s bloody good enough.”

And I feel a bit awkward about having learned that lesson so much later than the moment it was all happening. And it was because we were both so dysfunctional in the space, and there was nothing to help us. But that’s a big clue-by-four, right? I could have said right then, “It’s ready to go.” So to me, that’s the biggest awareness that I got. There’s a balancing act. There are stakeholders that we serve. We serve the organization. We serve the project stakeholders and their decisions that they need to make. So if we’re not doing that, we might be a little off-kilter in where we’re angling. So, that’s how I got there. And I try to get other people there.

Noel:  That’s right. I had a project I worked on recently. Not a testing project, just a campaign that I was running, and I knew that I was doing it inefficiently at times. But there wasn’t time to look for how to do it better. I would say to myself, “I’m just going to keep doing it this way. It is getting done, if only bit by bit. I’m grossly behind on everything else, and if I stop to start researching where to find this solution, I don’t have any idea where to even start looking. Then, nothing gets done.” So I just willingly did it poorly. Maybe not poorly, just slowly. It came out really well. But next time, I’ve got to start looking for the solution now, or else I’m going to be trying to finish all this work I didn’t do at the time. It’s going to come back around because it’s an annual campaign. And I don’t want to end up saying, “Crap. I’ve got to do it the slow, inefficient way again.”

You had a slide that urged testers to ask, “Is the software ready?” Not, “Are the tests done?” Our CMO constantly preaches this same message. His definition of “ready” is, “Does the release candidate have an acceptable level of risk?” I went and saw a session on risk-based testing yesterday, and I think that risk-based testing is one of those things where I feel like possibly all testing is going to become “risk-based.” It will have to. When you start thinking about how quickly you can be disrupted, put out of business, whatever you want to call it, don’t you have to continuously evaluate the testing you’re doing to make sure you’re helping decrease risk? It seems like you really better be thinking about it that way.

Dawn:  Right. This is one of the things that I cover in training classes. And even when it’s not a major topic for the course material, it comes up for discussion, because we’ve been plagued by requirements-based testing in the industry for a long time. And when I started working in the tools space, especially around test management tools, it was very obvious very quickly that people were trying to take advantage of traceability. Right? I have tests. I trace them to requirements. And then, I have orphans. I have tests that I want and need to run that are valuable and headless, because they don’t tie to requirements for the software. They tie to other concerns: very global concerns, project-level concerns, intangibles. Just a whole bunch of things that we wouldn’t give up.

So I became aware very quickly that the reports that test management tool was generating around requirements-based coverage were not going to tell the whole story. So back then, I formulated a pitch around understanding test plan coverage, which is bigger and broader than requirements coverage. And if we do that, then we don’t have these holes and gaps. And the risk is that you’ve got a project manager or team that’s going into the test management tool and only looking at the requirements coverage report. So I thought it was important to have that level of education then. And here, almost 20 years later, we’re still in this space, struggling to get away from requirements that are too myopic.
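(A toy sketch of the “orphan tests” idea Dawn describes: tests that trace to requirements versus valuable tests that don’t. All test names and requirement IDs below are made up for illustration.)

```python
# Orphan tests: valuable tests with no requirement to trace to. A pure
# requirements-coverage report hides them; a test *plan* coverage view
# keeps them visible. IDs and names here are hypothetical.
tests_to_requirements = {
    "test_withdraw_happy_path": "REQ-101",
    "test_daily_limit_enforced": "REQ-102",
    "test_recovers_after_power_loss": None,  # valuable, but ties to no requirement
    "test_known_rounding_bug_regression": None,
}

orphans = [name for name, req in tests_to_requirements.items() if req is None]

print("Tests invisible to requirements-based coverage:", orphans)
```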

“So if I can get people to talk about studying their bugs, and have a class of tests that is directed at making sure the most important bugs that we don’t want to have aren’t present in the places that we care about, that is the essence of risk-based testing.”

And this is for a couple of reasons. You talk about risk in one category, and one area of that risk is, “What bugs exist in this system?” And I can go hunt for bugs. I can use software attacks. I can use error guessing. I can use exploratory testing to specifically look for certain kinds of bugs that we’ve had before, but our requirements aren’t going to lead us to those tests. So if I’m thinking about the risk of a certain bug existing, and it’s my goal to avoid having that bug escape into production, then that avoidance strategy is a risk mitigation strategy. So if I can get people to talk about studying their bugs, and have a class of tests directed at making sure the most important bugs that we don’t want to have aren’t present in the places we care about, that is the essence of risk-based testing.
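(Here’s a minimal sketch of what bug-class-directed, risk-based test planning could look like in practice. The bug classes and target areas are invented examples, not drawn from any real bug database.)

```python
# Study past bugs, then expand each recurring failure mode into targeted
# test charters aimed at the places we care about. All bug classes and
# areas below are illustrative assumptions.
historical_bug_classes = {
    "off-by-one at limits": ["daily limit boundary", "pagination edges"],
    "unescaped input": ["search field", "receipt notes"],
    "stale cache after update": ["account balance display"],
}

def plan_risk_directed_tests(bug_classes: dict[str, list[str]]) -> list[str]:
    """Expand each known bug class into one charter per target area."""
    return [
        f"Verify '{bug}' is absent in: {area}"
        for bug, areas in bug_classes.items()
        for area in areas
    ]

for charter in plan_risk_directed_tests(historical_bug_classes):
    print(charter)
```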

And I think that is a shift that we need to take on big time in the industry. I don’t know how we get there with the stakeholders who are tied to those requirements, especially when we do software by contract. Because we’re negotiating around the product features and the requirements, or the stories, or whatever we’re encapsulating as the functionality that’s going to be built for us. And we often don’t know how else to talk about it.

Dawn:  So I encourage people to have conversations about acceptance criteria, which, in my mind, are a measurement of the quality factors that matter. And that level of risk is in there. It shows up as the acceptable risk, or a definition of minimum viable product, or something like that. There are lots of ways to identify it if we’re looking for it. And when we’re onto it, then we can ask the probing questions we need answered, so we can translate those things to tests. Or say, “Testing is not going to help you mitigate that risk. Go find another channel for that.” So I try to have it be ever-present in discussions and try to help teams shift in that direction.

Noel:  That’s really cool. So, last question, and not to try and put you on the spot, but I’m really curious to get your opinion on one of the “modern testing principles.” For anyone listening who isn’t aware of those, they were written by Alan Page and are causing some confusion in the testing community because there’s… some harsh-sounding ones in there. One of them suggests that testers, “Embrace reducing the number of testers or even eliminating the need for a dedicated testing specialist.”

There was someone at the CAST 2018 testing conference who, during the lightning talks, tried to bring up what everyone thought about the modern testing principles with about a minute to go at the very end. There was just no time. And I felt so bad, because that could have been the theme of the whole week. It’s such a big topic.

Another one of the modern testing principles that I thought of during your session this morning is, “Believing that the customer is the only one capable to judge and evaluate the quality of a product.” And like all these principles, I have some issues with them. Some parts of them I like, and some I don’t. But I think it’s really fascinating, because we might already be believing in that last one whether we know it or not. If we’re basing “success” on things like the amount of software sold, fewer support tickets being filed, customer references, customer retention, etc., we’re allowing those kinds of numbers to evaluate whether we’re doing a good job—and those numbers are all coming from the customer. It’s not just us saying, “This software is great,” and then not having the customer data to back it up. So are we already kind of looking at customers as being, I don’t know if it’s the only one, but capable of being the evaluators of our quality?

Dawn:  Well, this is a big topic, and in almost every class I teach I throw out the question, “What is testing?” And then follow it up with, “What is quality?” What’s your definition? People stumble all over the place trying to come up with a definition of testing. And it’s like, that’s your job. If you can’t figure out what testing is, then we’ve got problems.

But when it comes to quality, people just kind of stare off into the distance, like, “I don’t know how to describe quality.” It’s like the Supreme Court talking about pornography: “I don’t know how to describe it, but I’ll know it when I see it.” Right? I feel like it’s an intangible in some way. And so, I did some research on it, because I wanted to have a good discussion about it in class and talk about the vagueness of that quality space.

And so, I typically shift the question a little bit and say, “All right, can you tell me what is your opinion about a quality car? What’s a feature of a quality car? Or how would you evaluate a quality ice cream or something like that?” So if we started talking about cars, some people will say fuel efficiency. Some people will say safety ratings, or crash ratings and things like that. And I’ll say, “That’s beautiful. That’s all really good stuff.” And I care about a sassy look and vroom, zero to 60. And actually, the two most important things I care about are comfort because I have a herniated disc in my neck and some lower back issues, and I take lots of road trips so I need comfort.

Just imagine that I was going to write a set of requirements for a new car that I wanted somebody else to acquire for me, and I was going to give them $40,000 to do it. Could I describe the quality factors that matter to me well enough for them to drive the car of my dreams up to my house, and for me to say, “Yep, you nailed it”? I doubt it. I just doubt it. And yet, we do this in software projects and think it’s all going to work out.

Part of the research I did was having people describe quality, especially related to our industry in software. I found Phil Crosby’s definition of quality, which is, “Quality is conformance to the requirements.” And I’m like, “Huh. Okay. Not exactly sure what I think of that, but alrighty.” Maybe that’s verification. Maybe that’s, “We did what we said we were going to do.” So maybe that’s also a notion of internal quality. We have a coding standard. We follow Windows standards. We follow database standards or architectures, or whatever. So if you have internal standards that you’re meeting, that is a notion of internal quality. I’m not sure the customer cares about it, but if we were going to get a sticker on our box or a banner on our website that says, “We comply with this,” and that was going to mean something to some segment of our customers, then that becomes an external notion of quality, because it’s a value-add to them. But it’s still an internal notion of quality to the software.

The other one I really, really like is Joe Juran’s. At one point, he was the head of the American Society for Quality. He’s also responsible for bringing Japanese manufacturing techniques like Kaizen and Kanban to the car manufacturing industry. His definition of quality is “fitness for use.” Fitness for use, and fit for purpose, goes more along the lines of that principle you were talking about: that the customer defines it. So, user needs and expectations. If they’re not met, we don’t have a thing.

So I believe those two things are two sides of a testing coin: we need to do verification and validation, or we don’t really know what we’ve got. And so, I try to weave that into every conversation about quality, with a caveat. Jerry Weinberg says, “Quality is value to some person.” So who is that person? Is it your stakeholder? Is it your sponsor? Is it the person who is ponying up the money for this project, or this endeavor? Is it Gartner or some other industry-leading organization that’s going to evaluate you and leave an impression in the market? Is it your stakeholders? Is it your end users?

I think that quality is in the eye of the beholder, and unfortunately there’s not just one in the mix. But there is one that, I think, is ultimately important. If we don’t serve the customer, if we don’t meet their basic needs, then I think we have a big miss. And I think we have a big risk. So I always want to think about users and usage and get them into the conversation. I want them to evaluate early and frequently in a spiral, iterative way. I want to check in, because I believe if we go for all internal notions of quality or we follow our beliefs, then we will build something that serves us and doesn’t serve them.

There’s a book out there called Why Software Sucks, written by a guy named David Platt, and it’s delightful, in my opinion. But two thirds of the book is a rant at the budding developers he teaches at, I think, Harvard Extension in the Boston area. He basically says, “Your users are not you.” Don’t write software for you, because if you write software for you, you might miss the mark in terms of what’s important to them. So be careful about what you do. And if you don’t know their job, and you don’t know their function, and you’ve never met a user, I’d say you have an interesting gap in your knowledge and a potential risk that could matter.
