Your transformation toolkit

Advance your enterprise testing strategy with our transformation toolkit.

Learn more


Continuous Testing Live: Jenny Bramble on Risk-based Testing

On this week’s episode of Continuous Testing Live, Jenny Bramble sits down with us to discuss what risk-based testing looks like, why testers should consider a risk-based approach, and how to build buy-in and support from the rest of the business.

A full transcript of this episode is provided below.

Stay up to date on all the new episodes of the Continuous Testing Live podcast—subscribe today on iTunes, Google Play, or SoundCloud!

Noel: I saw you speak at CAST 2018, and I noted where you said, “Defining terms allows us to communicate across cross-functional teams.” I know that the audience of this show is not only software testers, so if you don’t mind, let’s go ahead and define “risk-based testing” so that people know what we’re talking about and what we’re not.

Jenny: Awesome. Risk-based testing is testing for the risks of an application, use case, or feature. You’re looking to say, “Hey, what’s the riskiest thing?” That can be a little bit weird. Basically, what is ‘risk’? Everyone’s got a different view. Everyone’s got this idea that they know what risk is, and a lot of the time that’s not really the case. We all have a different idea.

For example, I like my coffee sweet. So does my sister. When I say ‘sweet’, I mean a half a cup of sugar, and she’s a teaspoon kind of girl. We had to sit down together and define what sweet meant so that she could make a cup of coffee that I wouldn’t reject.

Risk-based testing is similar. First we need to define risk, so that we can define what risk-based testing is. When I define risk, it’s the impact of failure and the probability that failure will happen. Both of these are kind of educated guesses as to what’s going to go on with your application.
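Jenny’s definition lends itself to a very small calculation. Here’s a minimal sketch in Python; the 1–5 scales and multiplying the two numbers together are common conventions I’m assuming for illustration, not something prescribed in the interview:

```python
# Sketch of Jenny's risk definition: risk is the impact of a failure
# combined with the probability that the failure will happen.
# Both inputs are educated guesses; the 1-5 scale and the
# multiplication are illustrative conventions, not a fixed rule.

def risk_score(impact: int, probability: int) -> int:
    """Score one risk item from two educated guesses on a 1-5 scale."""
    if not (1 <= impact <= 5 and 1 <= probability <= 5):
        raise ValueError("impact and probability must be on a 1-5 scale")
    return impact * probability

# Example: a checkout failure would be severe (impact 5) but is
# considered unlikely (probability 2).
print(risk_score(impact=5, probability=2))  # 10
```

The point of the sketch is that neither number alone tells you where to test: a severe-but-rare failure and a mild-but-constant one can land at the same score.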

Noel: We talk about requirements changing all the time, and how Agile welcomes changing requirements. How often should you expect risk to change? Are there ever any cases where it changes, and you need to make sure that everyone is informed that there’s been a change in what we’re calling risk and what we’re not?

Jenny: To address the last part first, the definition of risk isn’t really going to change. You’re always going to have your impact, you’re always going to have your probability. What’s going to change is the values of those numbers. The impact may be higher, the probability may be higher, or they may both be lower. What you’re looking for there is to consistently keep up with what these things mean. I like to say, “Hey, how do we feel about this? Do we think the impacts are still the same? Do we think that the risks are still the same, or has there been a change we should discuss?”

Noel: So, I’m curious, how did you get into valuing risk this much? How did you come to the understanding that it would benefit you or your customers, or whatever that motivation was? I know that my boss talks about risk-based testing a lot, but when you look across a conference’s agenda, you have X number of sessions that are on automation, some that are on exploratory testing, now there’s a bunch of AI, but I don’t know that I’ve seen more than one that had “risk-based testing” in the title. It may come up in the abstract or in the presentation at some point, but how did you get into it enough to propose that this topic be discussed at multiple conferences?

Jenny: Classic Jenny answer on this one: I wanted to give a presentation, and I looked around at different topics and said, “Okay, what interests me?” The company I was at at the time was in digital marketing, and we were having a lot of strange failures in our sprints. A lot of weird bugs would come up, and a lot of that was because we were being told what to test instead of deciding what to test.

So, I started looking into risk-based testing as a way to help the team decide what to test. As I started looking at it, I’m like, “Wow. This is kind of cool.” I did more reading, I started reading blog posts, I read a bunch of books and all this stuff, and thought, “Wow. This could really be helpful.”

I like presenting it at conferences because it’s not necessarily a methodology. It’s not a tool. It’s not tied to automated or manual testing. It’s a thought process. It’s a way to think, it’s a way to communicate, and I feel QA is very emotional. For me, especially. I’m an emotional creature, and this helps me express my emotions in a way that makes sense to the application. Instead of saying, “I feel bad about this.” I can say, “This feels risky,” and they know what I’m talking about.

Noel: That’s really great.

Jenny: I’m super-good at sound bites.

Noel: I’m kind of the same way, in that I think of myself as one of the more emotional people, in that things get to me, and things make me either worry or wonder. But that doesn’t always translate at a business level. I think it’s really interesting that risk-based testing is not as prominent on conference agendas as it could be, and I think that that’s probably going to change. Whether you believe testing is changing or evolving, whatever it is, business is starting to recognize testing as being a lot more closely aligned with what they’re tasked with than maybe they did in the past. And, seeing that testers have the ability to be focused on business risk, and still able to care about “people” and emotional things, is really nice to see. You can actually keep being “you,” but still have a much bigger impact to the business just purely from being recognized as capable of contributing to that.

Jenny: I feel like the emotional component of a team is really important, and a lot of times whenever you look at a team, you talk about, “Okay, I need a Java guy. I need a front-end girl. I gotta get all these technical things to build my team properly.”

But the emotional component is so important. You have to be able to be the user, to appeal to the user, to make software that will appeal to them. That’s one of the things I love about the place I’m at now, WillowTree: we try and make delightful apps. We recognize that emotional component and we say, “This is important. This is what sets us apart.”

You’ll get to a point, a lot of times, where people don’t understand it. They’re like, “Oh, that’s just Jenny being Jenny.” But when you start to show the value of the emotional component, the value of emotion in software testing, people are going to start recognizing that it is valuable, that it is something that’s irreplaceable on a team.

Noel: I love it. Something else from your session, I believe it was even in the title: even once we’ve convinced everyone that risk-based testing is important and worthwhile, and that what testers can provide is this assessment of risk, an evaluation of what’s new that’s going to cause risk versus what’s always been around, you still say that you can’t test everything. I’m sure that makes management somewhat uneasy, especially if they believed that you can. Is there any risk for a tester who thinks they’re viewed that way, that they can and do test everything, even when they don’t? And how do you speak to management or the executive level, whoever it may be, to say that not only can’t you test everything, but that that’s okay?

Jenny: That is a hard one. When I first started at my new job, we had a sign on the wall that said “Test everything,” and I was like, “I literally have a presentation that says we can’t do that. Can we have a conversation?”

This is another place where communication is important because when they said test everything, they didn’t mean every little teeny, itty bitty thing. They meant, “Make my client happy.”

A lot of times, when someone says test everything, that’s not what they mean. They mean meet all the requirements. They mean meet all my expectations. They mean make sure that the application does the thing and does it well. When you start to have the conversation around test everything, you’re not really having a conversation where you say “I cannot do that,” because that’s very negative and people will immediately get defensive and will be like, “That’s your job.”

Instead, you start talking about setting expectations. You start saying, “What do you expect from the app? How can I help you meet those expectations?” That’s what I said to my team. I’m like, “I can’t test everything. You’re setting an impossible goal. Can we instead say ensure the application meets all the expectations of our clients?” They’re like, “Oh, that’s nice. How do you spell expectations?”

That was way better and way more productive for me than saying “I cannot.” That’s saying, “I can do this, is that what you’re looking for?” That was pretty awesome for me.

Noel: That’s really good advice. Along the same lines of not being able to test everything, I know that you talk about risk factors that you can control or test against, and ones that you can’t. What are some of those things that are outside of your control, or outside of your scope as a tester, that organizations just have to live with? Things that are untestable, or where shining a spotlight on them just can’t be done. What’s a risk that might always be impossible to detect?

Jenny: I gotta go with hurricanes on that one. There are a lot of things that we don’t have control over that we can’t really mitigate. I say hurricanes and I laugh, but we just had Hurricane Florence down in North Carolina, and they were literally routing people around the state. We couldn’t get stuff, and that’s super risky. If I were at a manufacturing company, that would mean I can’t get my product.

For testers, we have a lot of stuff we can’t really control that the business has to sort of just deal with, and those can be things like timelines, money, physical limitations, technical limitations. I had a client once ask us, “Hey, can you add these six pieces of information to the app? We want them displayed.” I’m like, “Okay, yeah. Six pieces, sure.”

Well, it turns out the API we were using only gave us four pieces of information, and the client wasn’t going to change it. So, we could say “Yes, we’ll do that,” but then our risk is their API, and that’s something we can’t control. You can have that kind of risk you can’t control, the natural disaster one, sicknesses are absolutely something, and human limitations and the limitations of time. People burn out. People burn out hard if you work them too much, and the only way to control that is to work less, to be able to do less, and sometimes that’s not an option.

That’s sort of a rambly answer.

Noel: No, no, that was good. That was good. Going back to the positives, not just the things you can’t test for, but the things that you can, and whose risk levels you can evaluate, I know you talk about a risk matrix that can be developed. It’s probably too big a topic for a single short podcast, but, kind of like how you define risk early on, or how you define a set of requirements or expectations, who gets to contribute to what goes into that matrix? Who needs to be involved early on, and which teams can that matrix then be shown to, so they can get the information from it that matters to them and their expectations?

Jenny: The easy answer is “everyone.”

Noel: Ha, okay.

Jenny: The hard answer is “not everyone.” Whenever you’re starting to develop a risk matrix, you need enough people to get a clear picture of the application but not so many people that the picture becomes muddy.

Noel: Right.

Jenny: Imagine you’re painting something, you’re making a painting. You need a certain number of colors to make the painting look beautiful and vibrant and lively. But when you start having too many colors, you start getting too much, and you can’t take it all in. Unless you’re a modern artist; then, good job, y’all. But you don’t want to muddy it up, and that’s a lot of times what can happen when you get 18 people in a room. You’re going to have a discussion about one item’s impact of failure for 20, 40 minutes.

I like to pull in as few people as possible. I’m generally going to want someone to represent the business, someone to represent development, and someone to represent the customer. Those three people will generally give you enough of a picture to be able to start a risk matrix.

I think the last one I ran, I actually had five QA people, just QA people, and that was a little bit weird because we didn’t have a full picture of the application. What we ended up doing was we came up with something with those five people, and then presented it to development and business and said, “Hey, what do you all think?” That made it to a point where we could clear it up. We could say, “Oh, wow. Now we have a better picture because we have a starting place.”
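As a rough illustration of what a starting risk matrix might look like in code, here’s a sketch where each feature gets impact/probability estimates from a few stakeholder views (business, development, customer), averaged into one number per feature. The feature names, 1–5 scales, and averaging are all illustrative assumptions, not Jenny’s or WillowTree’s actual process:

```python
# Illustrative risk matrix: each feature gets (impact, probability)
# guesses on a 1-5 scale from a few stakeholder views; averaging them
# into one number per feature is a sketch, not a prescribed method.
from statistics import mean

estimates = {
    # feature: [(impact, probability), ...] -- one pair per stakeholder view
    "checkout": [(5, 2), (5, 3), (4, 2)],
    "search":   [(3, 4), (2, 4), (3, 3)],
    "profile":  [(2, 1), (1, 2), (2, 2)],
}

def matrix_row(views):
    """Collapse several views into one impact, probability, and score."""
    impact = mean(i for i, _ in views)
    probability = mean(p for _, p in views)
    return impact, probability, impact * probability

# Rank features by combined risk score, highest first.
ranked = sorted(estimates, key=lambda f: matrix_row(estimates[f])[2],
                reverse=True)
for feature in ranked:
    impact, prob, score = matrix_row(estimates[feature])
    print(f"{feature}: impact={impact:.1f} "
          f"probability={prob:.1f} score={score:.1f}")
```

This mirrors the conversation above: the matrix starts from a handful of views, produces one number per item, and that ranking becomes the starting place other groups can then react to and refine.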

Noel: I was curious, when you create one of those, if it’s the kind of thing where you’ve got a different version for each party that’s going to get information from it, or is it one matrix where you have to put all the information into it? How do they pick out what’s for them and which numbers are for other people? Or is it as simple as a spreadsheet where you just click on the tab that’s addressed to you?

Jenny: Oh, I think it’s one risk matrix for everyone because everyone sees risk differently. But, there’s one number. There’s one impact, there’s one probability. I’m going to see probabilities from a technical standpoint. I’m going to say, “Oh, the technical limitation on that is so bad.” The business is going to say, “Well if we don’t do it, we’re going to lose one million dollars.” That’s two different views of one risk.

Noel: Right.

Jenny: As we get more views, we’ll start to refine that and we’ll start to say, “Okay, this is the number that brings all of our views, all of our pieces, all of our strokes of paint into one painting.”

Noel: Lastly, you said that as you started to think about this, and that this was really cool, you went online and read some stuff about it. Do you have any advice for someone who hears this podcast or who goes to your session, goes back home, wants to start something like this up, where they can go and kind of get some more visual examples of what they can start to use to build something like this for themselves?

Jenny: Follow me on Twitter. I’ve got a couple of blog posts coming out. I think I’ve got one already up on TechBeacon that has a couple of these visual examples, and I’m going to have a paper coming out of the Pacific Northwest Software Quality Conference that will have a few more examples. I’m always willing to chat with people, always willing to give advice if there’s something you want to talk about.

Other great resources… I have a list of them somewhere. Pretty much anything out there is going to give you an idea of risk-based testing. You’re going to see a bunch of matrices, and you’re going to see something that works for you. A lot of times, I like to say that if it works for you, that’s right, because you don’t have to do it exactly like I do it.

Noel: Right.

Jenny: You don’t have to be exactly like me. You don’t have to be exactly like anyone, honestly. You should do what works for your team, your situation, and your particular life experiences. If that’s what I do, that’s awesome. I’d love to help you with it. If that’s something slightly different, you do you.