Software testing explained in 14 minutes by Tricentis Distinguished Evangelist Ingo Philipp:
Here’s the full transcript.
Software Development is Like Wrestling a Pig
Software development is a lot like wrestling in the mud with a pig: after a couple of hours, you realize that the pig likes it.
To make it clear, the software is the pig. And testing? Testing is all about washing that pig. As we all know, that can be messy, really messy. It has no rules. There is no clear beginning, no clear middle, and no clear end. It’s kind of a pain in the ass (sometimes) because when you’re done, you’re not sure if the pig is really clean or even why you were washing that pig in the first place. Well, that might be the reason why so many people have severe problems properly defining testing.
Types of Software Risks
Next let’s see how we usually wash that pig, how we test software. As we all know, there are a lot of risks in software. By risk, I just mean anything that can potentially cause a loss of something of value. This can include loss of customer loyalty, financial loss, reputation damage / brand erosion, and much (much) more.
These risks are then actualized when we release the software, and (in turn) these risks are realized as issues. These issues then usually manifest themselves in many different flavors to our end-users when they use the software in their daily business. These issues are, for example, functional issues, performance issues, reliability issues, scalability issues, security issues, safety issues, usability issues – among many more.
So, once we have released the software our end-users (usually) encounter loads of issues related to these software quality attributes. So now, the big question is: What do we need to do to reveal these issues before they slip into our production environments?
Exposing Software Risks
Well, first imagine that this black rectangle represents your software product. It represents the entire functional and non-functional spectrum of your software product. So, that’s all there is – and that’s all you could possibly know about your software product.
Now, the green rectangle represents all you currently know about your software product. And (no big surprise) that rectangle is much smaller than the black one because I am convinced that it is virtually impossible to always know everything about your software products down to the very last detail.
Next, the red rectangle represents everything you repeatedly check (e.g. by test automation). Again, you will never be able to check everything you know due to the time, resource and budget constraints you are confronted with every single day.
Well, now imagine you could possibly do that. Imagine you could check everything you know at a given time. Would that make sense at all? In my opinion, no! When you focus solely on codifying (automating) all your knowledge, how would you then create new knowledge? Well, you couldn’t, and you shouldn’t even go for that. You constantly need to create new knowledge about your software, since the software itself constantly grows – and you need to close the gap between what you know and what you don’t know. By learning about your software product, you can reveal problems that are outside your checking regime. That is what testing (at its core) is really about.
Testing is all about closing this knowledge gap by exploring and by checking. Cem Kaner put it this way. He said that a test is just a question that you ask the software. The purpose of running a test is to gain information, and so testing is, always has been, and always will be a search for information. It’s a search for the purpose of discovery. And that’s absolutely fantastic because it leaves so much room for our testing expertise!
So, the core question is how to balance checking and exploring? How to balance formal testing (checking) and exploratory testing (exploring)?
Formal Testing vs Exploratory Testing
So, let’s compare formal testing and exploratory testing to make sure that we are speaking the same language. Formal testing (testing that is fully specified) is a very mechanical approach to testing, especially when it comes to test automation. That’s because automation is just doing what automation does: It just processes pre-defined data through your application in a pre-defined set of steps. That’s all test automation tools are doing.
On the other side of that fence, exploratory testing is more like an intelligent approach to testing. It’s all about learning the software, designing tests, executing tests, and interpreting the results at the same time. You do all of this simultaneously, and in doing so your next test is always influenced by the results of your last test. That’s the reason why we call this approach to testing exploratory testing. This, in turn, implies that the purpose of formal tests is to monitor known risks: confirming what we already know. On the other hand, the purpose of exploratory testing is to analyze potential risks.
With exploratory testing, we should focus on things that we don’t know, we should focus on illusions we are holding true without any empirical evidence. This then implies that any exploratory test has a high information value associated with it because we have learned something new about the application. That’s simply not the case with formal tests.
You might be thinking: That’s easy – so let’s just prioritize all of our tests according to their information value. Well, that’s one way you could do it, but it’s not the only way. Beyond the shadow of a doubt, you won’t be able to achieve comprehensive risk coverage with exploratory testing alone. That is what you can achieve with formal tests.
Now, the bottom line here is that formal tests are change detectors. Why, you may ask? Well, imagine you add some new functionality on top of your big fat software product, and you want to know if that has some bad impact on your existing functionality. What do you do in that case? Well, you most likely run your automated regression test set. That’s it. No more, no less. On the other hand, exploratory tests are more like problem detectors. It’s all about exploring the unknown, the invisible, to avoid the unthinkable happening to the anonymous.
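The idea of formal tests as change detectors can be sketched in a few lines of code. This is a minimal illustration, not any particular tool’s implementation; the `slugify` function and the contents of the regression set are hypothetical examples:

```python
# A minimal sketch of formal tests as "change detectors": a fixed suite of
# pre-defined checks re-run after every change. All names are hypothetical.

def slugify(title: str) -> str:
    """Existing functionality under regression: turn a title into a URL slug."""
    return title.strip().lower().replace(" ", "-")

# The regression set: pre-defined inputs paired with pre-defined expectations.
REGRESSION_SET = [
    ("Hello World", "hello-world"),
    ("  Trim Me  ", "trim-me"),
    ("already-sluggy", "already-sluggy"),
]

def run_regression_set() -> bool:
    """Processes pre-defined data through pre-defined steps; any deviation
    from a recorded expectation signals that a change broke something."""
    return all(slugify(inp) == expected for inp, expected in REGRESSION_SET)

print("no change detected" if run_regression_set() else "change detected!")
```

Note what this suite can and cannot do: it will flag a regression in the three behaviors it encodes, but it will never surface a problem it was not already prepared to detect – which is exactly the limitation exploratory testing compensates for.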
What is Checking?
From this, one can easily infer that formal testing is all about checking. This just means that with formal tests, you evaluate your product by applying algorithmic decision rules to specific observations of your product. So, formal tests just provide an answer to the question: “Does this assertion pass or fail?” A formal test provides a binary result – true or false, yes or no, 0 or 1. That’s it. It’s a check, and this check is machine decidable, and so it becomes obvious that this just requires stupid processing.
So, checking is just something that we do with the motivation of confirming existing beliefs. Why? Well, when we believe something to be true, we verify our belief by checking. We check when we’ve made a change to the code and we want to make sure that everything that worked before still works. So, when we have an assumption that’s important, we check to make sure that the assumption holds.
This can be done by machines, or by humans. When it’s done by machines, we call it machine checking. When it’s done by humans, we call it human checking (or formal manual testing). So, human checking (from this perspective) is just stupidly banging on the keyboard to process pre-defined data through your application in pre-designed steps. Therefore, it should be crystal-clear that we need to reduce human checking to its absolute minimum. It dramatically slows down the entire delivery process. Moreover, there’s little fun or creativity in it.
What is Exploratory Testing?
In contrast to checking, exploratory testing is a way of thinking much more than it is a body of mechanical checks. It’s all about figuring out if there is a problem in your software rather than asking if certain assertions pass or fail. It requires a variety of human skills, such as questioning, study, modeling, and inference – among many (many) more.
So, exploratory testing means evaluating a product by learning it through exploration and experimentation in order to close the gap between what we know and what we don’t know to reveal problems in our software …
Testing vs the Product Idea, Product Description, and Actual Product
…and in doing all that, bear in mind that when we test our software, we don’t just compare the actual product to its specification. We also compare the product to its product idea. So, we don’t just focus exclusively on verification; we also focus on validation. We don’t just ask if we are building the product right; we also ask if we are building the right product (which means that we also evaluate whether the product meets the end user’s needs).
The reason why we validate the software is that bugs usually start their life as errors in our minds. However, most testers never look for bugs there. Many of these bugs are predictable since they’re caused by cognitive biases. So, we already hunt for these bugs during sprint planning, backlog refinement, and every time we talk to developers, product owners, operations, or business experts.
The Agile Testing Equation
Don’t get me wrong. This doesn’t mean that we value exploring over checking. That would be too simple. It’s not just about the one or the other; it’s about both, and it’s about both at the same time.
We believe that something is thoroughly tested when it has been checked by efficient formal test automation (based on a solid test case design) and when it has been explored by the richness of intellect of human beings (by testers). That’s our agile testing equation. It means that testing contains checking in the same way it contains exploring. This equation can be considered a conservation law for these two testing flavors. Given that, our mission is to find the right balance between checking and exploring – since that is what testing is.
Software Testing Lesson #1
So, having said all this let me now briefly share the top lessons we have learned. Number one, James Bach. He nailed it: he boiled testing down to its essence by saying that testing is not about creating test cases. It’s about performing experiments, and this implies that testing (and thus exploratory testing) is any testing that machines can’t yet do (since machines just check – they do not think). That’s great for software testers, since this implies that testing is a human activity that requires a great deal of skill to do well…
Software Testing Lesson #2
…since a test itself doesn’t find the bug. A human finds the bug, and the test (as well as the test tools, if applicable) just plays a role in helping the human to find the bug. So, a test case is just an extension of a test idea. What matters most is the test idea, not the test case. This means that the quality of testing is increased by the quality of your test ideas, not by the number of the artifacts you create…
Software Testing Lesson #3
…and this then implies that focusing solely on checking is simply insane. Automated checks miss the same obvious things every single time. No matter how frequently you run them, those checks still won’t alert you to issues that they’re not already prepared to detect.
Software Testing Lesson #4
Moreover, we don’t need humans doing something a machine can do. We want human testers doing exploratory testing because exploratory testing is a creative endeavor in which human testers explore the behavior of the system in an attempt to characterize its behaviors (both documented and undocumented).
Software Testing Lesson #5
If you want to become better at testing, then don’t hire somebody who is better at coding. I fully agree with Steve Watson, who once said: Better to have a person who can look at a requirement and work out what needs to be tested, than a person who can code but has no clue how to test something.
Software Testing Lesson #6
So, testing as a human activity has a future. It will always have a future [until some general purpose AI (like Skynet) takes over the world]. Testing is not so much a thing you do; rather, it’s much more a way that you think.
If you’re thinking like this…
…then you might want to rethink your definition of testing.