
Podcast: Wolfgang Platz on Enterprise Continuous Testing

Tricentis Founder and CPO, Wolfgang Platz, recently joined Mitchell Ashley on the DevOps Chat Podcast to talk about all things Continuous Testing — including Wolfgang’s newly released book, “Enterprise Continuous Testing.”

The discussion explores the need to expand dev-testing with higher-level integration, system, and end-to-end user experience testing, which shared testing and operations teams are often well positioned to perform.

The following are a few excerpts from the discussion. The complete transcript is available on DevOps.com.

Test Automation vs Continuous Testing

Ashley:  I know continuous testing, CI/CD—all of that stuff is certainly at the heart and center of DevOps and everything kind of starts there. Automated testing is, of course, a key part of that. So, certainly, there is a need for this topic. Why did you decide to write a book?

Platz: So, what I had to acknowledge in talking to customers and coming up with this product was that software testing is treated like a stepchild in a lot of organizations. And I mean, everybody is aware of the relevance of development, don’t get me wrong, and everybody is aware of the good things that IT can bring to us.

But software testing is kind of living in the shadows, and that was one experience. And when I then talked to customers, I found out that their ask, their intuitive need, is always kind of the same. They see that software testing is there, they accept to some degree that there is a need for software testing, but all of a sudden, they say, “Well, I mean, it’s a lot of effort, so we want to have this automated.”

Ashley: Mm-hmm.

Platz: And the first reaction to dealing with software testing in a more professional way is, “Let’s automate that thing.” And automation is a super important aspect of improving software testing, don’t get me wrong. However, bringing large enterprise customers up to speed on software testing is actually more like a transformation journey. It is a change agenda you need to go through.

Because it’s not just about automation. A friend of mine said automation is just doing the mess for less, right?

Ashley: Mm-hmm.

Platz: But there is so much more potential in optimizing software testing when you take a more comprehensive look at it, when you have a clear understanding of the business value of each and every functionality you want to cover with tests. What is the concrete need for test cases in order to address the business risk that may be within certain functionality, and then what is the right strategy for executing these tests? Are you going to run them through user interfaces? Are you going to run them through APIs? Are you going to run them with the use of decoupling, which service virtualization enables you to do?

Only if you have that comprehensive perspective can you make continuous testing really work. Otherwise, it’s just going to be little things, little improvements that certainly help, but don’t tap the full power of a better software testing world, and that is why I’ve come up with this book. I think it is overdue to make people aware: “Wait a minute, it’s not just jumping aboard the automation train. That is not going to be the full story.” And you’re going to leave a lot of stuff behind if you just think that way. Does that make sense, Mitch?

Ashley: It totally does, and I think if I kind of pull a couple things out of what you described, one is—automation is part of it, but continuous is another aspect of changing the whole paradigm of how you think about testing. Because I remember it wasn’t that long ago when you had a QA team and a development team.

Platz: Correct.

How Quality Roles are Evolving

Ashley: And of course, that created automatic tension, because the QA team tried to find every problem and everything that had to be fixed before release, and change control meetings and all that stuff. It was highly manual, even if some of the testing was scripted. But now, in this world of continuous integration and continuous testing that’s automated, you know, it’s less an argument between teams and people and that friction, and more about how do we make sure we’re automating the right things? How do we make sure that we escalate when problems need to be fixed, when builds break and tests fail, and make sure that we’re testing the right kind of functionality?

So, it’s engaged the developers—and tell me if you agree with this—it’s really engaged the developers much more heavily in creating and embracing that testing rather than it being a separate function someone else does. True?

Platz: Yeah, absolutely. I mean, you pointed it out already, Mitch. In former days, it was Dev, Test, and Ops as different teams contributing to the software development life cycle. And what happened is that these big test centers of excellence, as we call them, emerged, where you would have a large number of manual testers, usually concentrated with some management capability, and maybe even a little bit of automation.

These large TCOEs, these test centers of excellence, have either vanished or everybody is questioning whether they should keep going with this model. And rightly so. Because what we now see, with the need for speed in development, is that the cooperation between Dev and Test needs to be so much closer, so much more fluent.

So, what we see is that these test centers of excellence are torn apart. And what we call out as a recommendation is to have a very slim, so to say, best practice team, or what we call a digital TCOE, that on one hand keeps this tight connection between Dev and Test, but on the other hand provides some guidance on how to do testing in the best way.

Ashley: Mm-hmm.

Platz: And we see a lot of companies switching into Dev Testers where you have this hybrid form of developers and testers established directly within the agile teams.

Ashley: Mm-hmm.

Platz: And I think that’s certainly the right way to go. However, on the other hand, we want to be very much aware that complex system landscapes require higher levels of integration testing and some flavor of an E2E user acceptance test, which tends to be forgotten now when people move into an agile and DevOps practice.

Shift Left—and Right

Platz: So, I think—yes, we see the TCOEs erode. Their testers moved into Dev tester roles, which makes a lot of sense, but please, guys, be aware that the need for higher levels of testing has not vanished overnight, and someone needs to take care of that. What we see is that these needs tend to shift towards the operations team, towards a kind of shared services team in large enterprises. It is what I would call a counter-stream: alongside the shift left pushing testers into development, we now see a shift right movement as well, which makes sure that some higher levels of testing are still covered in these shared services organizations.

Ashley: Yeah, it’s interesting you pointed that out that way, because, you know, you do hear a little bit about shift right, and you think about the world that we’re operating in, and it’s much more complex, because we’re operating in cloud environments, maybe doing cloud native, maybe not, but we’re working maybe in multiclouds, private data centers, private clouds, et cetera.

All these combinations of environments—and, you know, some apps might be in one place, some might be in multiple. There’s a lot of permutations of that environment, and an Operations team is going to say, “How do I know this stuff is going to work, and when it breaks, how do I know you’re going to be able to fix it quickly? Diagnose it, test it, and apply the fix quickly.”

Platz: Absolutely.

Progression Testing vs. Regression Testing

Ashley: And that automation is also the automation of—yes, unit and functional tests, but also getting into performance regression testing that may be specific to that environment, maybe even specific to an iteration release, or backing up and reapplying changes or backing out changes.

So, in this complex environment, relying on manual processes seems like an almost impossible task. You have to have automation, you have to have a philosophy about doing testing. I hate to wax on here. Maybe I’m singing your song here, [Laughter] but that seems to be the environment we live in, and we have to figure out the best ways to do it. Am I on track here?

Platz: No, no—that’s cool. And I’m glad that you mentioned regression, because what we have to acknowledge is that the entire mindset of agile development, by definition, is progression rather than regression. And that is good, because agile development is about pushing out new releases, new capabilities, new features, new functionality faster than ever.

But what that mindset means is that all the agile Dev teams, and the Dev testers within the agile teams, also have this progressive mindset. It means that yes, if you work with them, they will accept the need for unit tests, and some of them will create great unit tests. We have seen this within our customer base. But at a certain point in time, when this shifts towards regression, you might find out that setting up unit tests once is one thing, but keeping unit tests up and running is a different game. And what we see is that as soon as you shift from progression into regression, you need people with a different mindset who really accept regression as the key aspect of their software testing behavior. If you don’t do that, you are, over time, going to run into issues in complex system landscapes.

And we see that. Actually, I call it the evaporation of unit tests over time. We did a great survey with some Swiss banks, and we found out that setting up unit tests is something you can do rather easily. But over time, with more releases, you’re going to see that their coverage, their potential, their power erodes. And people then rely more on the higher levels of integration testing to also catch these kinds of errors going forward, which makes a lot of sense.

But I just want to make you aware—don’t see agile development and Dev testers as the sole instance of your software testing. If you do that, you’re going to run into issues in a complex, multi-system landscape with a high extent of regression, a heavy backbone, and a true enterprise system landscape.

Ashley: So, Wolfgang, are you saying that—you know, we used to say that you had to have separate QA teams, so that people who didn’t create the software could test it more fully, because, you know, testing your own software is hard; you assume things.

Are you saying that, even in this automated, continuous testing world, we still need folks who bring that sort of systematic, thorough QA approach to testing, even though we’re automating it, and that we shouldn’t strictly rely on developers’ self-created unit and system tests? Is that what you’re saying?

Platz: Exactly. That’s what I want to get to.

About Wolfgang’s Book, Enterprise Continuous Testing

Ashley: Good. Well, tell us a little bit about your book. You know, there are a lot of ways to approach a technical book about a technical subject. Is this a prescriptive “how to”? Is this kind of a strategy guide to lay out how you put together a full program of continuous enterprise testing in agile and DevOps? Tell us a little bit about the approach of how you went about this book.

Platz: The intent of the book is to provide a comprehensive overview of what you need to think about when you go for continuous testing. As I pointed out at the very beginning of our conversation, just going for automation of your software tests is not going to be the solution. You need to take a much more comprehensive look at the subject.

And so, what the book really is going to do is introduce the perspective of value/risk-based testing to you, which is the foundation. Before we do a software test, we want to make sure that it’s worthwhile doing the test, right? If it’s just a functionality with very, very minor relevance to the business, you might be fine with just one test case, you know what I mean?

Ashley: Mm-hmm.

Platz: However, if it’s an absolutely mission-critical capability of your software, then you want to make sure that all the different flavors of use cases and all the different procedures that may be within this specific use case are really covered, otherwise it’s, from a risk perspective, not bearable.

So, having a clear understanding of the business risk, slash the business value, associated with each and every functionality your software provides is the foundation. If you understand that clearly, you will be able to produce a kind of map, your map of where you want to put your dollars. And guess what? This map is not going to be just for your budget allocation; it’s also going to be relevant for your reporting, because your reporting should be about business risk covered, not just about counting test cases.

And guess what? Your management is going to love it. All of a sudden, they see that somebody has a plan on how budget is allocated in the software testing space. So, it’s real cool stuff.
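As a rough illustration of the risk map Platz describes, business risk for each functionality can be weighted by its value and potential damage, and testing progress can then be reported as risk covered rather than test cases counted. The functionality names, weights, and coverage fractions below are hypothetical; this is a minimal sketch of the idea, not Tricentis’s actual method:

```python
# Sketch of risk-based coverage reporting (all names and weights hypothetical).
# Risk weight for a functionality = business value x potential damage.
functionalities = {
    # name: (value, damage, fraction of its risk currently covered by tests)
    "checkout":     (10, 9, 1.0),  # mission-critical: fully covered
    "search":       (8,  5, 0.8),
    "profile_page": (3,  2, 0.5),
    "legal_footer": (1,  1, 0.0),  # minor relevance: one test case may suffice
}

total_risk = sum(v * d for v, d, _ in functionalities.values())
covered_risk = sum(v * d * c for v, d, c in functionalities.values())

# Report towards business risk covered, not test-case counts.
print(f"Business risk covered: {covered_risk / total_risk:.0%}")
```

A report like this makes the budget conversation concrete: the functionality with the largest uncovered risk, not the one with the fewest test cases, is where the next testing dollar goes.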

As soon as you know that, you want to make sure you have the right test cases at hand. We’re going to give you an overview of different techniques for creating the most meaningful test cases—meaningful in terms of how much additional business risk a specific test case can cover, right? And when we have done that, we’re going to make you aware that there are different approaches to automation. You can go through a user interface, you can go through an API. When you go through this, make sure you decouple systems so that you always know exactly what caused a failure.
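One way to picture the decoupling Platz mentions: test at the API level and replace a downstream system with a stub (the idea behind service virtualization), so a red test always points at the system under test. All class and method names here are hypothetical, a sketch of the pattern rather than any particular tool:

```python
# Sketch of API-level testing with a decoupled dependency (names hypothetical).

class StubTaxService:
    """Stands in for a real downstream tax service, service-virtualization style."""
    def rate_for(self, country: str) -> float:
        return {"AT": 0.20, "US": 0.07}.get(country, 0.0)

class PricingService:
    """The system under test; it calls the tax service through its API."""
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def gross_price(self, net: float, country: str) -> float:
        return round(net * (1 + self.tax_service.rate_for(country)), 2)

# Because the dependency is stubbed, a failure here is always a pricing bug,
# never an unavailable or inconsistent downstream system.
pricing = PricingService(StubTaxService())
assert pricing.gross_price(100.0, "AT") == 120.0
```

The same test run through the user interface would exercise more of the stack, but a failure could then mean a UI change, a network issue, or a downstream outage rather than a pricing defect.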

Ashley: Mm-hmm.

Platz: And when you have come up with that, you want to make sure that you keep your test data stable, meaning that the system always has a reliable set of basic data at hand, so that you’re not going to be surprised by test failures that are just a matter of inconsistent or incorrect test data.
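The stable-test-data point can be sketched as a reset step that rebuilds a known baseline before every run, so a failing test reflects an application defect rather than drifted data. The table layout and records below are hypothetical, using SQLite purely for illustration:

```python
import sqlite3

# Hypothetical baseline data set that the tests rely on.
BASELINE_CUSTOMERS = [(1, "Ada", "active"), (2, "Grace", "suspended")]

def reset_test_data(conn: sqlite3.Connection) -> None:
    """Rebuild the known-good baseline so every run starts from the same state."""
    conn.execute("DROP TABLE IF EXISTS customers")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")
    conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", BASELINE_CUSTOMERS)
    conn.commit()

conn = sqlite3.connect(":memory:")
reset_test_data(conn)  # run before each test cycle
count = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
assert count == len(BASELINE_CUSTOMERS)  # data is exactly the baseline, every time
```

In a real landscape the reset might restore a database snapshot or regenerate synthetic data, but the principle is the same: tests never inherit the state a previous run left behind.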

So, all these things are going to be introduced in the book. Of course, not at the very deepest level of detail, but at least in a way that you know what it is about and have a good basis from which you can start digging further into the subject. But ideally, you take the book and walk home with it and say, “Uh huh. Now I understand what it is all about,” and you’re not going to just go out there, jump on the next open-source framework for test automation, and say, “This is going to be the holy grail of my whole journey,” you know? It depends on what you want to achieve.

Ashley: It’s about creating a systemic, prescriptive framework for how to do this kind of work, because testing sounds like such an understandable word, but it’s also so overloaded. Because, as you talked about, there are lots of ways to test, whether it’s through the API or the interface; there’s security testing, there’s black box testing. And you brought up the whole topic of test data. It’s hard to acquire in the first place, and you have to maintain it, you know, evolve it with the functionality and the application—

Platz: Oh, yeah.

Ashley: – part of the regression, also. It’s really a very complex topic, and you get into performance testing, load testing, sort of chaos monkey-style resilience testing, all kinds of things that can happen. And then behind the scenes, now that we’re automating so much, we have this data that’s super valuable that we can communicate to the business. We plugged in code in here and it made it into production, and here’s how we know it was thoroughly tested and made sure that it’s going to be reliable when we get it out in production.

Platz: Yes, and this is actually where the journey is going, looking forward. What we’re going to see happening is that more and more tracing and logging information from production will be fed back into the testing loop, right?

Ashley: Mm-hmm.

Platz: I’ve pointed out business risk coverage. I mean, how do you get to a clear understanding of business risk? You get there by knowing how often systems are used and what the potential damage is. These things—nowadays, if you have proper logging—can be obtained from production logs. And these production logs can be fed back into the loop, influencing your test focus on specific functionality from the very beginning, from the development perspective already, up into the higher levels of testing.

So, what we’re going to see is that data, the use of data, even with some artificial intelligence involved, is going to be a tremendous source for driving further efficiency in the testing cycle. So, we’re going to see the ideas of value-based testing, in conjunction with having the right test cases at hand, being even more optimized and sped up by this data loop that we’re going to get in the future.
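A toy version of the feedback loop Platz sketches: usage frequency mined from production logs, combined with an estimated damage per feature, yields the risk weighting that drives test focus. The log entries and damage figures below are invented for illustration:

```python
from collections import Counter

# Hypothetical production log: one entry per observed feature use.
production_log = ["checkout", "checkout", "checkout", "search", "search", "profile"]

# Assumed damage estimates (scale 1-10) if the feature fails in production.
damage_estimate = {"checkout": 9, "search": 5, "profile": 2}

usage = Counter(production_log)                       # frequency from logs
risk = {f: usage[f] * damage_estimate[f] for f in damage_estimate}
priority = sorted(risk, key=risk.get, reverse=True)   # where test effort goes first
print(priority)
```

In practice the frequency counts would come from tracing or log-analytics tooling rather than a list in memory, but the loop is the same: production usage continuously re-ranks where testing attention belongs.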

[Read the complete transcript on DevOps.com]

[Read Wolfgang’s book, Enterprise Continuous Testing]
