“Once you’ve got the ‘why’ and the ‘what’ down, then it’s much, much easier to find a suitable and efficient solution for the ‘how.'”

-Bas Dijkstra

On this week’s episode of Continuous Testing Live, Bas Dijkstra discusses what test automation is, what it isn’t, and why it’s so important that those who are teaching automation understand the difference—and can explain it to testers looking for guidance.

Don’t miss a single episode of Tricentis’ Continuous Testing Live podcast! Subscribe today on iTunes, Google Play, or SoundCloud.

Noel: Bas, thanks so much for sitting down with me today. I just returned from STAREAST, and there was a pretty equal balance between sessions on future ideas around AI and machine learning, and sessions at the other end of the spectrum on manual and exploratory testing.

Then, in all of the automation sessions that I went into, someone would ask the crowd, “How many of you are manual testers?” It was almost the entire room. They would say, “How many of you are doing automation?” It would be four or five people that raised their hands. Then, a lot of speakers would ask, “How many of you are in organizations where you know that automation is going to be invested in more heavily?” It was almost the entire room.

This was a lot of testers who know automation is coming, and they’re trying to get more answers as to what that’s going to look like and require from them. I know you and I both agree that the real return on investment for automation is not to replace these manual testers, but to support them.

Since we know there are so many people out there wanting some validation of that, where do you see automation offering support? What are some of the areas where those who are distrustful or even fearful of automation right now might not be aware that this is something that really could benefit them?

Bas: First of all, thank you very much for agreeing with me that automation is definitely not something that’s meant to replace testers and testing in general, but rather to support it. Where I see automation providing a lot of value is by taking away the repetitive tasks from the testers and putting them into the hands of scripts.

So, we are creating code, either directly in an IDE or through some kind of sophisticated tool, that basically allows testers to perform their tasks in a more efficient way, so to speak. The way a lot of people still look at automation is through doing regression testing, or regression checking, depending on what your definition of testing is. But there’s so much more to automation than just executing those functional regression checks, even though that’s what a lot of people, teams, and organizations are most heavily invested in when it comes to automation.

For me, if you were to ask me how to define automation, it’s basically every way in which you can use tools to enhance or support your testing activities. Executing functional regression tests or checks is part of that, and a big part of that, but it’s definitely not all of it. If you have some kind of tool that helps you generate lots of test data that you can then use for the testing you’re going to carry out, that’s automation. Performance testing is automation. Things like service virtualization, for me, that’s all part of automation.
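For instance, a test data generator doesn’t have to be a big framework; here is a minimal sketch in Python of the kind of script Bas is describing, with the field names and output file made up purely for illustration:

```python
import csv
import random
import uuid

def generate_customers(count, path="test_customers.csv"):
    """Write `count` randomized customer records to a CSV file for use as test data."""
    domains = ["example.com", "example.org", "example.net"]
    countries = ["NL", "US", "DE"]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "email", "age", "country"])
        for _ in range(count):
            customer_id = uuid.uuid4().hex
            email = f"user-{customer_id[:8]}@{random.choice(domains)}"
            writer.writerow([customer_id, email, random.randint(18, 90), random.choice(countries)])

if __name__ == "__main__":
    # Generate a thousand throwaway customers before a test run.
    generate_customers(1000)
```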

The way a lot of people define or look at test automation is so restricted to, again, the execution of functional regression checks, or just functional checks, but automation is much more than that. If we start to look at automation in a broader sense, then I think it would quickly become clearer to people that there’s a lot more that can be done through automation than just replacing those tedious and expensive regression checks that need to be done at the end of every development cycle.

Noel: There was an article of yours I read where you mentioned that test automation is a software development activity, not a software testing activity. As I went to these sessions this week, that stood out to me. I started to wonder if maybe some of the resistance from manual testers to excitedly embracing test automation is around the fact that they’re told it’s a development activity. For some of those who aren’t really interested in being developers and want to make sure they get to remain testers, I wondered if there was some sort of, “If this is a development activity, then, no thanks.” Now, it may not be up to them. They may be in an organization where the testers are being told, “We’re going to start doing more automation.” How do you sell automation to testers when it’s becoming more and more clear, and widely agreed upon, that this is a development activity they need to learn, rather than something that’s simply going to help them be better testers?

Bas: I’m not sure I fully agree with what you’re saying there because I don’t think that automation should be forced upon anyone.

Noel: No, I agree. I agree.

Bas: It’s not a matter of forcing people to learn automation; I prefer to show people the value of automation and what it can do for them. You’re completely right, because that’s what I said, that’s what I wrote, and that’s what I’ve been repeating for a while: test automation is software development. If you want to create sustainable, maintainable, and reliable automation, then you’re going to need to apply principles and patterns that are related to software development when you’re creating the automation.

Bas: For example, if you want to create a Selenium-based solution and you want to make it maintainable and reliable, and make sure that it’ll stand the test of time, you can’t get away with writing code like what’s generated by some sort of record-and-playback feature. You need to think about what that solution is going to do for you, what kind of modules you need, what kind of libraries you need, and how they interact with one another, just to make sure that your code is well-structured, that your tests are maintainable, and that they’re going to be useful to you in the long run.

Noel: Yes.

Bas: That’s why I say test automation is software development. It doesn’t necessarily mean that you need to be writing code all the time, because there are lots of different types of tools, and lots of tools that don’t require you to write actual code. There are low-code or no-code solutions out there. But even if you’re creating your automation with those kinds of tools, you still need to be able to think in terms of modules, encapsulation, and applying object-oriented design principles, to make sure that if you ever need to change something, every change is going to be made in just one place.
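The structure Bas is describing often takes a shape like the page object pattern; below is a minimal Selenium sketch in Python, where the page, locators, URL, and credentials are hypothetical, showing how keeping locators in one class means a UI change only has to be fixed in one place:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Encapsulates the login page: locators live in one place,
    so a UI change only needs an update here, not in every test."""

    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def open(self, base_url):
        self.driver.get(f"{base_url}/login")
        return self

    def log_in(self, username, password):
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

def test_login():
    # The test talks to the page object, never to raw locators.
    driver = webdriver.Chrome()
    try:
        LoginPage(driver).open("https://app.example.com").log_in("demo-user", "demo-password")
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```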

Noel: Right. That’s just what I was going to ask you about: how, with a lot of low-code or no-code solutions like you just mentioned, it’s still a good idea for testers to have a solid grasp of basic object-oriented programming concepts. That would be beneficial, especially around maintainability and usability issues that may come up down the road.

When do those issues tend to start to pop up? I’ve heard stories of people who get their automation up and running and they say, “This is great. Everything looks awesome.” Then, at some point, it becomes more difficult to maintain, and the automation never really reaches that desired automation rate, or whatever other measure you use to tell whether automation is doing what it’s supposed to do.

When do those issues around maintainability and usability start to pop up, and what are some ways of solving those?

Bas: In my experience, pretty soon, because your application under test is constantly evolving. That means your automation is never done either, just as your application is never done. You need to be constantly updating your automation to reflect the actual state of your application under test. That means you’re going to be faced with the limits of the maintainability, the repeatability, and the reliability of your automated testing solution pretty quickly as well. That can be anywhere. It depends on the length of your development cycle, but it can be anywhere from hours to weeks, maybe. If you make the wrong design decisions and you don’t think through the structure or the architecture of your solution before you start automating away, then you’re going to be in trouble real soon.

Bas: You’ll find yourself spending too much time maintaining your automation. That’s when people start asking the question, “What’s this automation bringing us? Is it really living up to the promise of saving us time and making our testing process more efficient?”

Noel: Along those same lines, there’s always a lot of discussion online, and a lot of speaking sessions, around how to make smart decisions about what to automate and what not to automate. Like you were just mentioning, as far as trying to automate more than you really need to, what kind of signs might let you know when you’re at a good rate of automation? Not that we’re ever done, where you can say, “Well, we’ll never need to automate anything else.” But at what point can you look and start to see that you’ve automated the right things? That you’re not losing value because you’ve automated things, or spent so much time setting up automation for things, that maybe didn’t need to be automated? At what point can you start to look at it and feel like you’ve injected automation in all the areas that you needed to?

Bas: That’s the million-dollar question: when do you know you’ve got it right?

Noel: Maybe, like you were just saying, the application is constantly evolving, and what’s the right amount of automation today is insufficient a year from now, or two releases from now, or whatever it is. I was just curious as to whether you’ve gotten better at knowing what needs to be automated and what doesn’t, and what that looks like. How can you see that you’re at a good point and don’t need to add any more at that point in time?

Bas: In general, I’d look at it a little more holistically, so to speak. If, at the end of the sprint, or whatever kind of development cycle you’re using, you’ve been able to do all the testing that you wanted to have done, and you’ve got a reasonably well-informed opinion or view on the quality of your application under test, then I’d say you’ve got the right kind of automation in place, or the right amount of automation in place.

I don’t like to talk in terms of “percentage of test cases automated” or “test automation coverage” or whatever, because it’s so easy to get lost in those metrics and forget or lose view of the bigger picture of automation—being there to support your testing activities. Then, it becomes a goal in itself.

That’s often where I see the teams and organizations I work with start to lose sight of the forest for the trees, so to speak. That, to me, is an obvious sign of people doing maybe too much automation, or being so focused on their automation that they’re not making sure the automation actually helps them perform their testing more effectively.

Noel: I heard a great line from someone this week who said that testers are inherently creative, and if you give a group of testers a requirement of hitting just a certain percentage, it’s much easier to hit that percentage, no matter what it is, than it is to improve end-to-end quality and user experience and all of those kinds of things. Testers are creative enough to find ways to hit that benchmark number, because that’s what they’ve been told they’re going to be measured against. But by doing that, you’re not paying attention to the quality anymore. You’re paying attention to a percentage just for personal gain, not for the gain of your customers.

Bas: Exactly. Automation is a tool. Often, it becomes a goal in itself, as I said, instead of just a means of making your testing and software development process go more efficiently. That’s where I see a lot of teams fail, unfortunately.

Noel: Lastly, you had a line in an article where you said, “If you want to fix the way we train our test automation engineers, lose your fixation on tools and start seeing test automation for what it really is: a craft that requires many more skills than simply a proficiency in tools.”

I really like that line. I’ve discussed it internally with some of our product people here: how just becoming better at automation is great, but, at the same time, once you have become pretty proficient with it, and you know how to use whatever tool you’re using, there’s so much other testing that can be made better as well.

Instead of just training customers on how to use a single tool, offer guidance or assistance to your other testers, even if they aren’t using that tool. Don’t just be responsible for the quality of the testing that that one tool covers. Try sharing some of the things that you know about exploratory testing, or sharing how to take your automation and turn it into more of a risk-based approach.

I truly enjoyed that line, and I was curious what other opportunities you see for organizations, or even for automation engineers themselves, to expand their abilities outside of automation, or to help others do the same.

Bas: Basically, the reason I wrote that line is that in the automation space, and especially when it comes to training in automation, there’s such a relentless focus on teaching people how to do stuff with specific tools. What I’d really love to see, and I’m starting to see some people thinking along the same lines, is a broader view of automation: not just being focused on a specific tool and all the wonderful things that you can do with that specific tool, but broader.

Not just “how to do stuff with tools,” but also discussing with your peers and with other stakeholders in the software development process why you’re using automation in the first place, and what to automate and what not to automate with those tools. Once you’ve got the “why” and the “what” down, then it’s much, much easier to find a suitable and efficient solution for the “how,” so to speak. That’s instead of taking a tool and then trying to force every kind of testing that’s left to the tester into that tool, or framework, or whatever it is you want to call it.

Noel: Absolutely.

Bas: That’s the main thought behind that line you just quoted. One of the other things I try to teach in my courses is what I said at the beginning of this interview: there’s much more to automation than just functional regression checks, especially things like test data generation, log analysis, and service virtualization. Those are all ways to apply tools to make the testing process more efficient.
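Log analysis, for example, can start as small as a script that summarizes the errors in an application log after a test run; here is a minimal, hypothetical sketch in Python, with the log format and command-line usage assumed purely for illustration:

```python
import re
import sys
from collections import Counter

# Count error messages in an application log so a tester can spot
# recurring failures without reading the whole file by hand.
ERROR_PATTERN = re.compile(r"ERROR\s+(.*)")

def summarize_errors(log_path):
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = ERROR_PATTERN.search(line)
            if match:
                counts[match.group(1).strip()] += 1
    return counts

if __name__ == "__main__":
    # Usage: python summarize_errors.py application.log
    for message, count in summarize_errors(sys.argv[1]).most_common(10):
        print(f"{count:5d}  {message}")
```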

Noel: Right. I saw a speaker who listed a number of ways automation can benefit manual testers. Like you said, it wasn’t just the regression tests or the unit tests; it was service virtualization, the time spent doing manual environment builds, and, like you just said about test data, the amount of time spent generating and implementing that test data. It was all of these other areas, for people who either haven’t seen for themselves the full benefit of automation or don’t see where it’s going to make that huge of a difference.

Bas: Once we get that point across, the risk of people being afraid of losing their jobs to automation is going to be … Once they see that it’s really something that is supposed to support them instead of replace them, hopefully it’ll spark some more curiosity in people, make them want to learn automation, and make them reach out to people who can teach them.

Bas: Then, in turn, those people who could teach them can think of better and more efficient ways of teaching automation in what I believe is the right way.

 
