I may find three bugs, but the way I report information could change the decisions that business people make, and I think business people will value those who help them make better decisions.

-Pradeep Soundararajan

On this episode of the Continuous Testing Live podcast, well-known software testing entrepreneur Pradeep Soundararajan shares his views on the opportunities that AI offers to testers who keep an open mind and a strong focus on user experience. Soundararajan also shares two questions that testers should always ask themselves when working to make informed decisions that impact both their customers and the business.

Noel: So when I think of AI and machine learning in the software testing industry, I envision people using it to take care of things the same way we think about test automation: something we use for functional testing. But when I read a blog you wrote, titled, “AI-driven Functional Testing,” it got into how we might also be able to use AI for exploratory testing, which I had not seen anyone write about or talk about before.

I’m really curious about the word “driven” in that title and wanted to get you to maybe expand a bit on how AI can be used to not just “aid” your functional test efforts, but really “drive” them. That’s a lot of power you’re giving to AI.

Pradeep: Right. When we started to think about AI, read a little more about it, and toy around with the idea of bots, we were, as you said, still in the early days as an industry. And I’m not saying “we” as in just the company that I co-founded; I’m talking about us as an industry. We’re still trying to figure out where AI can fit in, or whether it really fits in at all. There are a lot of people talking about AI. As for me, I’ve been an exploratory tester, and I have done a lot of work around automation.

When I say “around,” I mean around, not exactly hands-on automation. I’ve built the architecture, the test data, the systems that are supposed to interact, and things like that, and I have seen customers quite excited about automation. I believe it is a natural instinct for people to think that AI should help us do better automation.

I built a company which, today, has about 150 testers. It’s called Moolya, and we have a great emphasis on exploratory testing coupled with automation. One of the challenges for us as an industry has been to scale good-value testing that comes out of human testers, and I think a lot of tools and technologies have been built for aiding better automation, and there’s been less innovation to aid the tester.

We started to think about the idea of building something like an Iron Man suit for a tester. That’s when I was thinking about writing this blog. I said, “Hey, we are at the early stages, and we shouldn’t be making the mistake of thinking that AI is only for automation.” We should also be thinking about what it can do for exploratory testers, and that’s the reason why I had to mention those things.

Noel: I totally agree. I was chatting with someone here at Tricentis recently, and we were discussing how one of our goals is very similar, in that we want people to know how to use our product. But, at the same time, providing some guidance and advice around all the other ways you can test software ends up making your customers better testers overall, and not just experts in a single thing. So people need that experience of understanding how to use exploratory testing and how to do automation, and eventually, I think, people in this industry are going to need to learn how to use AI as well.

Pradeep: Yeah, yeah.

Noel: There was a line from this blog that caught my eye when you tweeted it out. It said, “I think we are using AI as an excuse to wake up and solve testing problems that need to be solved.” There’s so much to discuss and unpack in that. It really could be the basis of our entire conversation today. I’m curious what some of those problems are. And, are they problems that testers have tried to solve in the past and were unsuccessful, or are they problems that we’re just now realizing have been around a while?

Pradeep: Yeah, so a lot of these problems that I’m going to talk about are problems that have been around, and that people have attempted to solve. For example, test coverage. This is the foundation of all the testing that I do, and while there are enough guiding heuristics for recording test coverage, I see large numbers of testers having no clue how to report it. When we started to build AI systems that aid our testing, I found that the foundation of reporting test coverage was actually missing. When I say “missing,” there are a lot of things in the theory that exist only in pockets of practice across several companies. I see that even within the same large organization there are some teams who are fantastic, and there are some teams who don’t even know what’s happening in their own company. This is happening primarily because there has not been a system that has actually shown people, “This is the way you could report your test coverage.”

So even before that, when we wanted to build something on AI at AppAchhi, we kept asking ourselves, “Are we building AI just for the sake of building AI, because everybody is building AI, or are we really, really solving a testing problem?” And, from the understanding that I’ve had of the testing space, I think the fundamental testing problems have not been solved, and that’s what people like James Bach and Michael Bolton keep talking about: the fundamentals of testing. So for me, it was like I needed to find a way to solve the problem and scale the solution.

It’s not that solutions don’t exist. Solutions exist, but not in a scalable fashion. Somehow, the intellectuals of our industry are afraid of scale. I want to challenge that. I’m sure Tricentis is also challenging that. I think we as a community need to come together to challenge that and say, “We have to find good ways to scale high-value testing through platforms, through tools, and through technologies,” and then aid the human, because the human is, today, struggling.

There are a lot of testers who want to do good, but they don’t have supportive frameworks and tools to scale their approach of testing, so that’s just a short overview of why I think I wrote that tweet.

Noel: I totally agree, and that makes me think, too, that there are a million different versions of “Getting started with …” followed by a methodology, a practice, or a certain tool. They get people off and running, but like you said, just because you get started with automation, or with AI in this case, doesn’t mean you’re going to be able to ride that wave for very long. Issues are going to pop up as you attempt to expand your use of that practice, methodology, or tool, and those “Getting started with …” courses and walkthroughs don’t really get you past that point. A lot of times, the goal is just to get you started with something, but testers then run into problems, and sometimes run out of places to find those next answers.

Pradeep: Yes, and I’d like to give another example. If there are about five people in a team testing a mobile app, AI could come in handy by reporting to all the testers, “These are the screens that have gotten the most coverage, but these screens have not been focused on.” That’s a valuable insight we are not getting today, and with it, testers could go and improve coverage by focusing on screens that other testers have not been covering. I think this is one kind of AI system that needs to be built, to help testers focus on deeper areas than the ones they’ve been focusing on.
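[Editor’s note: To make the idea concrete, here is a minimal sketch of the kind of per-screen coverage report Pradeep describes. The tester sessions and screen names are hypothetical, and the code is an illustration of the aggregation idea, not any product’s actual implementation.]

```python
from collections import Counter

def coverage_report(sessions, all_screens):
    """sessions maps a tester's name to the list of screens they visited."""
    # Learning across the whole team: count how often each screen was exercised.
    visits = Counter(screen for visited in sessions.values() for screen in visited)
    untouched = sorted(all_screens - set(visits))          # screens nobody opened
    most_covered = [screen for screen, _ in visits.most_common(3)]
    return {"most_covered": most_covered, "untouched": untouched}

# Hypothetical data: five testers on a mobile app with six screens.
sessions = {
    "tester_1": ["login", "home", "search"],
    "tester_2": ["login", "home"],
    "tester_3": ["login", "home", "cart"],
    "tester_4": ["login", "search"],
    "tester_5": ["login", "home"],
}
all_screens = {"login", "home", "search", "cart", "profile", "settings"}
print(coverage_report(sessions, all_screens))
# {'most_covered': ['login', 'home', 'search'], 'untouched': ['profile', 'settings']}
```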

Noel: You mentioned, too, that everybody’s talking about AI right now. I looked at the 2018 STAREAST testing conference agenda and noticed that there were 13 sessions that mention AI or machine learning in the title. That’s…a lot for a 2-day conference, and I doubt those are the only ones that bring it up. I’m sure there are plenty of other sessions that just didn’t put those keywords in the title but will definitely address it. It’s definitely on everyone’s mind.

Pradeep: Yeah, yeah.

Noel: You brought up James Bach and Michael Bolton, and in this blog, you talk about the concept of an “exploratory bot.” I’ve been a follower of James and Michael for a long time, and I’ve seen the heated and impassioned arguments they’ve made against those who believe all testing can be automated and doesn’t require manual effort by humans. This made me wonder: how in the world are we going to have exploratory bots, when exploratory testing—to date(!)—has been a wholly manual effort? But, like you’re saying, are we short-sighted if we believe AI will only be able to be used for this or that type of testing, and don’t open ourselves to the possibility that there may be other areas, like exploratory testing, that we need to start thinking about?

Pradeep: Yes, so I’ve been following both of them as well, and I’ve also been coached by them. I, once in a while, become a very good student, and once in a while become a bad student.

Noel: I’m sure they love both!

Pradeep: Yeah, they do love both. They find reasons to pull me aside and actually talk to me about things. So, the concept of an exploratory bot is this: if exploratory testing is about simultaneous learning, test design, and test execution, and AI is expected to learn as it performs certain operations and then use that learning to design the next set of tests, then that’s exploratory testing. I think we are seeing those possibilities with AI. I just had a demo of one of our product versions last week, and I saw a piece of code that learned the changes in the screen and was trying to design a few new sets of tests. That’s the exciting part, and that’s what I refer to as an exploratory bot.
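[Editor’s note: A minimal sketch of the learn-design-execute loop Pradeep is describing. The `app` object and its methods (`current_screen`, `available_actions`, `predicted_target`, `perform`) are hypothetical stand-ins for whatever driver such a bot would sit on; this is not his product’s code.]

```python
import random

def exploratory_bot(app, steps=20):
    """One loop that learns, designs, and executes tests simultaneously."""
    seen = set()                              # learning: screens visited so far
    for _ in range(steps):
        screen = app.current_screen()         # observe where we are now
        seen.add(screen)                      # remember it for future decisions
        actions = app.available_actions()     # test design: candidate next steps
        # Prefer actions predicted to reach screens we have not seen yet.
        novel = [a for a in actions if app.predicted_target(a) not in seen]
        next_action = random.choice(novel or actions)
        app.perform(next_action)              # test execution: take the step, loop
```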

Now, I try to talk about those things, and the one thing I love about the community is that when they challenge these ideas, my own ideas are refined. So, sometimes I put out these ideas to be challenged so that I can refine my idea and I can come up with something better than what I initially thought.

Noel: Absolutely. This reminds me of something I’ll speak to in a second, but another line in your blog pondered whether the “best” use of AI in testing is to create an exploratory tester out of it, and not a software development engineer in test, or SDET. As we start to see all of these new writings come out about AI, what role it will play, and where it’s going to appear first, I’d love to know what you believe is the best use. As someone with an exploratory testing background, do you have a preference between those two? Or is it like automation, where the best use is going to be whatever directly benefits a specific customer’s specific business needs at the point when they look for a tool?

Pradeep: Okay, great question. With my background in exploratory testing, it would be easy for me to develop a bias towards it, so that’s the reason why I surround myself with colleagues who give me the contra-thoughts and help me constantly un-bias myself. When I wrote this blog, a bunch of my colleagues went through it, and they really liked it because I had a balanced set of thoughts, not a bias towards exploratory testing. But if you look at the whole essence of that blog post, I asked a fundamental question: “Whether it’s AI or not, are we really solving testing problems?”

Now I think what has happened is that with a lot of tools coming into the picture, the focus on solving testing problems has gone away. My friend and colleague Rahul Verma and I were reviewing a test automation framework just last week, and he and I keep asking this fundamental question: “Yes, I see the framework. Where is the test?” Because the whole framework is built for the test, and a lot of the SDETs we are producing in the industry these days have a passion for writing code but have spent very little time understanding what a test is. If people continue to build automation without understanding what the tests really mean, we will keep facing challenges from fundamental problems that are not being solved.

So I’m sure for Tricentis, and the same is the case for us, the biggest business has come from those who have tried and failed. We work with customers who have already invested a lot of money in testing, and their problem is not solved, and then we go and look at their work. Everything in the book that should be there is there: DevOps, Agile, Scrum, automation, a CI/CD pipeline, continuous testing, everything. And then they are saying, “Hey, we are facing these issues.”

Why would you face those issues if everything you did, according to you, worked? That’s where we start our work, and I think there is a lot of human potential being built up today. It’s our responsibility to declutter things for our customers and give them a clear view: “These are your testing problems, and these are the solutions to those problems.” That’s the way I’m thinking.

Noel: Yeah, again, totally agree. We have a debate coming up that we’re going to be putting out as a webinar, and it’s around this very question of who should own testing: whether it should be testers doing the testing, or your SDET roles, with your developers and engineers writing your automation and performing your tests instead of testers. You look at companies like Facebook and Google, who have approaches to testing and QA that seem so foreign, almost alien: claiming to have no one solely dedicated to QA, with testing owned entirely by the developers and the product team. And on the opposite end, you have other companies where the testers do everything.

There are so many different approaches you can attempt; some of them seem impossible, and some seem like something you should go for. Then you’re going to have people coming out and saying they’re using AI for things you never thought possible. There’s a world of options, and sometimes it can be a little confusing as to what’s going to work for you, based on the industry you’re in or the company size. It can be hard to look at a Facebook or a Google and say, “Oh, we can do that,” if you’re a company of 50 people. That might not seem like it works, but then you’re going to have others who see the success they have and want to go after it.

Pradeep: Yeah, and I’m glad you mentioned this. I’d like to give you a few perspectives, being in India, having been a consultant, and having toured services companies. A lot of companies claim, “This is the way we test software.” Yes, that’s the way they test software, but what they are not actually telling you is that they outsource a bunch of “checking” jobs to services companies in India. I have seen books published that cover how Google tests software, or how Microsoft tests software. I was consulting for a team whose job was to open the top 5,000 links in a particular browser and report, “Yes, every page element loads.” But when these companies publish a book or a blog saying, “This is how we test software,” they don’t tell you that checking is a part of testing, and that it is outsourced to a company in India or some other place. They project an image that everything is automated and there are no humans. There are no “visible” humans, is what I would say.

Noel: Right. Well, I had one last question for you, and we’ve touched on this a couple of times, but you mention in this blog two bottom-line questions in testing: one, “Are we doing better test coverage?” and two, “Are we helping people make better decisions?” I thought those were both hugely important questions. And, like you said, sometimes people get away from those and fail to keep them at the forefront of the decisions they make, or the criteria that go into those decisions. With so many interruptions, other voices, and other perspectives wanting to steer testing’s role in different directions, how can testers avoid prioritizing things that fall outside those two areas? Do you have any advice for keeping those two questions top-of-mind?

Pradeep: Yes, rather than advice, I can share a quick view of the journey I’ve gone through. I was super-excited to find bugs, and I thought, as a tester, I needed to find more and more bugs. Yes, I did need to find bugs, but at some point I asked myself, “What’s the purpose of this whole thing?” I think a lot of testers today exist because somebody who is a decision maker wants information to make decisions, and today that’s a product owner. Working with a lot of startups here, I see that product owners need a lot of information that they’re not getting; it is the engineering heads who are extracting most of the value from QA and testing, and what gets reported back is also only some kind of coverage.

Now, for me, in my journey, I graduated from thinking, “Bugs are everything,” to, “Coverage is everything,” and my whole perspective of value changed.

So today, I may find three bugs, but the way I report information could change the decisions that business people make, and I think business people will value people who give them information that helps them make better decisions.

This is something that a lot of young testers miss. That’s because finding a bug gives people an adrenaline rush, and that is more exciting for them than looking at test coverage. But I think that’s where people with a little bit of gray hair need to be paired up with those whose blood is pumping faster, with an approach of, “Yes, you find more bugs, and I will focus on the test coverage.” That’s something I’ve been advocating to my customers, so that’s one part: increasing test coverage.

The second thing is that there was a lot of emphasis on bug advocacy when I was going through classes from James, Michael, and Cem Kaner, but there also needs to be business advisory. A lot of testers are not exposed to the business side of things, and hence they are not in a position to advise the people of the business. If you don’t change that, the perspective of business people, the way they look at QA and testing, will continue to be the way it is today. I also see some positives, like the fact that there are a lot more testers becoming entrepreneurs today.

If you’re going to STAREAST, there’s a possibility you’ll meet Jason Arbon, who is running a company called Appdiff, which is also in the AI space. He has been a hands-on tester, and he has now become an entrepreneur. There is Rahul Verma in India. There are a lot of other testers becoming entrepreneurs today, which is definitely going to change the landscape. I’m really hoping that in this lifetime of mine I will see a good change in the testing space, and that will make this life worth living for me.

Noel: That’s awesome. That’s great. I love that, and I’ve got Jason Arbon on my shortlist of people to reach out to for interviews like this, too. He’s got a session at STAREAST titled “Machine Learning Heralds the End of Selenium,” and I almost expect people to be at the door with pitchforks and torches to protest a statement that bold. I can’t wait to see what he’s going to be talking about in that session.

Pradeep: Yes, yes. Absolutely, it’s going to be so. We exchange some pretty good thoughts, and I’m quite excited for the testing community, and for the testers who are turning into entrepreneurs and succeeding really well.

Noel: Yeah. I hope to see that, too. Thanks so much for speaking with me, today, Pradeep. I really appreciate it.

Pradeep: Okay, thank you very much, and thanks to Tricentis for being quite open-hearted and including people like me in this.

Noel: Absolutely.
