The reality of testing in DevOps: Your top questions answered


Tricentis Staff

Various contributors

Date: Dec. 08, 2020

DevOps Unbound – a new video/podcast series hosted and sponsored by Tricentis – has grown by leaps and bounds since its launch this summer. As of Dec. 1, we’ve had 38 guests (not including our two illustrious hosts) over the course of seven episodes and four roundtables.

The series kicked off talking about testing with the dynamic duo of Dr. Grigori Melnik and James Bach. Then we dove into AppSec, SAP, open source, application modernization, mainframes, outsourcing, leading transformations, and usability/accessibility at speed – all in the context of DevOps, of course. You can catch any episode, podcast, or roundtable you missed at the DevOps Unbound portal.

For the latest roundtable, we returned to where we started: testing. This time, the focus was on the reality of testing in DevOps. Testing is routinely cited as the top source of DevOps delays – and defects are still a major issue for 48% of DevOps teams. Are we approaching testing the wrong way? Have we failed to scale and/or accelerate it properly for enterprise DevOps? Do we need to rethink our overall testing objectives, or do we just need to find better ways of achieving them?

We gathered an all-star panel of development, QA, DevOps, and SDET leaders to share a variety of perspectives on the challenges we’re facing in the field today and how we can solve them. The guest list included Miriam Makshanoff (Senior QA Engineer – Calendly), Abel Wang (Principal Cloud Advocate and DevOps Lead – Microsoft), Adam Arakelian (Director of Engineering – Dell ISG), Clint Sprauve (Director of Product Marketing – Tricentis), and Hilary Weaver-Robb (Sr. Software Engineer in Test).

[Watch the roundtable on demand]
It was a fun and lively panel, with lots of great questions and comments from the audience. There was just one problem: there were too many questions to answer in the 60-minute slot. Fortunately, two of our panelists, Miriam Makshanoff and Hilary Weaver-Robb, volunteered to answer them here.

What’s your take on business analysts also being the QA testers?

Miriam Makshanoff: There certainly can be a lot of overlap between business analysts and QA testers. It’s difficult to say that business analysts absolutely shouldn’t be the QA testers, because each organization is different and there is no one-size-fits-all approach. But I do think that, in a DevOps world, a business analyst taking over that role entirely is going to mean either 1) a definite and unnecessary bottleneck or 2) some level of testing missing from the process. There’s a reason QA needs to exist in DevOps teams. There needs to be a role that focuses primarily on all of the different testing strategies we should employ. With all the other responsibilities a business analyst may have, it’s unlikely they can devote the time needed for exploratory testing, performance testing, automated testing, and so on.

What kind of dev/team testing approaches are best-suited for DevOps? TDD/BDD/other?

Hilary Weaver-Robb: I think all of the above fit well, as long as you’re not in a Waterfall environment, and the developers are adding unit tests in some way, and you’re able to add other automated tests in some way. My team, for instance, does not use TDD or BDD – the developers will write unit tests after they code something, and I’ll start the automation when they start the feature work along with them. By the time we go to production, we’ve got unit tests, we’ve got functional automated tests, and it’s been exploratory tested as well.
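To make the workflow Hilary describes concrete, here’s a minimal sketch of a developer-written unit test added right after the feature code ships. The function and test names are purely illustrative, not from any specific team’s codebase:

```python
# Hypothetical feature code a developer just finished:
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, rejecting out-of-range percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# Unit tests the developer writes once the feature code exists
# (functional automation and exploratory testing layer on top of these):
def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0   # typical case
    assert apply_discount(10.0, 0) == 10.0     # boundary: no discount
    try:
        apply_discount(10.0, 150)              # invalid input is rejected
        raise AssertionError("expected ValueError")
    except ValueError:
        pass


if __name__ == "__main__":
    test_apply_discount()
    print("all unit tests passed")
```

A test runner such as pytest would discover and run `test_apply_discount` automatically; the point is simply that unit coverage, functional automation, and exploratory testing each catch a different class of defect.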

With the expectation that DevOps allows you to deploy code daily or even several times a day, when does manual/exploratory testing occur?

Miriam Makshanoff: I can’t speak to the solution all organizations have, but in my experience, the most efficient way to accomplish this is to 1) focus on testing smaller bits of code changes and empowering QA with tools to test these on their own distinct environments (think many different lower-level environments, one for each smaller code change), and 2) utilizing feature flags. Manual testing should occur, to some extent, on each individual code change; perhaps this is not a full regression test but a targeted test occurring around what automated tests have missed. Exploratory testing can then be easily done on a pre-prod or integration level environment with feature flags gating the functionality from breaking an experience on production.
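The feature-flag gating Miriam mentions can be sketched in a few lines. This is a hedged, minimal illustration – the flag names and the in-memory flag store are hypothetical stand-ins for whatever flag service a team actually uses:

```python
# Hypothetical in-memory flag store; real teams typically use a flag
# service or config system instead.
FLAGS = {
    "new-checkout-flow": False,  # off in production until testing is done
}


def is_enabled(flag: str) -> bool:
    """Return whether a feature flag is on (unknown flags default to off)."""
    return FLAGS.get(flag, False)


def legacy_checkout(cart: list) -> str:
    """The stable path users see while the new flow is still being tested."""
    return f"legacy-total:{sum(cart)}"


def new_checkout(cart: list) -> str:
    """The new path, exercised in pre-prod/integration behind the flag."""
    return f"new-total:{sum(cart)}"


def checkout(cart: list) -> str:
    # The flag gates the new functionality so a broken change can't
    # reach the production experience.
    if is_enabled("new-checkout-flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

With the flag off, production traffic stays on the legacy path; flipping it on in a pre-prod environment lets testers explore the new flow without risking the live experience.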

Can different aspects of testing (like functional, performance, security testing etc.) blend into one single piece when organizations put in an “automation first” agenda? Or will hard boundaries between these aspects continue to exist?

Hilary Weaver-Robb: I think you’ll still have hard boundaries. Performance and security testing are specializations in a lot of organizations and may have their own tooling and procedures. I think automation is needed for some aspect of those areas, but you can’t fully automate good security testing, for instance. Like with any testing, I think you’ll still need some manual work.

What ways have you seen testers best add value in a DevOps approach?

Miriam Makshanoff: Testers can add value in a variety of ways in a DevOps environment. Automation coming from development alone can have its downsides, and it can be a lot for developers alone to take on. This is an area where QA can come in and make a real difference, often with a better understanding of which use cases provide the greatest value when automated. Again, exploratory testing is a big one. Developers often have difficulty shifting their mindset from being in the code and developing a feature to specifications, to trying to break their feature. Also, advocating for quality across all departments and roles is key. Quality in a product can’t come from QA alone, but when QA has a strong voice in the quality of the product, the processes, and the pain points, everyone wins.

In an Agile/DevOps world, what is the ideal testing resource model?

Hilary Weaver-Robb: I think you’re asking how many testers should be on an Agile/DevOps team here, which is subjective. I’m the only tester on a team of 10 (8 developers, 1 PO, and myself), and I cover 4 projects across those developers. But my developers write unit tests and help with automation, etc., so I’m not the only one writing those tests as well as doing exploratory testing. So, like many answers to these questions, it depends 😀

[Watch the complete “Reality of Testing in DevOps” roundtable on demand]
