This article, authored by Wayne Ariola, was originally published on SDTimes.com.
Almost exactly one year ago, Forrester confidently predicted that this would be “the year of Enterprise DevOps.” The blog, authored by the late Robert Stroud, began:
Earlier in the year, we declared this to be the year of DevOps and I am pleased to say, “We were right!” With our data confirming that 50% of organizations are implementing DevOps, DevOps has reached “Escape Velocity.” The questions and discussions with clients have shifted from “What is DevOps?” to “How do I implement at scale?”
Continuous Testing is not far behind. In early 2014, SD Times (in the very first article in the publication’s Continuous Testing category) proclaimed “Forget ‘Continuous Integration’—the buzzword is now ‘Continuous Testing’.” At the time, the concept of Continuous Testing seemed about as far-fetched as a Silicon Valley snowstorm in July to most enterprise organizations—where pockets of Agile and DevOps were just springing up among teams working on “systems of engagement.”
But since 2014, the world has changed. As Forrester predicted, the vast majority of organizations are now actively practicing and scaling DevOps. And the larger focus on Digital Disruption means that it’s now impacting all IT-related operations: systems of record as well as systems of engagement.
When ExxonMobil QA manager Ann Lewis so memorably asked “Is it all just a bunch of hype, really?” at Accelerate, the clear consensus was a resounding “no.” Digital Transformation, DevOps and Continuous Testing have already gotten real for the 2,000+ conference attendees, largely composed of QA leaders from Global 2000 organizations. So real, in fact, that their employers cleared their schedules for a week and sent them to Vienna to learn what’s really needed to achieve sustainable Continuous Testing for DevOps…in an enterprise environment.
Here are some of the key lessons learned—shared by leading testing professionals who have made Continuous Testing a reality in their organizations…
“Test Data is a Pain in the Ass”
Renee Tillet, Manager of DevOps Verify at Duke Energy, offered her perspective on one of the most underestimated pains of Continuous Testing: Test Data Management. Renee asserted:
“If you’re doing test automation, what’s the biggest pain in your ass? It’s test data. We would be in the middle of our sprint—the developers are done, the testers are getting ready to test, and guess what? The tester has no test data. Not only does he not have test data, but he doesn’t have time to go create that test data now. It’s too late.
By the time you get to that user story, your definition of ready should include not just what the developer needs, but also the test data you need to verify it. The test plan needs to be ready, and the data needs to be in the environment—or we don’t accept that story into the sprint.
Initially, we would create parameterized test cases, we’d put data in them, and they would run in the Dev environment. But then we’d try to run them over in the test environment, which was the next higher environment, and they would fail because the data was different. So, we came up with a data strategy that allowed us to use the same test data in all the environments.”
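One common way to implement the kind of environment-agnostic data strategy Tillet describes is to have tests reference logical data keys, which a resolver maps to the values that actually exist in each environment. This is a minimal sketch, not Duke Energy’s implementation; the environment names and record values here are hypothetical.

```python
# Sketch of environment-agnostic test data: tests reference logical
# keys; a resolver maps them to the records that exist in the current
# environment. Environment names and values below are hypothetical.
TEST_DATA = {
    "dev":  {"active_customer": "CUST-0001", "closed_account": "ACCT-9001"},
    "test": {"active_customer": "CUST-1001", "closed_account": "ACCT-8002"},
}

def resolve(env, key):
    """Return the environment-specific value for a logical data key."""
    try:
        return TEST_DATA[env][key]
    except KeyError:
        raise LookupError(f"No test data for key {key!r} in env {env!r}")

# The same parameterized test body runs unchanged in every environment;
# only the resolved data differs.
def check_customer_lookup(env):
    customer_id = resolve(env, "active_customer")
    return customer_id.startswith("CUST-")
```

Because the test body never hard-codes a record ID, the suite promotes cleanly from Dev to the next higher environment without failing on data differences.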
Number of Test Cases: Less is More
Numerous experts shared that a high number of test cases is no longer something to be worn as a badge of honor. It doesn’t help provide the fast feedback that the team expects.
Andreas Aigner, Head of Service and Security Management at the Linde Group, explained:
“We have a lot of examples from the past where we, to a certain degree, have been proud of about 3,000 test cases. They ran all year without any defects. I said, ‘Is that successful? Does that make sense? Don’t you think you have burned resources?’ I have spent a lot of time making it clear: ‘You should not be proud of the 3,000 test cases you have created, and you should not be proud to have enabled test automation for an application that we all know will be replaced in one year.’ That does not make sense. You have to search for high-value test automation, and you have to focus on the business risks at the end of the day.”
Martin Zurl, SPAR ICS, added:
“We rely on risk-based testing to prioritize our test cases. We need to understand how our customers think, and test the most important features, not necessarily every feature, because we need to speed up our automation. We need to get results very fast, so we follow the main road the customer takes. When there’s more time, we can think about testing the rest.”
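The risk-based prioritization Zurl describes is often implemented by scoring each test case on business impact and likelihood of failure, then executing the highest-risk cases first. The sketch below illustrates the idea only; the scoring scale, weights, and test names are invented for illustration.

```python
# Hypothetical sketch of risk-based test prioritization: each test case
# carries a business-impact score and a failure-likelihood score; the
# product of the two orders the suite so the "main road" runs first.
def prioritize(test_cases):
    """Sort test cases by descending risk = impact * likelihood."""
    return sorted(test_cases,
                  key=lambda tc: tc["impact"] * tc["likelihood"],
                  reverse=True)

suite = [
    {"name": "checkout_happy_path", "impact": 5, "likelihood": 4},  # risk 20
    {"name": "edit_profile_avatar", "impact": 1, "likelihood": 2},  # risk 2
    {"name": "apply_discount_code", "impact": 4, "likelihood": 3},  # risk 12
]

ordered = prioritize(suite)  # checkout first, avatar last
```

When the pipeline’s time budget runs out, only the low-risk tail goes unexecuted, which is exactly the trade-off risk-based testing is meant to make explicit.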
Democratize Test Automation
Although test automation alone is not sufficient for Continuous Testing, it’s a critical component. Leaders across companies agreed that making test automation accessible and enabling business experts to control their own automation is key for achieving high levels of test automation.
Amber Woods, VP of IT Enterprise Applications and Platforms at Tyson Foods, talked about democratizing test automation:
“We’ve had other scripting tools that we’ve used for automation in the past, and they haven’t been adopted well within each of the teams. We’ve had success democratizing citizen data scientists and citizen integrators with SnapLogic. Now we’re taking that same approach to test automation, using model-based test automation. This allows our business analysts to enter into automation in an easy, fast way that gets us away from what we had before, which was a lot of scripting. Our goal is to get heavy, heavy adoption in the automation space.
Say you’ve got Team A over here, and Team B over there. Team B’s leaving at a decent hour of the night, and Team A is working all night. Team A asks, ‘Why are you leaving so early? Don’t you have all that testing to do?’ Team B responds, ‘Well, I’ve got all my testing automated. I’m going to push a button and go home for the night.’ That gets teams to adopt test automation.”
Ann Lewis, Quality Manager at ExxonMobil, spoke to the power of enabling more team members to “control their own automation”:
“What warmed my heart is that, probably six months after we really started getting into automation, one of the business COE managers called me up and said, ‘Wow, where did this come from? I want to put it in the hands of all of my business process experts. For the first time, we can control our own automation. Automation helps us ensure, over and over again, that business-critical functionality works after each application change.’ That actually started a competition amongst different business units—everybody wanted to get on that bandwagon.”
80% API Testing
Sreeja Nair, Product Line Manager at EdgeVerve, explained why their journey to Continuous Testing included API testing as well as test automation:
“UI testing is slow—for example, it can take 3 minutes to automate an end-to-end banking flow at the UI level. And if the UI is not ready, or is down, you can’t test at all. Is that a good way to do a test? Obviously not. We found that the best way to address our problem is to attack the layer below the UI presentation layer: the business layer. We realized we could cover 80% of our functionality if we test at the business layer through APIs. We decided to change our tests from UI-oriented design to API-based design. After we first define our test model, we find out which APIs need to be called, and then chain the APIs together according to the component model we have designed. Testing a single API is not API testing. If you have a business scenario to test, you need to integrate your APIs to create realistic service-level integration tests.”
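Nair’s point, that API testing means chaining calls into a business scenario rather than hitting one endpoint, can be sketched with a simple step runner that threads each call’s output into the next call. The banking steps and field names below are hypothetical stand-ins for real API calls, not EdgeVerve’s actual design.

```python
# Hypothetical sketch of a chained API scenario: each step is a function
# that reads a shared context and returns new context keys, so the
# output of one call feeds the next and the chain exercises a business
# flow rather than a single endpoint.
def run_scenario(steps):
    """Execute steps in order, accumulating their outputs in one context."""
    context = {}
    for step in steps:
        context.update(step(context))
    return context

# Stand-ins for real API calls in an end-to-end banking flow.
def create_customer(ctx):
    return {"customer_id": "CUST-42"}

def open_account(ctx):
    # Depends on the customer created by the previous step.
    return {"account_id": f"ACCT-{ctx['customer_id'][-2:]}"}

def deposit(ctx):
    # Depends on the account opened by the previous step.
    return {"account": ctx["account_id"], "balance": 100}

result = run_scenario([create_customer, open_account, deposit])
```

In a real suite, each step would issue an HTTP call against the business layer; the structure—ordered steps sharing a context—is what turns isolated API checks into a service-level integration test.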
In-Sprint Testing Can’t Focus (Exclusively) on New Tests
Aaron Carmack, Automation Architect and Product Owner at Worldpay, explained that one of their keys to advancing from “test automation zero to Continuous Testing hero” was recognizing that updating test cases as your application evolves is just as important as adding new ones:
“Our QA teams sit down with the Dev team and the product owners as user stories are created. That way, we know what these stories involve and what test cases will need to be updated later on. And then, once the sprint starts, we start updating those test cases, creating the new critical scenarios that we need, and updating the existing tests that we believe will be impacted by the new user stories. We’re updating tests, creating new tests, and then executing tests based on the new user stories—all within the sprint. Also, when we execute the full regression suite, we identify the failures and commit to updating them within the sprint. That way, false positives don’t undermine our CI/CD process.”