This article by Wolfgang Platz was originally published on thenewstack.io
The speed of digital transformation across the IT industry is already staggering — and it’s only going to increase. To put this into some very concrete terms, consider that:
- There are 7.7 billion people in the world.
- 4.5 billion have regular access to a toilet.
- 5 billion own a mobile phone.
All of a sudden, a huge number of people jumped from a very provincial lifestyle straight into the digital age — creating tremendous demand for more and more innovative software.
However, the traditional ways of developing and delivering software have been inadequate for meeting this new demand. Not long ago, most companies were releasing software annually or bi-annually. Now, iterations commonly last two weeks or less. While delivery cycle time is decreasing, the technical complexity required to deliver a positive user experience and maintain a competitive edge is increasing.
For software testing, this brings us to an inflection point. In most organizations, testers were already racing to keep pace when delivery cycles were longer and application complexity was lower. Every quarter or so, the development team would pass a release candidate over the wall to QA, which would then scramble to validate it as thoroughly as possible in the allotted time — largely with manual testing.
Now, digital-transformation initiatives, such as Agile and DevOps, are pushing traditional testing methods to their breaking point. As mentioned above, organizations are releasing much more frequently — varying from a monthly cadence to multiple times per hour. And as organizations increasingly edge towards continuous delivery (CD) with automated delivery pipelines, intermediary quality gates and the ultimate go/no-go decisions will all hinge on test results.
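To make that dependency on test results concrete, here is a minimal sketch of an automated quality gate that a CD pipeline might call before promoting a build. The function name, result fields and thresholds are all illustrative assumptions, not the API of any real pipeline tool.

```python
# Hypothetical quality gate for a CD pipeline. Field names and
# thresholds are invented for illustration.

def quality_gate(results, min_pass_rate=0.95, max_critical_failures=0):
    """Return True (promote the build) only if the results clear the gate."""
    total = results["passed"] + results["failed"] + results["skipped"]
    pass_rate = results["passed"] / total if total else 0.0
    # A single business-critical failure blocks the release, regardless
    # of how healthy the overall pass rate looks.
    return (pass_rate >= min_pass_rate
            and results["critical_failures"] <= max_critical_failures)

# A run with one business-critical failure is blocked even though
# the raw pass rate (97%) clears the threshold.
run = {"passed": 970, "failed": 25, "skipped": 5, "critical_failures": 1}
print(quality_gate(run))  # False
```

The design point is that the gate must evaluate more than a raw pass percentage; otherwise a fully automated pipeline will happily ship a build whose few failures happen to be the ones that matter most.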
In this post, we explore how to gauge whether your organization’s testing processes are up to the task of supporting your DevOps and continuous delivery goals.
We have a problem (two, actually)
In most organizations, testing delays application delivery while providing limited insight into whether the applications being tested are meeting stakeholders’ expectations. Among other things, the process is just not fast enough to help teams find and fix defects when it’s optimal to do so. And its reporting focuses on low-level pass/fail counts (e.g., “78% of our tests passed”) rather than providing the business-focused perspective needed to make fast release decisions (e.g., “Only 38% of our business risk was tested… and 25% of that didn’t work properly”).
Let’s take a quick look at each of these problems in turn.
The speed problem
DevOps is all about removing the barriers to delivering innovative software faster. Yet, as other aspects of the delivery process are streamlined and accelerated, testing consistently emerges as the greatest limiting factor.
A recent GitLab survey of developers and engineers found that testing is responsible for more delays than any other part of the development process.
[Survey chart: “Where in the development process do you encounter the most delays?”]
The same conclusion was reached by a DevOps Review survey that polled a broader set of IT leaders across organizations practicing DevOps. Again, testing was cited as the number-one source of hold-ups in the software delivery process. In fact, testing “won” by a rather wide margin here. While 63% reported testing was a major source of delays, the second-highest source of delays (planning) was cited by only 32% of the respondents.
[Survey chart: “Where are the main hold-ups in the software production process?”]
Why is testing such a formidable bottleneck? That could be the topic of an entire book. For now, let’s summarize some key points:
- The vast majority of testing (over 80%) is still performed manually — and even more at large enterprise organizations, according to a Capgemini, Sogeti and HPE report, “World Quality Report 2018-2019.”
- Approximately 67% of the test cases being built, maintained, and executed are redundant and add no value to the testing effort, according to Tricentis research conducted from 2015-2018 at Global 2000 companies — primarily across finance, insurance, telecom, retail and energy sectors.
- At the organizations that have significant test automation, testers spend 17% of their time dealing with false positives and another 14% on additional test maintenance tasks, according to Tricentis’ research data.
- Over half of testers spend 5-15 hours per week dealing with test data (average wait time for test data equals two weeks), according to an SDLC Partners Study.
- A total of 84% of testers are routinely delayed by limited test-environment access (average wait time for test environments equals 32 days), according to the Delphix study, “The State of Test Data Management.”
- The average regression test suite takes 16.5 days to execute, yet the average Agile sprint lasts just two weeks from start to finish, including planning, implementation and testing, according to the Tricentis research report.
- The average application under test now interacts with 52 dependent systems, meaning that a single end-to-end transaction could cross everything from microservices and APIs, to a variety of mobile and browser interfaces, to packaged apps (SAP, Salesforce, Oracle, ServiceNow…), to custom/legacy applications, to mainframes, according to the O’Reilly-published book “Service Virtualization” by Bas Dijkstra.
The software testing process wasn’t working perfectly even before the advent of Agile and DevOps. Now, we’re asking teams to “just speed it up” at the same time that modern application architectures are making testing even more complex. Given this context, it’s hardly surprising that speed expectations aren’t being met.
The insight problem
Only 9% of companies perform formal risk assessments on their requirements/user stories. Most attempt to cover their top risks intuitively, and this results in an average business risk coverage of 40%, according to the Tricentis research data. Would you feel comfortable driving a race car with blinders on? That’s essentially what you’re doing if you’re rapidly delivering software with insight into less than half of your total business risk.
Moreover, most organizations can’t immediately differentiate between a test failure for a trivial issue and a business-critical failure that must be addressed immediately. Most test results look something like this:
What insight does that really provide? It’s clear that…
- There’s a total of 53,274 test cases.
- Almost 80% of those tests (42,278) passed.
- Over 19% of them failed.
- About 1% did not execute.
Maybe the test failures are related to some trivial functionality. Maybe they stem from the most critical functionality: the “engine” of your system. Or, maybe the most critical functionality was not even tested at all. Trying to track down this information would require tons of manual investigative work that yields delayed, often-inaccurate answers.
Today’s go/no-go decisions need to be made rapidly — even automatically and instantaneously. Test results that focus on the number of test cases leave you with a huge blind spot that becomes absolutely critical — and incredibly dangerous — as we described above.
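The gap between count-based and risk-weighted reporting can be sketched in a few lines. The requirement names, risk weights and coverage figures below are invented for illustration; the point is only that weighting test results by business risk yields a very different picture than counting test cases.

```python
# Illustrative risk-weighted test reporting. All names and numbers
# are hypothetical, not drawn from a real project.

requirements = [
    # (name, business_risk_weight, fraction_tested, fraction_of_tested_passing)
    ("checkout",      40, 0.25, 0.50),  # the "engine" of the system, barely tested
    ("user search",   30, 0.50, 0.80),
    ("reporting",     20, 1.00, 1.00),  # low-risk area, fully tested
    ("admin screens", 10, 1.00, 1.00),
]

total_risk   = sum(w for _, w, _, _ in requirements)
risk_tested  = sum(w * t for _, w, t, _ in requirements)
risk_passing = sum(w * t * p for _, w, t, p in requirements)

print(f"Business risk tested:  {risk_tested / total_risk:.0%}")   # 55%
print(f"Business risk passing: {risk_passing / total_risk:.0%}")  # 47%
```

Here the raw test-case counts could look reassuring (the fully tested, low-risk areas contribute most of the passing tests), yet barely half of the business risk has been exercised at all. A go/no-go decision based on the second pair of numbers is defensible; one based on the first is a guess.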
Closing the gap
How do you evolve from the slow, burdensome testing that delivers questionable results to the lean, streamlined testing that provides the fast feedback required to accelerate innovation and delivery? That’s what I aim to explain in my new book, “Enterprise Continuous Testing: Transforming Testing for Agile and DevOps.” This book targets senior quality managers and business executives who need to achieve the optimal balance between speed and quality when delivering the software that drives the modern digital business. It also provides a roadmap for how to accelerate delivery with high confidence and low business risk.