Test Automation Challenges and Strategies

From the QA perspective, test automation involves creating a test artifact that automatically exercises a use case or requirement in isolation. It’s designed to repeatedly confirm whether actual outcomes at specific checkpoints match the expected outcomes (according to the requirement or user story). QA-level test automation is traditionally performed at the UI level through scripting and/or record-and-playback tools.
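The checkpoint idea above can be sketched in a few lines of Python. Everything here is hypothetical: `get_cart_total` stands in for the system under test (a real script would drive a UI or API instead), and `run_checkpoint` is a minimal stand-in for the pass/fail comparison that scripting and record-and-playback tools generate.

```python
# Minimal sketch of checkpoint-style test automation. The application under
# test is stubbed out; all names here are hypothetical illustrations.

def get_cart_total(items):
    """Stand-in for the system under test: sums (price_cents, qty) line items."""
    return sum(price * qty for price, qty in items)

def run_checkpoint(name, actual, expected):
    """Compare an actual outcome to the expected outcome and record pass/fail."""
    passed = actual == expected
    print(f"{name}: {'PASS' if passed else 'FAIL'} (actual={actual}, expected={expected})")
    return passed

# Each checkpoint maps back to a requirement or user-story acceptance criterion,
# and the suite can be re-run on every build to confirm the outcomes still match.
results = [
    run_checkpoint("empty cart totals zero", get_cart_total([]), 0),
    run_checkpoint("two items sum correctly", get_cart_total([(999, 2), (500, 1)]), 2498),
]
print("all checkpoints passed:", all(results))
```

The value of the artifact is exactly its repeatability: the same checkpoints can be re-executed unattended after every change.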

Most UI-level tests need to be constantly updated to avoid false positives as the application evolves. Thus, if applications change frequently, test maintenance becomes a time-consuming chore. Once the maintenance burden takes its toll, teams often abandon test automation efforts and revert to manual testing. As a result of these difficulties with brittle, high-maintenance tests, few test teams have been able to achieve satisfactory levels of test automation—even in waterfall processes.

What Is the Average Rate of Software Test Automation?

That depends on who you ask. Here are three different perspectives:

The World Quality Report (by Capgemini, Micro Focus, and Sogeti) is based on a sample of 1,600 interviews from enterprises with 1,000+ employees (25%), 5,000+ employees (3%), and 10,000+ employees (41%). They report:

“It is worrying that our survey results show automation is currently under-exploited in QA and testing. While we see a rise in the number of organizations benefitting from automation, the value they generate is largely unchanged and the level of test automation is still low (below 20%).”

The Sauce Labs Testing Trends report surveyed a different demographic: 732 professionals responsible for testing web and mobile applications, 67% of whom deploy hourly (10%), daily (34%), or weekly (23%). This sample yielded higher test automation rates:

“In this year’s survey, 42% cited their testing efforts as “mostly” or “entirely” manual, many more than the 32% that said they were “mostly” or “entirely” automated. While there remains room for improvement, there is good news that progress is being made in this area. The number of development teams whose testing is “mostly” or “entirely” automated is up to almost 1 in 3 (32%), from only 1 in 4 (26%) last year.”

QASymphony and Techwell found that the rate of test automation is 29% at mid-size companies and 17% at the enterprise level. They offer a detailed breakdown of the types of tests being automated:

“Regression tests are by far the most frequently automated test type (86%), followed by repeated execution (46%), load (29%), performance (29%), and cross-browser (29%) testing. Most testing (63%) is still being done solely by QA, but we are seeing a rise in collaborative testing teams that include both QA and developers (27%).”

At Tricentis, our experience reviewing actual client test portfolios has found an average test automation rate of 11% (based primarily on large enterprises).

High Maintenance

Traditional script-based automated tests need frequent updating to keep pace with highly-dynamic, accelerated release processes. This results in an overwhelming amount of test automation false positives that require burdensome maintenance and/or cause automation efforts to be abandoned.

Slow Execution Time

Traditional tests are time-consuming to execute, so it is not practical to run a meaningful regression test suite on each build. This means the team lacks instant feedback on whether their changes impact the existing user experience—undermining the goals of CI.

Frequent Failure

With today’s complex, interconnected applications, test environment inconsistencies commonly impede test automation efforts and result in false positives. Again, this requires burdensome follow-up and/or causes automation efforts to be abandoned.

What’s Driving the Need for Increased Test Automation?

Ironically, the same trends that are driving the need for increased test automation are also making test automation more challenging:

Application architectures are increasingly distributed and complex, embracing cloud, APIs, and microservices, which creates virtually endless combinations of different protocols and technologies within a single business transaction.

Thanks to Agile, DevOps, and Continuous Delivery, many applications are now released anywhere from every two weeks to thousands of times a day.

Now that software is the primary interface to the business, an application failure is a business failure—and even a seemingly minor glitch can have severe repercussions if it impacts the user experience.

Is Selenium the Best Tool for Test Automation?

That depends on your applications and your expertise (business domain expert or programming expert). Selenium is well-suited for web UI development teams where testing is conducted by developers or testers who are well-versed in a programming/scripting language. Yet, Selenium is not a test automation panacea. An enterprise-level quality strategy also requires the coordinated orchestration of a broader set of software testing practices, as well as the ability to exercise end-to-end transactions beyond the browser.

The major enterprise test automation tools include:

Tricentis Tosca

Micro Focus / HP UFT

CA Technologies

Microsoft Visual Studio

IBM Rational

For analyst evaluations of these tools, see the Gartner MQ for Software Test Automation and the Forrester Wave for Functional Test Automation.

What’s the Difference Between Test Automation & Continuous Testing?

Continuous testing is the process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release candidate as rapidly as possible.

Test automation is designed to produce a set of pass/fail data points correlated to user stories or application requirements. Continuous testing, on the other hand, focuses on business risk and providing insight on whether the software can be released. To achieve this shift, we need to stop asking “are we done testing?” and instead ask “does the release candidate have an acceptable level of business risk?”

With Continuous Testing, test automation is extended and supported by practices such as risk-based test case design and prioritization, stateful test data management, test-driven service virtualization, and seamless integration into the DevOps toolchain.
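As a rough illustration of the shift from pass counts to business risk, a suite can weight each test case by the business impact of what it covers and report risk coverage instead. This is an assumed scoring scheme for illustration only, not a description of any specific product; all names and weights are hypothetical.

```python
# Illustrative sketch of risk-based test reporting: each test case carries a
# business-risk weight, and the suite reports the share of risk covered by
# passing tests rather than a raw pass count. All names/weights are made up.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    risk_weight: float  # relative business impact of the functionality covered
    passed: bool

def risk_coverage(cases):
    """Fraction of total business risk covered by passing tests."""
    total = sum(c.risk_weight for c in cases)
    covered = sum(c.risk_weight for c in cases if c.passed)
    return covered / total if total else 0.0

suite = [
    TestCase("checkout happy path", risk_weight=5.0, passed=True),
    TestCase("refund processing", risk_weight=3.0, passed=True),
    TestCase("profile avatar upload", risk_weight=0.5, passed=False),
]

# 8.0 of 8.5 risk units pass, so business-risk coverage is high even though
# only 2 of 3 tests passed.
print(f"risk coverage: {risk_coverage(suite):.0%}")
```

A pass-count view says a third of the suite failed; the risk view says the release candidate covers roughly 94% of weighted business risk, with the remaining exposure confined to a low-impact feature. That is the kind of release-readiness signal continuous testing aims to provide.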