End-to-end testing — which tries to recreate the user experience by testing an application’s entire workflow from beginning to end, including all integrations and dependencies with other systems — is more difficult now than ever.
The challenges with end-to-end testing have increased tremendously over the past several years as enterprise IT has exploded; this has led to an unprecedented number of applications, all of which are highly distributed and interconnected.
But it’s exactly this situation that makes conducting end-to-end testing an imperative for your organization.
[You might like: How to master enterprise end-to-end testing]
The average organization uses more than 900 applications today, according to MuleSoft’s 2022 Connectivity Benchmark Report, and a single business workflow might touch dozens of these applications via microservices and APIs. To ensure business processes keep running, testers must replicate the work users perform across multiple applications and ensure none of those workflows are impacted when one of those applications is updated.
Ongoing cloud migration further complicates things. Bessemer Venture Partners’ State of the Cloud Report notes that more than 140 public and private cloud companies have now reached a valuation of $1 billion or more. At current growth rates, cloud could penetrate nearly all enterprise software within a few years, according to the report’s authors. That means tests must function across heterogeneous architectures as enterprise cloud migration journeys progress.
To truly protect the user experience as all of these enterprise IT systems evolve at ever-increasing speeds, it’s critical to test the complete end-to-end business process, which may span multiple applications, architectures, and interfaces. That’s because any given part of an application might function differently when working in conjunction with another system than it does when working in isolation — the latter of which is not a real-world scenario. Given this situation, it’s no surprise that leading industry analysts call out end-to-end testing as a critical capability for test automation software.
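To make that concrete, here is a minimal sketch of what a test that follows one business process across two systems might look like. It assumes a hypothetical order API and a separate fulfillment service, uses the Python requests library, and every endpoint, payload, and field name in it is illustrative rather than taken from any real system.

```python
# A minimal sketch of an end-to-end check that follows one business workflow
# across two systems: an order API and a separate fulfillment service.
# The endpoints, payloads, and field names below are hypothetical.
import requests

ORDER_API = "https://orders.example.com/api/v1"
FULFILLMENT_API = "https://fulfillment.example.com/api/v1"

def test_order_reaches_fulfillment():
    # Step 1: create an order through the customer-facing application.
    order = requests.post(
        f"{ORDER_API}/orders",
        json={"sku": "ABC-123", "quantity": 2, "customer_id": "cust-42"},
        timeout=30,
    )
    assert order.status_code == 201
    order_id = order.json()["id"]

    # Step 2: verify the same order appears in the downstream fulfillment
    # system, i.e., the integration between the two applications works,
    # not just each application in isolation.
    shipment = requests.get(
        f"{FULFILLMENT_API}/shipments",
        params={"order_id": order_id},
        timeout=30,
    )
    assert shipment.status_code == 200
    assert shipment.json()["status"] in {"pending", "picked", "shipped"}
```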
Despite this growing need, end-to-end testing isn’t easy. Not only do today’s applications evolve at a rapid-fire pace, but they’re often highly connected with other systems in an enterprise IT landscape. These connections create numerous dependencies and, as a result, many points of potential failure to test. It’s all but impossible to carry out extensive end-to-end testing manually, unless you have a lot of time on your hands, but end-to-end test automation has its own challenges.
In fact, Google went so far as to “say no” to conducting more end-to-end testing, citing the relative instability of the test scripts, which require updating every time a connected application gets updated, creating a significant maintenance burden. Despite this challenge, comprehensive end-to-end testing still offers the best way to protect the user experience, which should be the ultimate goal of everyone from business analysts to developers and testers.
There’s no doubt about it: Successful end-to-end testing is challenging. But it’s also well within reach for modern testing organizations. Success is simply about understanding the challenges, identifying the best ways to overcome them, and introducing the right processes and technology to help put those plans into action.
With that in mind, here’s a look at the top seven end-to-end testing challenges, plus best practices for how to address them.
Proper end-to-end testing will likely include a combination of enterprise applications (e.g., SAP, Salesforce, Oracle, and ServiceNow) and custom-developed, customer-facing applications. Gregor Hohpe of “The Architect Elevator” sums up why testing across disparate, interconnected systems is so difficult:
“Complex, highly interdependent systems tend to have poorly understood failure states and large failure domains: It’s difficult to know what can go wrong; if something does go wrong, it’s difficult to know what actually happened; and if one part breaks, the problems cascade in a domino effect.”
Of course, that complexity is exactly what makes end-to-end testing so important, particularly in a DevOps-driven world where speed is a priority and applications change quickly. To address this challenge and maintain speed, organizations must introduce advanced test automation tools. The testing technology matters here: organizations need high levels of automation to preserve both speed and coverage when testing all of the necessary workflows within an application, including every connection point with other applications.
End-to-end tests are flat-out difficult to maintain. That’s because every time a component of the application’s user interface changes, the test needs to get updated along with it. In today’s world of frequent updates, that can mean quite a lot of changes. And if your tests don’t get updated to match UI changes, they may miss critical bugs that degrade the user experience.
One of the best ways to combat these challenges is to prioritize certain workflows over others based on risk, so QA teams aren’t overwhelmed with writing and rewriting end-to-end tests for every possible workflow. While end-to-end testing is absolutely a must for the reasons described above, not every single area of the application requires this level of scrutiny if testers also use lower-level tests, such as unit tests and integration tests, throughout.
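For contrast, here is what one of those lower-level tests might look like: a plain unit test that pins down a single business rule in isolation, so the heavier end-to-end suite can stay focused on the riskiest workflows. The discount function and its threshold below are hypothetical.

```python
# A lower-level complement to end-to-end coverage: a unit test exercising one
# business rule in isolation. The discount logic here is hypothetical; the
# point is that rules like this do not need a full cross-application test.
def apply_volume_discount(subtotal: float, quantity: int) -> float:
    """Apply a 10% discount to orders of 100 units or more."""
    return subtotal * 0.9 if quantity >= 100 else subtotal

def test_discount_applies_at_threshold():
    assert apply_volume_discount(1000.0, 100) == 900.0

def test_no_discount_below_threshold():
    assert apply_volume_discount(1000.0, 99) == 1000.0
```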
Beyond overall maintenance challenges, end-to-end tests tend to be “flaky” because they are meant to mimic real-world scenarios. As a result, factors like network conditions, API failures, and system load can impact the outcomes of these tests.
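Teams often absorb these transient, environment-driven failures by retrying a step before declaring it failed. The helper below is a generic sketch of that pattern, not tied to any particular test framework; the attempt count and backoff values are arbitrary choices.

```python
# One common way to absorb transient, environment-driven failures (network
# blips, brief API outages, load spikes) in an end-to-end step: retry with a
# short, growing delay before failing the test.
import time

def run_with_retries(step, attempts=3, delay_seconds=2.0):
    """Run a test step, retrying on infrastructure-style errors only."""
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except AssertionError:
            # A real assertion failure is a bug, not flakiness; don't mask it.
            raise
        except Exception as exc:  # e.g., connection resets, timeouts
            last_error = exc
            time.sleep(delay_seconds * attempt)
    raise last_error
```

Deliberately re-raising assertion failures keeps the retry from hiding genuine defects; only infrastructure-style errors get another attempt.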
Additionally, the testing solution used matters, particularly given the level of test automation required for ongoing end-to-end testing at the necessary speed. Selenium, for example, is a useful tool, but it tends to create brittle tests (due to factors like data, context, and ties to external services), so it works well only if your organization has the resources to maintain and update the test scripts.
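One way teams reduce that brittleness within Selenium itself is to rely on explicit waits and stable, test-specific locators rather than timing assumptions and fragile selectors. The snippet below is a sketch of that approach using Selenium’s Python bindings; the URL and data-testid attributes are made up for illustration, and it assumes a local Chrome browser and driver are available.

```python
# A sketch of a less brittle Selenium step: explicit waits plus stable
# data-testid locators instead of sleeps and position-based XPath.
# The site and attribute names are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://shop.example.com/checkout")
    wait = WebDriverWait(driver, timeout=15)

    # Wait until the button is actually clickable rather than assuming timing.
    place_order = wait.until(
        EC.element_to_be_clickable((By.CSS_SELECTOR, "[data-testid='place-order']"))
    )
    place_order.click()

    confirmation = wait.until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, "[data-testid='order-confirmation']"))
    )
    assert "Thank you" in confirmation.text
finally:
    driver.quit()
```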
Using model-based test automation — for example, with a tool like Tricentis Tosca — can help combat the flaky nature of end-to-end tests. Tosca’s modular test design eliminates the maintenance burden that’s typically so challenging for end-to-end test automation. Its no-code approach means that there’s no scripting knowledge required, so testers can start and quickly scale end-to-end test automation, regardless of their skillset. And because it’s built for both enterprise packaged applications and custom-developed software, it’s ideal for testing end-to-end workflows that span both. To see how it works, watch the webinar: How to master enterprise end-to-end testing: A scalable, codeless approach.
On average, organizations require access to 33 different systems for developing and testing. This means a lot of dependencies on web services and third parties exist throughout the testing process, many of which are likely external systems over which an organization’s QA team has no control. And these connections continue to increase, which only adds to the number of applications to account for during end-to-end testing.
Including those connected systems in end-to-end testing can be challenging when they are changing rapidly themselves. It can also become quite costly depending on the number of systems involved that charge for simulations. The solution to this challenge lies in a service virtualization solution that can mock those external systems for end-to-end testing so that testers don’t have to pay for costly simulations or rely on a live version of the system (which may experience issues that can contribute to test flakiness). Ultimately, this type of solution eliminates many of the factors that are out of testers’ control when it comes to interacting with connected apps.
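As a rough illustration of the idea, the sketch below stands up a throwaway HTTP stub that answers the way a hypothetical third-party credit-check service might, so an end-to-end run does not depend on that system being available or billable. Dedicated service virtualization tools do this far more robustly and at much larger scale; the route and response shape here are invented.

```python
# A minimal sketch of "virtualizing" an external dependency for a test run:
# a throwaway HTTP stub that answers like a third-party credit-check service,
# so the end-to-end test is not hostage to that system's availability or fees.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class CreditCheckStub(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"customer_id": "cust-42", "approved": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, format, *args):
        pass  # keep test output quiet

def start_stub():
    server = HTTPServer(("localhost", 0), CreditCheckStub)  # port 0 = any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, f"http://localhost:{server.server_address[1]}"

# Usage: point the application under test at the stub's URL instead of the
# real third-party endpoint, e.g. via an environment variable or config file.
```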
End-to-end tests are often much slower than other types of testing, which can be a challenge for DevOps-driven teams that want immediate feedback so they can react quickly. Ultimately, the comparatively slower speed of end-to-end tests makes iterative feedback difficult. And this challenge only compounds as the number of end-to-end tests in use increases.
This challenge goes back to two critical solutions: (1) Increasing automation to help maintain speed throughout testing, since automated tests will always run faster than manual tests, and (2) prioritizing which workflows require end-to-end testing and which don’t. The latter of these solutions is especially important, as it’s not realistic for organizations to conduct end-to-end testing for every possible workflow within their applications. Rather, it’s important to identify top workflows within the application (either due to level of usage or business-critical functionality) and prioritize those for end-to-end testing, while supplementing with lower-level tests throughout.
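One lightweight way to encode that prioritization in an automated suite is to tag tests by risk tier and run the tiers on different cadences. The sketch below uses pytest markers; the marker names and workflows are illustrative, and custom markers would need to be registered in the project’s pytest configuration to avoid warnings.

```python
# Tag business-critical workflows so they run on every change, while the full
# end-to-end suite runs on a slower cadence. Marker names are illustrative and
# should be registered under [pytest] markers in pytest.ini or pyproject.toml.
import pytest

@pytest.mark.critical
def test_checkout_workflow():
    ...  # full end-to-end flow for the highest-traffic revenue path

@pytest.mark.extended
def test_gift_card_redemption_workflow():
    ...  # lower-risk flow, run nightly instead of on every commit

# Fast feedback loop:   pytest -m critical
# Nightly pipeline:     pytest -m "critical or extended"
```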
Testers spend more time finding and preparing the right test data than on any other activity. And end-to-end tests require a variety of data, regardless of whether they’re manual or automated. For example, testers might need to track down historical data or speak to a subject matter expert to get the right data. In some cases, organizations pull in production data and anonymize it for security purposes, but that approach adds another layer of complexity and can create risk in the event of an audit.
Fortunately, there is another way to speed up this process without adding the complexity created by using production data: introducing a test data management tool to automate the creation of synthetic test data. Testers can run about 80-90% of the necessary tests using this synthetic test data, which mimics production data but doesn’t carry the same risk since it is not actually real user data. And because a test data management tool can automate the creation of this synthetic data, it makes the entire process faster.
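As a simple illustration, synthetic records can be generated so they are shaped like production data while containing nothing real. The sketch below uses only the Python standard library and invented field names; a test data management tool would do the same thing at scale and keep the data consistent across every system in the workflow.

```python
# A minimal sketch of generating synthetic test data that mimics the shape of
# production records without containing any real user data. Field names and
# value ranges are illustrative.
import random
import string
import uuid
from datetime import date, timedelta

def synthetic_customer():
    first = random.choice(["Ana", "Ben", "Chen", "Dara", "Emil"])
    last = random.choice(string.ascii_uppercase) + "".join(
        random.choices(string.ascii_lowercase, k=6)
    )
    return {
        "customer_id": str(uuid.uuid4()),
        "name": f"{first} {last}",
        "email": f"{first.lower()}.{last.lower()}@example.test",
        "signup_date": (date.today() - timedelta(days=random.randint(0, 3650))).isoformat(),
    }

# Generate a batch of records for an end-to-end test run.
customers = [synthetic_customer() for _ in range(1000)]
```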
All of the complexities involved with end-to-end testing become even more challenging if testing is distributed rather than centralized, Tricentis Founder Wolfgang Platz wrote for “InfoWorld.” With end-to-end testing, the entire team — from business analysts to developers and testers — needs to work together, and this isn’t easy when each set of users has different tools and information doesn’t carry over from one to the next. When that happens, teams end up duplicating work or building custom integrations between the tools. Ultimately, it can lead to misunderstandings and breakdowns in communication.
To deliver a smoother end-to-end testing process, teams should align on a solution that can synchronize information across the variety of technologies each group uses. Doing so should create a single source of truth to eliminate communication issues and make the hand-off from one team to the next more efficient. Additionally, because end-to-end testing connects tests across front-end systems of engagement and back-end systems of record to assess the complete user experience, this type of alignment across teams not only improves the testing process for internal users, but delivers better results across packaged and customer-facing apps.
There’s no getting around it: End-to-end testing is challenging, and the explosion of enterprise IT alongside increasingly rapid speeds of change only complicates it further. However, it’s these exact reasons that make end-to-end testing so important for organizations to conduct regularly.
Specifically, all the dependencies between applications create various points of failure and require more complete testing that mimics real-world scenarios for users. And while organizations won’t realistically be able to apply end-to-end testing to every single workflow within an application, they do need to apply this higher level of testing to highly used and “mission critical” workflows.
The key to delivering on this need successfully (which includes maintaining the necessary speed and overcoming challenges around test maintenance, flakiness, and more) lies in introducing the right technology and processes. Doing so reduces the maintenance burden, results in fewer flaky tests, speeds up test setup and feedback times, and helps keep all users aligned, among many other benefits.