Analyzing load test results: why analysis matters

Author:

Guest Contributors

Date: Nov. 05, 2020

By Deb Cobb, Enterprise Product Manager

The objective of performance testing is to identify and expose the most likely sources of risk and deliver the actionable insight required to mitigate the code issues and performance bottlenecks that lead to application downtime and outages. When done correctly, load tests reveal system errors and limitations through test results so they can be remediated.

Accurate interpretation of performance test data is critically important. Analysis results inform the tactical and strategic decisions that drive the business. A premature decision to go live with a risky website or application can have profound and unwelcome ramifications for revenue, brand, market perception, and the user experience. When the application fails to meet user expectations or deliver the expected value, customers will take their business elsewhere. That risk of lost revenue is why the analysis of performance test data is so essential. Results analysis is a predictive tool that describes the expected behavior of the application under production load stressors and provides the foundation for smart decision-making.

Because business success can depend on accurate load test analysis and conclusions, testers will benefit from a logical framework to approach load test analysis. Framework components include these steps:

  • Define criteria for success/failure
  • Establish objectives
  • Analyze test results
  • Report based on stakeholder preference

A logical framework for approaching load test analysis

Load testing results analysis can be a thankless and stressful job. It not only requires a comprehensive knowledge of load test design, an understanding of the technical layers involved in the application, and familiarity with the application’s architecture, but it also demands the ability to draw accurate conclusions from the data and communicate them to stakeholders, all within a short time window. Skill and experience enable testers to ask the right questions, conduct the proper tests, draw the correct conclusions, and win the trust of the stakeholders who make business decisions based on those conclusions.

Most testers will likely agree that breaking an app under heavy load is straightforward, but finding the underlying problem in automatically generated load testing reports is surprisingly challenging. For this reason, it’s best to follow a best-practice approach to performance testing.

Define critical goals

Defining criteria for success and failure is a prerequisite to any test strategy. Before testing an application, establish acceptable thresholds for robustness and performance. In most cases, these criteria are defined in terms of average and maximum response times per page, maximum error rate, or the number of requests per second.
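As an illustration, the sketch below shows one way such criteria could be captured as data and checked against an aggregated run summary. The field names, threshold values, and result structure are hypothetical and tool-agnostic, not NeoLoad syntax; substitute the metrics your own tool exports.

```python
# Minimal sketch of encoding pass/fail criteria before a test run.
# All thresholds and result fields below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SlaCriteria:
    avg_response_ms: float       # acceptable average response time per page
    max_response_ms: float       # acceptable maximum response time per page
    max_error_rate: float        # acceptable error rate (0.0 - 1.0)
    min_requests_per_sec: float  # minimum sustained throughput

def evaluate(results: dict, sla: SlaCriteria) -> list[str]:
    """Return a list of violated criteria for one aggregated test run."""
    violations = []
    if results["avg_response_ms"] > sla.avg_response_ms:
        violations.append("average response time exceeded")
    if results["max_response_ms"] > sla.max_response_ms:
        violations.append("maximum response time exceeded")
    if results["error_rate"] > sla.max_error_rate:
        violations.append("error rate exceeded")
    if results["requests_per_sec"] < sla.min_requests_per_sec:
        violations.append("throughput below target")
    return violations

# Hypothetical run summary exported from a load testing tool
run = {"avg_response_ms": 850, "max_response_ms": 2900,
       "error_rate": 0.004, "requests_per_sec": 120}
print(evaluate(run, SlaCriteria(1000, 3000, 0.01, 100)) or "PASS")
```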

Establish test objectives

Performance tests can evaluate application robustness and performance as well as hardware and bandwidth capacity. Clearly defined requirements set precise test objectives that assess application stability. When the simulated load remains constant over an extended period, load tests reveal whether the application supports the anticipated number of simultaneous users and delivers the desired response time for critical pages. Server resource usage should also be monitored during the run; the server is generally considered overloaded if usage figures (CPU or memory utilization, for example) regularly exceed 90%.
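For example, the following sketch applies that 90% rule of thumb to a series of utilization samples collected during a constant-load run. The sample values and the definition of "regularly" (more than 20% of samples) are assumptions for illustration only.

```python
# Minimal sketch: decide whether a server was "regularly" overloaded during a
# constant-load test, using the 90% rule of thumb mentioned above.
def is_overloaded(utilization_samples: list[float],
                  threshold: float = 0.90,
                  tolerated_fraction: float = 0.20) -> bool:
    """True if more than `tolerated_fraction` of samples exceed `threshold`."""
    over = sum(1 for u in utilization_samples if u > threshold)
    return over / len(utilization_samples) > tolerated_fraction

cpu_samples = [0.55, 0.72, 0.93, 0.95, 0.88, 0.97, 0.91, 0.64]  # hypothetical
print("overloaded" if is_overloaded(cpu_samples) else "within capacity")
```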

Stress tests can validate hardware configurations or the number of simultaneous users that the application can handle while maintaining acceptable response times. They also provide insight into server behavior under high load (e.g., does it crash?) and the load threshold above which the server begins to generate errors and refuse connections.
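The sketch below illustrates the idea of stepping up the simulated load until the error rate crosses an acceptable limit. The `run_step` hook and the simulated error curve are hypothetical stand-ins for whatever your load testing tool actually executes.

```python
# Minimal sketch of locating the load threshold at which errors appear.
# `run_step` is a hypothetical hook that runs a short test at a fixed user
# count and returns the observed error rate.
def find_error_threshold(run_step, start_users=50, step=50,
                         max_users=2000, max_error_rate=0.01):
    """Increase the simulated user count until the error rate exceeds the limit."""
    users = start_users
    while users <= max_users:
        error_rate = run_step(users)
        if error_rate > max_error_rate:
            return users  # first load level that breaches the error budget
        users += step
    return None  # no threshold found within the tested range

# Fake stand-in for demonstration: errors climb sharply past 800 users.
simulated = lambda users: 0.0 if users <= 800 else (users - 800) / 2000
print(find_error_threshold(simulated))  # -> 850
```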

Finally, performance tests measure variations after an application or infrastructure update, assessing whether the upgrades delivered real performance gains and which pages (if any) experienced degradation.

Analyze performance test results

Analysis begins with the testing tool. The primary goal of any performance testing tool is to provide a clear status on application performance and help testers derive insight from the data. Results and reports should be intuitive, easy to customize, and focused on three central themes:

  • Response times
  • Availability
  • Scalability

Ultimately, the reports that the tool generates need to demonstrate whether the performance requirements identified during the performance strategy phase have been met.

Understand test results in context

Testers should have access to granular statistics for different application pages to identify potential performance bottlenecks. Then, through comparison, testers can analyze results from different runs of the same (or different) scenario(s). Typical steps involved in a testing phase include:

  • Running a particular scenario
  • Analyzing results and identifying “slow” pages
  • Changing and improving slow pages and the code those pages call
  • Re-running the scenario
  • Comparing results before and after improvement (see the sketch below)
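To illustrate the final comparison step, here is a minimal sketch that contrasts per-page response times from two runs. The page names and timings are invented; in practice they would come from your tool's exported results.

```python
# Minimal sketch of comparing per-page response times before and after an
# optimization pass. All pages and values below are hypothetical.
def compare_runs(before: dict, after: dict) -> None:
    for page, old_ms in sorted(before.items()):
        new_ms = after.get(page)
        if new_ms is None:
            continue  # page not measured in the second run
        delta = (new_ms - old_ms) / old_ms * 100
        status = "improved" if delta < 0 else "regressed"
        print(f"{page:<12} {old_ms:>6.0f} ms -> {new_ms:>6.0f} ms "
              f"({delta:+.1f}%, {status})")

before = {"/home": 420, "/search": 1900, "/checkout": 2600}
after  = {"/home": 410, "/search": 1100, "/checkout": 2750}
compare_runs(before, after)
```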

Load tests reveal trends

Trends are essential indicators of progress (or regression) when newly released versions of application components are performance tested daily. Spotting performance regressions quickly and pinpointing which changes introduced them makes resolution easier and less expensive. Patterns also give testers an immediate, clear idea of the general trend in overall performance quality. Users can graph trends in key statistics across several tests to identify performance regression quickly.
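As a simple illustration, the sketch below plots an average response time trend across several hypothetical daily runs against an assumed SLA line, which is one common way to make a regression visible at a glance. It uses matplotlib, and the build labels and values are invented.

```python
# Minimal sketch: trend one key statistic (average response time) across
# several daily test runs to spot regressions early. Requires matplotlib.
import matplotlib.pyplot as plt

runs = ["build 101", "build 102", "build 103", "build 104", "build 105"]
avg_response_ms = [820, 810, 835, 1180, 1210]  # regression appears at build 104

plt.plot(runs, avg_response_ms, marker="o")
plt.axhline(1000, linestyle="--", label="SLA: 1000 ms")  # assumed threshold
plt.ylabel("Average response time (ms)")
plt.title("Response time trend across daily test runs")
plt.legend()
plt.tight_layout()
plt.savefig("response_time_trend.png")
```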

Report based on stakeholder preference

Almost all load testing solutions allow for the creation of complex and attractive graphs that correlate data. The performance engineer’s first inclination may be to include every available graph in the report. However, before creating the report, it’s important to understand the role and technical skills of the person who will read it.

Tricentis NeoLoad offers a selection of report types that testers can create. Technical reports show key data and graphs for developers and operations. Executive reports provide a concise application performance status and graphical presentations (e.g., pie charts) that make results easier to understand.

Customize reports for the intended audience

Because different stakeholders need specific performance test analysis information, each role may prefer different presentation and data representation methods. Using a performance test tool that offers a variety of data representations makes information sharing effortless. Testers need only augment the visuals with brief commentary and share the information cross-functionally.

Remember that different stakeholders have different information needs

Based on respective stakeholder roles and interests, expectations around reporting vary. QA and development stakeholders care about the technical implications of test result conclusions, such as overall code quality and service level thresholds. Other stakeholders may focus on the business implications of test analyses, such as the impact on revenue attainment and customer retention. While this can make sharing performance test results challenging, understanding the information requirements of different stakeholders and what reporting they value makes providing appropriate, on-demand information easier.

Every stakeholder wants to know whether the overall application performance is improving and if service level agreement (SLA) thresholds and internal requirements are met. However, the level of detail and the manner and frequency of report presentation varies significantly by stakeholder role.

Executive stakeholders have reporting needs and expectations that differ from those of other team members because they are responsible for meeting revenue targets and explaining why they are or are not achieved. Senior management needs concise reports that highlight key points and success/risk factors.

Project managers, development leads, and QA managers require the same insights as executive stakeholders, yet they need more frequent and detailed information about test coverage and whether performance tests identify and resolve issues efficiently.

Since much of their interest focuses on opportunities for analysis and improvement, technical team members want test results, monitoring data, and general observations that are actionable, relevant, and tailored to answer their specific questions.

Collaboration delivers success

Everyone wins when the application meets or exceeds business and user expectations. When testers have access to powerful testing tools, they can quickly inform stakeholders and provide the actionable insights that fuel discussion, collaboration, and decision-making.

Using NeoLoad, testers and stakeholders can improve project collaboration and:

  • Create customizable, personalized, and shareable graphical dashboards that allow users and stakeholders to mix several elements and KPIs in the same graph (e.g., average transaction time, average request response time, and total failed transactions for each element)
  • Share test analysis for running or completed tests with all stakeholders: developers, QA managers, business stakeholders, and product owners
  • View test results during runtime in continuous integration (CI) testing environments
  • Extract test data and metrics through NeoLoad open APIs; analyze and correlate third-party test data and tools to build custom reports (see the sketch below)
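As a rough illustration of the last point, the sketch below shows the general pattern of pulling run statistics over a results API and printing a custom report line. The base URL, endpoint path, header, and field names are placeholders rather than the actual NeoLoad API routes; consult the NeoLoad API documentation for the real endpoints and authentication details.

```python
# Hypothetical sketch of extracting test statistics from a results API.
# URL, path, header, and field names below are placeholders, NOT the
# documented NeoLoad API.
import requests

BASE_URL = "https://example.neoload.invalid/api"   # placeholder
TOKEN = "YOUR_API_TOKEN"                           # placeholder

def fetch_run_statistics(test_id: str) -> dict:
    resp = requests.get(f"{BASE_URL}/test-results/{test_id}/statistics",
                        headers={"accountToken": TOKEN}, timeout=30)
    resp.raise_for_status()
    return resp.json()

stats = fetch_run_statistics("example-test-id")
print(f"avg request response time: {stats.get('avgRequestResponseTime')} ms")
```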

Conclusion

Performance test analysis provides insight into the availability, scalability, and response time of the application under test. Reporting and sharing these insights with extended team members and stakeholders supports critical technical and business decisions. By creating useful reports that respond to stakeholder concerns, testers can promote the collaboration that underpins business decisions about application release and deployment risk factors.

Deb Cobb’s profile on LinkedIn

The post was originally published in 2018 and was most recently updated in July 2021.
