5 best practices for performance testing at the speed of Agile

By Larry Loeb, Veteran Technology Editor and Author 

Here are five best practices that can help teams get the most out of load testing in an Agile environment.

1. Make performance SLAs a focus area 

Performance needs to be somewhere on the task board if the team is going to give it attention. Otherwise, it will get ignored. One effective way to ensure performance is included is to use your performance service-level agreements (SLAs) as acceptance criteria for each story. That means the story cannot be “done” if its changes would cause the application to fall short of the SLAs.

This discipline works well if the changes made to the story will affect a relatively small section of the overall code. The performance issues would, therefore, be confined to a portion of the application. 

For SLAs that apply across the entire application, the corresponding tests should be added to a broader list of constraints (which may include functional tests) that is checked for every story, to determine whether the story meets a minimal “definition of done” without breaking any of the constraints.
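As a sketch of the idea, the check below treats an SLA as a pass/fail acceptance gate over the latency samples collected during a story's load-test run. The thresholds and function names are illustrative assumptions, not taken from any specific tool or contract:

```python
import statistics

# Illustrative SLA thresholds (assumptions, not from a real contract)
SLA_P95_MS = 800       # 95th-percentile response-time ceiling
SLA_ERROR_RATE = 0.01  # at most 1% failed requests

def meets_sla(latencies_ms, errors, requests):
    """Return True only if the load-test run stays within the SLA."""
    if requests == 0 or len(latencies_ms) < 2:
        return False
    # statistics.quantiles with n=20 yields 19 cut points; the last is p95
    p95 = statistics.quantiles(latencies_ms, n=20)[-1]
    return p95 <= SLA_P95_MS and errors / requests <= SLA_ERROR_RATE
```

Under this scheme, a story would count as “done” only when `meets_sla` returns `True` for its load-test run.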

2. Work closely with developers to anticipate change 

Testers should always be thinking about how the stories currently being coded will ultimately be tested. They can stay ahead of the curve by staying engaged with the team, especially the developers. A tester will typically learn about updates to development tasks during the daily stand-ups or whatever scrum-like meetings the organization uses to signal progress.

Specific questions like “Will this change require new load tests?” or “Will this break the current test scripts?” keep the tester focused on the changes coming down the pike. Dealing with these changes proactively can only improve the outcome.

3. Integrate with a build server 

In the same way that performance goals need to be attached to the task board, performance tests should be among the recurring tests run with every build. This can be done by having the build server initiate the test and by surfacing the generated results inside the build tool. The person who kicked off the build can then see the results and, at the same time, know which changes went into that build, so any performance issue that shows up can be traced back and fixed.
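One minimal way to wire this up is sketched below, under the assumption that the load-test run writes a JSON summary of per-transaction p95 latencies (the file format and field names here are hypothetical). The script exits non-zero on an SLA breach, which is the conventional signal a build server uses to mark a build as failed:

```python
import json
import sys

def sla_breaches(summary):
    """Return the names of transactions whose p95 latency exceeds their SLA."""
    return [t["name"] for t in summary["transactions"]
            if t["p95_ms"] > t["sla_ms"]]

def main(path):
    # Load the JSON summary written by the load-test run (hypothetical format)
    with open(path) as f:
        summary = json.load(f)
    breaches = sla_breaches(summary)
    for name in breaches:
        print(f"SLA breach: {name}")
    # Non-zero exit code tells the build server to fail the build
    sys.exit(1 if breaches else 0)

if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv[1])
```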

4. CI + nightly build + end-of-sprint load testing 

The difference between continuous integration (CI), nightly, and post-sprint builds can be significant: a CI build may contain a single change made that day, while a post-sprint build contains all the changes committed during the sprint.

So performance tests for these builds should start small and use the internal load generators that are available. A small performance test that covers the most common scenarios with a typical application load produced by your internal load generators will run the fastest.

CI builds, and their tests, should run quickly so that results about how the build's changes affected the system come back fast. To be of any practical use, these results must get back to the developer who kicked off the CI run.
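This progression from small CI runs to full end-of-sprint runs can be captured as a simple configuration. All of the figures below are illustrative assumptions, not recommendations:

```python
# Illustrative load profiles per build type (all numbers are assumptions)
LOAD_PROFILES = {
    "ci":      {"virtual_users": 10,  "duration_s": 120,  "scenarios": "smoke"},
    "nightly": {"virtual_users": 100, "duration_s": 1800, "scenarios": "common"},
    "sprint":  {"virtual_users": 500, "duration_s": 7200, "scenarios": "full"},
}

def profile_for(build_type):
    """Pick the load profile for a build type; fall back to the smallest run."""
    return LOAD_PROFILES.get(build_type, LOAD_PROFILES["ci"])
```

The point of the fallback is that an unrecognized build type still gets the fast, cheap smoke-level test rather than no test at all.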

5. Realistic tests 

Emulating real-world network conditions is one critical part of a test. Look for a test setup that provides WAN emulation, which limits bandwidth and simulates latency and packet loss. This enables a test in which virtual users download the content of the web application at a realistic rate. This capability is particularly important when testing a mobile application, because mobile devices typically operate with less bandwidth than laptops and desktops and can be significantly affected by changes in latency and packet loss (especially when signal strength is weak).

The methodology should be to record traffic from any browser or mobile device and then replay it during the load tests. Simulating devices is important because each device opens a number of parallel connections, and reproducing that behavior is needed for realistic response times and server load. These parallel requests require more connections to the server and can lengthen response times.
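A back-of-the-envelope model (a deliberate simplification, with every figure assumed) shows why latency and parallel connections matter so much on mobile: connections share the available bandwidth, and each batch of parallel requests pays at least one round trip of latency.

```python
import math

def page_load_time_s(resource_sizes_kb, bandwidth_kbps, latency_ms, max_parallel=6):
    """Crude page-load estimate: shared bandwidth, one round trip per batch."""
    rounds = math.ceil(len(resource_sizes_kb) / max_parallel)
    transfer_s = sum(resource_sizes_kb) * 8 / bandwidth_kbps  # KB -> kilobits
    return rounds * latency_ms / 1000 + transfer_s

# The same 12-resource page over a fast desktop link vs. a constrained mobile link
desktop = page_load_time_s([100] * 12, bandwidth_kbps=20000, latency_ms=20)
mobile = page_load_time_s([100] * 12, bandwidth_kbps=2000, latency_ms=300)
```

Even with an identical payload, the mobile case is dominated by latency and constrained bandwidth, which is exactly the effect WAN emulation lets you reproduce during a load test.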

Test globally, outside your firewall. To truly understand how location will affect performance for your users, look for a solution that can also generate load from cloud servers around the world.