Today, a poorly performing app isn’t just cursed at—it’s abandoned. Performance issues don’t just taint the user experience—they directly impact the success of the app you’ve worked so hard on.
So why aren’t teams religiously checking that their latest updates don’t degrade performance? Because load testing is hard. It’s tedious to create and maintain load testing scripts for even a relatively simple app. And the faster the app is evolving, the greater the pain.
Open source load testing tools like JMeter and Gatling allow developers and testers to load test without cost-prohibitive tools and infrastructure. However, these tools won’t start (or continue) to deliver valuable feedback unless you’re willing to dedicate a fair amount of time to them. Few of us are excited about tasks like resolving hundreds of protocol-level discrepancies for each step of even a very simple test.
In this 2-part blog series, I’m going to look at why load testing is so hard—then introduce a simpler, faster approach designed specifically for developers and testers working within a modern DevOps practice.
The Problem with Protocol-Level Load Testing
The traditional way of approaching load test scripting is at the protocol level (e.g., HTTP). This includes load testing with open source tools such as JMeter and Gatling, as well as legacy tools including LoadRunner. Although simulating load at the protocol level has the advantage of being able to generate large concurrent load from a single resource, that power comes at a cost. The learning curve is steep and the complexity is easily underestimated.
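To make the trade-off concrete, here is a minimal sketch of what protocol-level load generation amounts to: one process driving many concurrent HTTP requests, the way a JMeter or Gatling scenario would under the hood. The local test server, the `/inbox` path, and the user counts are illustrative assumptions, not part of any real tool or app.

```python
# Protocol-level load sketch: one process, many concurrent HTTP requests.
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class Handler(http.server.BaseHTTPRequestHandler):
    """Stand-in for the system under test."""
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

def run_load(url, users, requests_per_user):
    """Fire users * requests_per_user GETs concurrently; return latencies (s)."""
    latencies = []
    lock = threading.Lock()
    def user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            with urllib.request.urlopen(url) as resp:
                resp.read()
            with lock:
                latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(user)
    return latencies

if __name__ == "__main__":
    server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}/inbox"
    latencies = run_load(url, users=20, requests_per_user=5)
    print(f"{len(latencies)} requests, "
          f"avg {sum(latencies) / len(latencies) * 1000:.1f} ms")
    server.shutdown()
```

Twenty simulated users fit comfortably in one process—that is the appeal. What the sketch leaves out is everything a real scenario needs on top: sessions, tokens, and data correlation, which is where the cost shows up.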
For a business-focused example, consider the SAP Fiori demo app. Assume we want to load test two simple actions: navigating to a page and then clicking the “My Inbox” icon. At the protocol level, those two actions generate more than 120 HTTP requests.
To Simulate a User, Think Like a Browser
When you start building your load test simulation model, a realistic scenario quickly translates into thousands of protocol-level requests that you need to faithfully record and then manipulate into a working script. You must review the request and response data, perform some cleanup, and extract the information needed to realistically simulate user interactions at the business level. You can’t just think like a user; you also must think like the browser.
You need to consider all the other functions that the browser is automatically handling for you and figure out how you’re going to compensate for that in your load test script. Session handling, cookie header management, authentication, caching, dynamic script parsing and execution, taking information from a response and using it in future requests … all of this needs to be handled by your workload model and script if you want to successfully generate realistic load. Basically, you become responsible for doing whatever is needed to fill the gap between the technical and business level. This requires both time and technical specialization.
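The chores above can be sketched in a few lines. This hypothetical example shows two of them done by hand: carrying the session cookie forward and extracting a dynamic token from one response for use in the next request. The header and field names (`JSESSIONID`, `__RequestVerificationToken`, `X-CSRF-Token`) are illustrative assumptions, not taken from any specific app.

```python
# "Think like a browser": chores a protocol-level script must do manually.
import re

def extract_cookies(set_cookie_headers):
    """Collect name=value pairs from Set-Cookie headers, as a browser would."""
    jar = {}
    for header in set_cookie_headers:
        name_value = header.split(";", 1)[0]
        name, _, value = name_value.partition("=")
        jar[name.strip()] = value.strip()
    return jar

def extract_csrf_token(html):
    """Pull a hidden-input token out of the HTML so it can be replayed."""
    match = re.search(
        r'name="__RequestVerificationToken"\s+value="([^"]+)"', html)
    return match.group(1) if match else None

def build_next_request_headers(jar, token):
    """Assemble the headers the *next* request needs to look like a browser."""
    headers = {"Cookie": "; ".join(f"{k}={v}" for k, v in jar.items())}
    if token:
        headers["X-CSRF-Token"] = token
    return headers

# One simulated server response, hand-rolled for the example:
set_cookie = ["JSESSIONID=abc123; Path=/; HttpOnly"]
html = '<input type="hidden" name="__RequestVerificationToken" value="tok-42">'

jar = extract_cookies(set_cookie)
token = extract_csrf_token(html)
print(build_next_request_headers(jar, token))
# → {'Cookie': 'JSESSIONID=abc123', 'X-CSRF-Token': 'tok-42'}
```

A browser does all of this invisibly on every request. In a protocol-level script, each of these steps is something you must discover, code, and then keep maintaining as the app changes.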
You might be thinking, “Okay, we’ll use ‘record and playback’ tools, then.” Theoretically, you could just place a proxy between your browser and the server, record all the traffic going through and be set. Unfortunately, it’s not quite that simple. Even though you’re interacting at the UI level, the testing is still based on the protocol level. Assume we were looking at the traffic associated with one user performing the simple “click the inbox” action described above. When we record the same action for the same user two different times, there are tens if not hundreds of differences in the request payload that we’d need to account for.
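To see why raw record-and-playback breaks, compare two recordings of the same click side by side: they differ wherever the server injected a per-session value. The payloads below are invented for illustration; real SAP Fiori traffic carries far more such fields.

```python
# Two recordings of the "same" request are never byte-identical.
def diff_payloads(first, second):
    """Return the keys whose values changed between two recorded requests."""
    return sorted(k for k in first if first.get(k) != second.get(k))

recording_1 = {
    "action": "open_inbox",
    "session_id": "a91f3c",
    "csrf_token": "tok-8841",
    "timestamp": "1700000000",
}
recording_2 = {
    "action": "open_inbox",
    "session_id": "77be02",
    "csrf_token": "tok-1237",
    "timestamp": "1700000042",
}

print(diff_payloads(recording_1, recording_2))
# → ['csrf_token', 'session_id', 'timestamp']
```

Every one of those differing fields is a value your script must correlate—capture from an earlier response and substitute into the replayed request—before the recording will play back at all.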
Of course, you can resolve those differences with some effort. Unfortunately, when the application changes again, you’re back to square one. The more frequently your application changes, the more painful and frustrating this becomes.