3. Don’t guess: test to a service level agreement
Testing to a service level agreement might seem like an obvious thing to do, but many companies are organized in such a way that testing personnel and product management never know about each other, let alone interact in an informative way. This is very un-Agile, and it’s a mistake. Testers waste valuable resources guessing at the operational conditions they’re supposed to meet, and product managers end up frustrated when their expectations aren’t met.
The easiest way to avoid this problem altogether is to put product management and testing into a room — real or virtual — and come up with a Service Level Agreement (SLA) that makes product management’s expectations explicit and gives testers a basis for designing and implementing good load tests. The SLA doesn’t need to be written in stone and never change; when new needs evolve beyond what the established SLA covers, manage and communicate the changes. The important thing is that the document provides a concrete, common reference that sets the standards by which testing will be conducted and the product will be accepted.
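To make this concrete, here’s a minimal Python sketch of what “testing to the SLA” can look like: the pass/fail thresholds come straight from the agreed document instead of a tester’s guess. The endpoint URL and the SLA numbers below are hypothetical placeholders.

```python
# Minimal sketch: encode the SLA's latency and error targets directly in the
# load test, so acceptance criteria come from the agreed document, not a guess.
import statistics
import time

import requests

SLA = {
    "p95_latency_ms": 250,   # agreed in the SLA document (hypothetical value)
    "error_rate_max": 0.01,  # at most 1% failed requests (hypothetical value)
}

ENDPOINT = "https://api.example.com/orders"  # placeholder endpoint
REQUESTS = 200

latencies_ms, errors = [], 0
for _ in range(REQUESTS):
    start = time.perf_counter()
    try:
        resp = requests.get(ENDPOINT, timeout=5)
        resp.raise_for_status()
    except requests.RequestException:
        errors += 1
        continue
    latencies_ms.append((time.perf_counter() - start) * 1000)

# quantiles(n=20) yields 19 cut points; index 18 is the 95th percentile.
p95 = statistics.quantiles(latencies_ms, n=20)[18]
error_rate = errors / REQUESTS

assert p95 <= SLA["p95_latency_ms"], f"p95 {p95:.0f} ms exceeds SLA"
assert error_rate <= SLA["error_rate_max"], f"error rate {error_rate:.2%} exceeds SLA"
print(f"SLA met: p95={p95:.0f} ms, errors={error_rate:.2%}")
```

In a real pipeline the thresholds would be loaded from a shared config file that both product management and testing can review, so the test and the SLA document can’t silently drift apart.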
4. Don’t pin all your hopes on one runtime environment: use many
True story: a while back I was working on a microservice project that was deployed on one of the Big 3 cloud providers (AWS/Azure/GCP; I leave it to you to guess which). We dedicated our deployment to a Midwest US data center. During load testing, we discovered that the microservice took too long to execute, and we couldn’t figure out why; the code ran great in-house. So we tried the obvious thing and deployed the code to another region, in Asia. Result? Everything was fine.
Did Asia have a better data center than the Midwest US? We couldn’t say, but we had the numbers to prove there was a difference.
So what’s the takeaway? Dedicating our deployment to a single region was a mistake. We learned our lesson: moving forward, we ran our load tests in a variety of regions, particularly when testing for regression.
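A simple way to build that habit is to parameterize the test by region. Here’s a minimal sketch, assuming the same service is deployed behind per-region hostnames; the hostnames below are hypothetical stand-ins for your real deployments.

```python
# Sketch of the multi-region idea: run the identical timing loop against the
# same service in several regions and compare the results side by side.
import statistics
import time

import requests

REGIONAL_ENDPOINTS = {
    "us-midwest": "https://us-midwest.api.example.com/health",
    "asia-east": "https://asia-east.api.example.com/health",
    "eu-west": "https://eu-west.api.example.com/health",
}

def median_latency_ms(url: str, samples: int = 50) -> float:
    """Return the median latency in milliseconds over `samples` requests."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(url, timeout=10)
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings)

for region, url in REGIONAL_ENDPOINTS.items():
    print(f"{region:12s} median latency: {median_latency_ms(url):7.1f} ms")
```

Running this as part of a regression suite makes a regional anomaly like the one we hit show up as a number, not a hunch.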
5. Don’t assume request/response is all there is to measure
While it’s true that load testing the requests and responses associated with a microservice is important, there’s more to consider. Behind the scenes there can be a lot more activity that needs to be observed and measured: reading and writing to message queues and memory caches, for example. Thus, when implementing microservices load testing, it’s important to have monitors in place that observe the behavior of every component the microservice touches. When a microservice responds slowly, there’s a reason, and that reason needs to be found. Typically, a good Application Performance Management or API Management solution provides the system monitors required to diagnose a poorly performing load test.
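As one concrete example of watching what’s behind the service, the sketch below samples a backing Redis cache while a load test runs elsewhere, so slow responses can be correlated with cache pressure. It uses the redis-py client; the host, port, and sampling window are assumptions, and in practice an APM tool would collect this for you.

```python
# Sample the Redis INFO stats every 5 seconds during a load test, so a latency
# spike in the service can be lined up against cache throughput and hit rate.
import time

import redis

cache = redis.Redis(host="localhost", port=6379)  # placeholder host/port

for _ in range(12):  # sample for roughly one minute
    info = cache.info()  # raw INFO fields reported by the Redis server
    hits, misses = info["keyspace_hits"], info["keyspace_misses"]
    hit_rate = hits / (hits + misses) if (hits + misses) else 0.0
    print(
        f"ops/sec={info['instantaneous_ops_per_sec']:>6}  "
        f"mem={info['used_memory_human']:>8}  "
        f"hit_rate={hit_rate:.2%}"
    )
    time.sleep(5)
```

The same pattern applies to message queues: sample queue depth and consumer lag on the same clock as the load test, then line the two time series up.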
Also, keep in mind that for full-stack applications, a microservice that performs well under a direct load test might still cause trouble in User Experience (UX) testing. For example, a small alteration to a data structure might have unintended side effects on UI behavior. For companies that publish microservices as components of a larger application, testing performance at the UI level pays off.
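For UI-level numbers, a browser automation tool can report what the user actually experiences rather than what the API returns. This sketch uses Playwright’s Python API to read the browser’s own Navigation Timing data; the page URL is a placeholder, and Playwright plus its browsers must be installed first (`pip install playwright`, then `playwright install`).

```python
# Measure full page load time in a real browser, as the user would see it.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://app.example.com/orders")  # placeholder page URL
    # Ask the browser itself for the navigation duration (Navigation Timing).
    load_ms = page.evaluate(
        "() => performance.getEntriesByType('navigation')[0].duration"
    )
    print(f"page load: {load_ms:.0f} ms")
    browser.close()
```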
The important thing to remember is that, to get a good performance profile, load testing often needs to go beyond simply measuring response times for REST calls to a single microservice.