Establishing a performance test strategy for a microservice-oriented application


By Bob Reselman, Software Developer and Technology Journalist

Creating an effective performance testing strategy is hard enough when an application is essentially monolithic. When an application is composed of hundreds, if not thousands, of distributed microservices to operate at web scale, any testing strategy can become unwieldy. Despite the difficulty, microservice-oriented applications (MOAs) must be thoroughly tested. The challenge: how to devise a plan for testing them.

The first step toward creating an effective performance testing strategy for microservice-oriented applications is to understand how an MOA works and how it differs from a monolithic application. Once a solid understanding is in place, adhering to a few basic rules of thumb when considering testing an MOA will help make devising a strategy a much easier undertaking.

The rise of microservices

Microservices are not going away. They're fast to revise and deploy, and cost-effective to run when hosted with an industrial-strength cloud provider. Google Trends reports that interest in the term "microservices" has increased dramatically over the last five years.

Figure: Google Trends report on interest in "microservices"

If nothing else, we're going to keep seeing a dramatic proliferation of microservices, particularly as Internet of Things (IoT) interactions become the dominant computing activity on the web. Companies that fail to devise performance strategies for microservice-oriented applications do so at their own risk.

Monolithic vs. microservice-oriented applications

In a traditional, "old school" monolithic application, all services run and share memory on a single machine. A more modern approach is to distribute services over multiple machines, each living in the same rack, or at least the same data center. Whether on one machine or many, every service in a monolithic application knows how to find the others it requires, and interservice latency can be measured in nanoseconds. (See Fig. 1 below.)

Fig 1: Monolithic vs. microservice-oriented applications. Microservice architectures run the risk of high interservice latency and ambiguous service discovery.

Microservices are highly distributed, sometimes across multiple data centers. Some might be nearby; others might be spread across the globe. Each microservice is highly autonomous and publishes a distinct interface (which can change at a moment's notice). Also, each microservice carries its own data and has its own set of contextual semantics. Latency between services is measured in milliseconds.

Typically, the deployment unit for a monolithic application is at least one virtual machine, although some applications might require specific hardware. In such cases, the deployment unit is the physical machine itself.

The deployment unit for a microservice is a container. For companies opting for serverless computing, the deployment unit is the function (e.g., AWS Lambda functions or Google Cloud Functions). Containers and functions are preferred because they deploy and "spin up" much faster than a virtual machine.

As mentioned, the microservice brings speed and autonomy to deployment. In a monolithic application, no single component can be released independently of the entire app (see the withdrawal component of the banking app shown in Fig. 1). In an MOA, any microservice can be released at any time. Thus, the team responsible for developing, testing, and supporting a particular microservice enjoys its own release cycle. Such independence is valuable, but it also makes microservices more challenging to manage. Still, given the benefit of quickness to market, companies that have embraced MOAs are more than willing to tolerate the increased management complexity.
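
To make the serverless deployment unit concrete, here is a minimal sketch of a single Python function handler in the style AWS Lambda expects. The payload and field names are hypothetical, chosen only to echo the banking example in Fig. 1.

```python
import json

def handler(event, context):
    # 'event' carries the request payload; 'context' carries runtime metadata.
    account_id = event.get("account_id", "unknown")
    return {
        "statusCode": 200,
        "body": json.dumps({"account_id": account_id, "balance": 100.00}),
    }
```

The entire deployment unit is this one function: it can be revised, tested, and released without touching any other part of the application.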

Do you want scale? Consider that Netflix reports 4,000 deployments/day.

Rules of thumb for establishing an MOA performance testing strategy

Now that we’ve reviewed how microservices work, let’s take a look at some considerations when establishing a viable strategy for performance testing them.

Respect service autonomy

As previously stated, a fundamental premise of microservice development is that one team is responsible for every aspect of the service. This includes product management (yes, the microservice is a product), project management, development, testing, release, and support. Hence, the design and execution of performance tests for the service need to be done by the team that owns the service. That team is also responsible for publishing test results.

Sometimes, to save money, a company will assign a single team to own performance testing for all microservices. Having a single team outside the microservice teams control testing is useful for MOA integration testing; you need that bird's-eye view to ensure overall application performance. But when testing the microservice as a unit of deployment, those closest to the microservice are best qualified to test it.

This raises two questions: How does the organization know that the microservice team is conducting its tests in a meaningful way? And what criteria are used for passing or failing? Luckily, this is where service level agreements show their value.

Effective testing = well-defined SLAs

Companies spend money supporting microservice teams because they want results that add value to the bottom line. But as the saying goes, "if wishes were horses, then beggars would ride." Many times, reality stands in the way of what we'd like to achieve.

Sometimes the distance between desire and reality is vast; other times, it's small. The trick is to bring the two together in the same place at the same time. The way to do this is through the service level agreement (SLA).

The SLA is how the infinite demands of desire and the troublesome restraints of reality get transformed into a workable solution. A well-defined SLA is indispensable, particularly when it comes to verifying microservice behavior by way of performance testing.

While the microservice team is best suited to determine what is possible in terms of service behavior and verification (through designing and executing its performance tests), it does not live in a vacuum. A microservice is one among many serving one or more MOAs. At some point, the service needs to fulfill the "big picture" performance behavior defined in its SLA.

A well-defined SLA creates a set of shared expectations that serve as the underlying principles for developing, testing, reporting on, and improving a particular microservice. The clearer the SLA, the easier it is for the microservice team to create, execute, and report on performance tests in a way that accurately meets those expectations. A loosely defined SLA creates too many ambiguities in the development and testing process.
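
To make this concrete, here is a minimal sketch, assuming the SLA is expressed as latency and error-rate targets that the microservice team turns into an automated pass/fail check at the end of a performance test run. The threshold values and result fields are hypothetical, not taken from any particular SLA.

```python
from dataclasses import dataclass

@dataclass
class SlaTargets:
    p95_latency_ms: float = 250.0   # hypothetical 95th-percentile response time target
    error_rate: float = 0.001       # hypothetical allowed fraction of failed requests

@dataclass
class TestResults:
    p95_latency_ms: float
    error_rate: float

def meets_sla(results: TestResults, sla: SlaTargets) -> bool:
    """Pass only if every SLA target is met."""
    return (results.p95_latency_ms <= sla.p95_latency_ms
            and results.error_rate <= sla.error_rate)

# Example: numbers gathered from a load-test run against one microservice.
run = TestResults(p95_latency_ms=212.0, error_rate=0.0004)
print("SLA met" if meets_sla(run, SlaTargets()) else "SLA violated")
```

Because the targets are explicit, both the microservice team and the enterprise can agree on exactly what "passing" means before a single test is run.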

Enable distributed tracing throughout the enterprise

In the old days, when activity between components occurred in shared memory or among a set of well-known machines, monitoring app behavior involved nothing more than installing monitoring software on the machines on which the apps and components ran.

MOAs are different. Containers and serverless functions that realize a particular microservice get spun up and down to meet the needs of the moment at speeds well beyond human capabilities. The only way to observe this type of temporary behavior is to make sure that distributed tracing is enabled throughout the enterprise.

Distributed tracing means that an agent is installed on the virtual machines and containers that host the service. (Functions in a serverless computing environment are hosted in containers.) The agent observes activity in its environment and sends data back to a central collector that keeps track of, and reports on, all agent activity.

Modern practice is to install an agent on a host automatically at a host’s creation time. Again, a particular host can be a VM, a container, or even a physical computer. Automatic installation ensures comprehensive tracing even in transient situations.
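
The article doesn't prescribe a tracing framework, but as a rough illustration, here is a minimal Python sketch using the OpenTelemetry SDK, one common open-source option. The service and span names are hypothetical, and the console exporter stands in for an exporter that would ship spans to a central collector in a real deployment.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer and a span exporter (the "agent" side of the picture).
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("withdrawal-service")  # hypothetical service name

def withdraw(account_id: str, amount: float) -> None:
    # Each unit of work becomes a span; spans emitted by many services are
    # correlated by trace ID when they reach the collector.
    with tracer.start_as_current_span("withdraw") as span:
        span.set_attribute("account.id", account_id)
        span.set_attribute("withdraw.amount", amount)
        # ... call downstream services, update balances, and so on.

withdraw("acct-42", 25.0)
```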

Identifying and adopting a particular distributed tracing framework is a big decision that lies outside the scope of any single microservice team. Such a judgment has to be made at the enterprise level. Otherwise, a company will never have the depth and breadth of performance data needed for proper analysis and effective troubleshooting.

If you don’t have distributed tracing in force, the scope of your performance testing activity is going to be limited.

Test the part. Test the whole.

Microservice-oriented applications are synergetic. They combine the behavior of a multitude of microservices into a unified whole greater than the sum of its parts. A single microservice has minimal value. Its worth is realized when combined with the behavior of others.

A paradox exists. Testing the whole means you lose access to the granularity revealed when testing the part; if you examine only a part, you can't see the whole. So, when determining a test strategy for microservices, the wise approach is to performance test both.

This means that each microservice team needs to design and conduct performance testing of its product according to a well-defined SLA. It also means that enterprise-level teams need to design and conduct performance testing appropriate to the entire microservice-oriented application, again against a well-defined SLA.
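
As one possible shape for the "test the part" side, here is a minimal per-service load test sketched with Locust, a common open-source load testing tool (the article doesn't prescribe one). The host, endpoint, and traffic profile are hypothetical.

```python
from locust import HttpUser, task, between

class WithdrawalServiceUser(HttpUser):
    host = "http://withdrawal-service.internal"  # hypothetical service address
    wait_time = between(1, 3)  # seconds between simulated user actions

    @task
    def check_balance(self):
        # Drive one endpoint of the single microservice under test. An
        # enterprise-level test would instead drive end-to-end MOA scenarios.
        self.client.get("/balance?account_id=acct-42", name="/balance")
```

Such a file could be run with something like `locust -f withdrawal_load_test.py --headless -u 50 -r 5`, and the resulting percentiles and error rates checked against the service's SLA.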

Putting it all together

MOAs bring a new dimension to performance testing. Given the autonomous nature of microservice development, it's on the teams that make each microservice to design and conduct its performance tests. But these teams do not operate alone. Microservices are part of a greater whole. Therefore, any strategy for performance testing microservices must include both granular testing at the service level and comprehensive testing of the MOA overall.

All tests need to be conducted according to well-defined service level agreements that are the result of negotiations between teams overseeing the microservice and the enterprise-level designers who will be consuming those services.

Finally, the MOA environments must support distributed tracing. Without access to operational data correlated from all microservices regardless of location, performance test results will lack the necessary detail for adequate analysis and efficient troubleshooting.

Microservices are here to stay. They allow timely, independent release cycles. Having a viable test strategy that encompasses both the entire scope of a microservice-oriented application and the microservices themselves is critical for companies that want to bring code to market in an accelerated way.

This post was originally published in August 2019 and was most recently updated in July 2021.

Bob Reselman’s profile on LinkedIn

