Your performance testing is biased. It’s not a criticism; it’s a reality. In fact, all performance testing (and all testing, for that matter) is inevitably biased. Let’s confront the elephant in the room and take a hard look at what those biases involve—as well as how they ultimately impact the accuracy and efficacy of our performance testing.

As human beings, we’re all guilty of cognitive biases. A cognitive bias refers to “a systematic pattern of deviation from norm or rationality in judgment, whereby inferences about other people and situations may be drawn in an illogical fashion.” Cognitive biases are part of the human condition; we can only hope to be aware of our biases, not eliminate them. Even in the relatively narrow context of performance testing, we’re susceptible to selecting the story we find interesting, compelling, familiar, and/or confirmatory – and then using data to support the narratives we like.

Many people who work in IT think they know how to do performance testing: “Just apply the same load as production in a test environment, and then you’ll know whether the system will scale and can go live.” But consider:

  • How many assumptions and shortcuts do we take in building our load model? (A sketch of that arithmetic follows this list.)
  • How many factors do we ignore in declaring equivalency between production and test environments?
  • How thorough is our analysis going to be?
  • What will we really decide to do – or not do – based on the results?
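
To make the first question above concrete, here is a minimal sketch of the arithmetic behind a typical load model, written in Python with entirely hypothetical numbers and Little’s Law for the concurrency estimate. Every constant in it is an assumption someone chose:

    # A load model sketch with hypothetical numbers -- every constant below is
    # an assumption someone chose. Which day was "representative"? Which hour
    # was the "peak"? What about the long tail of slow sessions?

    peak_hourly_sessions = 36_000   # assumed peak, taken from one day of analytics
    avg_session_duration_s = 300    # assumed average; hides the real distribution
    pages_per_session = 8           # assumed page mix; ignores campaigns and bots

    # Little's Law: concurrency = arrival rate * time in system
    arrival_rate = peak_hourly_sessions / 3600    # sessions per second
    concurrent_users = arrival_rate * avg_session_duration_s
    page_rate = arrival_rate * pages_per_session

    print(f"Target concurrency: {concurrent_users:.0f} virtual users")
    print(f"Target page rate:   {page_rate:.1f} pages per second")

Each of those inputs can be challenged, and the model already ignores think-time variation, geographic spread, and abandonment, which is rather the point.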

The fields of software/hardware/network/systems engineering are riddled with common biases. Both these and the biases more closely associated with performance testing and engineering are worth reflecting on.

  1. What about social biases? Ever spent more time and energy discussing metrics the whole team already knows than examining data only one person has seen? That could be considered Shared Information bias.
  2. Ever bolstered or defended the status quo, or been on the other side of that? System Justification bias.
  3. How about memory biases? Forgetting information that can otherwise be found online or is recorded somewhere? Google Effect/Digital Amnesia.
  4. How about a postmortem analysis of an incident becoming less accurate because of interference from post-event news of how your site crashed and pressure from management? Misinformation Effect.
  5. Some performance testers still push back against virtualized and cloud load injectors, well after modern computing has moved on. Ask why, and you’ll typically hear an anecdote about resource overcommitment at the virtualization host level, so now it is impossible to trust data that isn’t generated by physical machines (which have their own operating system background processes, resource contention, and so on). Anchoring/Focalism.
  6. Spending time unproductively debating individual script steps and test data characteristics instead of researching the load model? Debating a stray CPU spike instead of investigating a flood of errors mid-test?
  7. Continuing to run tests that don’t yield useful information, or trying to salvage obsolete test artifacts, because someone thinks a lot of valuable time was spent building them? Sunk Cost.

If you’re intrigued by these issues, please consider attending WOPR26, which will focus on exploring them in depth. WOPR has been around since 2003, and it’s earned a reputation for stimulating lively, hype-free discussions among the world’s top performance testers and engineers. WOPR is NOT a conference. It’s an intimate, interactive gathering (around 20-30 people) where practitioners can share experiences, network with their peers, and help build a community of professionals with common interests.

I’m the “Content Owner” of WOPR26, which will be held in Melbourne, Australia on March 5-7, 2018. This particular workshop is designed to explore the wide range of cognitive biases and their effect on your performance testing experiences. What are the common reasoning mistakes we make, and why do we keep making them? What are the mistakes our stakeholders make, and how can we help them avoid those mistakes?

Your story could relate to the approach you took, the tools you used, the analysis you performed, or how that analysis was received and consumed. Part psychological, part technical, the workshop might leave you feeling a little less biased (or at least more aware of your biases), and perhaps even offer a cathartic release in having shared and reflected on your experiences and mistakes.

If you’re interested in sharing your insights—or just learning more about the topic—submit your application. We’ll be sending out invitations mid-December, so the sooner, the better.
