
Vanity metrics vs. business impact metrics in software testing – what are you tracking?

3 August 2023

“Those who succeed don’t sit around making to-do lists – they create actionable plans with clearly defined outcomes and conquer them. When you know what you want, why you want it, and understand how to make your goal a reality, you’ll become a master of time management and take control of the situation.” – Tony Robbins, Inspirational Speaker

Beginning an article on software testing with a quote from Tony Robbins might seem bizarre. What does Tony Robbins know about software testing? Probably nothing. But he does know a few things about achieving outcomes, whether they are fitness goals, financial goals, or life goals. His formula for success has helped millions around the world.

Product teams are under ever-increasing pressure to deliver faster and with a higher level of quality. The good news is that technological improvements like Artificial Intelligence (AI) in Software Testing have accelerated the pace at which organizations can create automated tests. But just because you’re creating automated tests faster doesn’t necessarily mean that you’re solving the problems that plague your unique organization.

Take, for example, a logistics company that partners closely with trucking companies to ship products globally. Every company today is a “digital enterprise” to a degree, and this applies to both logistics and trucking companies. These organizations are likely using Electronic Data Interchange (EDI) to notify one another of a load tender, an acceptance or denial of that tender, and status updates on where the shipment is in its journey. Each of these e-notifications has a unique identifier (e.g., EDI 204, 214, etc.) with specific segments and elements. Within each electronic document, the quality of the data often determines whether the receiving system will accept or reject the electronic transmission. And if the data on these EDI transactions is not accurate, the resulting shipping delays can cost both companies dearly. That is where the problems start.
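
To make the data-integrity problem concrete, here is a minimal sketch of the kind of pre-transmission check a testing team might automate. It assumes a simplified, already-parsed view of an EDI 214 (shipment status) message; the field names, status codes, and rules are illustrative placeholders, not the full X12 specification or any Tricentis API.

```python
from datetime import datetime

# Illustrative only: a simplified, already-parsed view of an EDI 214
# (shipment status) message. Real X12 documents carry many more segments
# and trading-partner-specific rules.
REQUIRED_FIELDS = ["shipment_id", "status_code", "status_date", "location"]
VALID_STATUS_CODES = {"AF", "X1", "X6", "D1"}  # sample codes, not exhaustive

def validate_status_update(message: dict) -> list[str]:
    """Return a list of data-integrity problems; an empty list means the
    receiving system is likely to accept the transmission."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not message.get(field):
            problems.append(f"missing or empty field: {field}")
    code = message.get("status_code")
    if code and code not in VALID_STATUS_CODES:
        problems.append(f"unrecognized status code: {code}")
    date = message.get("status_date")
    if date:
        try:
            datetime.strptime(date, "%Y%m%d")  # X12-style CCYYMMDD date
        except ValueError:
            problems.append(f"malformed status date: {date}")
    return problems

if __name__ == "__main__":
    issues = validate_status_update(
        {"shipment_id": "LOAD-4821", "status_code": "ZZ", "status_date": "2023-08-03"}
    )
    print(issues)
    # ['missing or empty field: location', 'unrecognized status code: ZZ',
    #  'malformed status date: 2023-08-03']
```

Running checks like this against every inbound and outbound transaction is exactly the kind of repetitive, high-volume work that automated data integrity testing is suited for.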

Although many companies focus mostly on the speed of testing, other companies have a perpetual data quality issue on their hands from upstream and downstream systems that they can’t seem to resolve. By marrying emerging technology like AI with tried-and-true automated data integrity testing, companies can tackle two objectives simultaneously. But let’s be more precise in how we define the “objectives” by formulating them in terms of business outcomes that are measurable. At Tricentis, we not only provide a world-class continuous testing platform, but pair that with a transformation toolkit that helps define outcomes, create roadmaps, and calculate the impact of your investments on your intended outcomes.

“We can’t control the wind, but we can direct the sails.”

Efficiency vs. effectiveness

Business is unpredictable, but preparing your people, processes, and tools to withstand a future hurricane that may or may not arrive is part of the journey, and it’s wise to prepare. To direct your sails, I encourage you to think deliberately about how you categorize your QA and Testing Metrics, and how you prioritize your efforts. If your focus is first and foremost on speed of software delivery, split that into two prongs:

  1. Test Efficiency
  2. Automation Efficiency

 

Under Test Efficiency, start by tracking execution time, then execution rate, and finally test cycle time. As you find your “testing legs” and travel further out to sea, start tracking Automation Efficiency with metrics like Automation Rate, False Positive Rate, Test Data Automation Rate, and Test Maintenance Effort. Our transformation toolkit provides all the details on how to track these speed-related metrics. Beyond how fast your boat is moving, let’s also focus our attention on the quality of our sailing prowess. We can split quality into two prongs as well:

  1. Test Effectiveness
  2. Automation Effectiveness

 

With Test Effectiveness, there are several ways you can chart your journey over time. A few examples are monitoring your pass/fail rate, tracking your requirements coverage, and calculating your defect leakage rate. Automation Effectiveness can be quantified by comparing bugs found in your manual test execution cycles vs. automated test cycles, or by measuring how much of your true business risk is covered by automated tests.
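
For teams that want to start tracking these numbers, here is a minimal sketch of how a few of the efficiency and effectiveness metrics above might be computed from basic test-cycle counts. The field names and formulas are common working definitions I have assumed, not Tricentis-specific ones.

```python
from dataclasses import dataclass

@dataclass
class CycleResults:
    """Aggregated counts for one test cycle (illustrative field names)."""
    total_tests: int
    automated_tests: int
    passed: int
    failed: int
    failures_not_real_defects: int   # failures caused by the tests themselves, not the product
    defects_found_in_cycle: int
    defects_found_after_release: int

def automation_rate(r: CycleResults) -> float:
    return r.automated_tests / r.total_tests

def pass_rate(r: CycleResults) -> float:
    return r.passed / (r.passed + r.failed)

def false_positive_rate(r: CycleResults) -> float:
    """Share of failures that turned out not to be product defects."""
    return r.failures_not_real_defects / r.failed if r.failed else 0.0

def defect_leakage_rate(r: CycleResults) -> float:
    """Defects that escaped the cycle, as a share of all defects eventually found."""
    total = r.defects_found_in_cycle + r.defects_found_after_release
    return r.defects_found_after_release / total if total else 0.0

cycle = CycleResults(total_tests=400, automated_tests=300, passed=360, failed=40,
                     failures_not_real_defects=10, defects_found_in_cycle=27,
                     defects_found_after_release=3)
print(f"automation rate: {automation_rate(cycle):.0%}, pass rate: {pass_rate(cycle):.0%}, "
      f"false positives: {false_positive_rate(cycle):.0%}, defect leakage: {defect_leakage_rate(cycle):.0%}")
# automation rate: 75%, pass rate: 90%, false positives: 25%, defect leakage: 10%
```

None of these numbers means much in isolation; the value comes from trending them over time and tying the effectiveness metrics back to what the business actually feels.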

What are vanity metrics?

Everyone likes to tell a good story backed up with numbers to reinforce their agenda or narrative. Pairing a talk track with some impressive numbers makes it more credible, more authentic, more believable, and more scientific in nature. It’s no different for QA and Test Managers who are more interested in reporting the number of tests passed in each testing cycle, or the sheer quantity of defects uncovered. Indeed, the numbers might look impressive to an untrained eye and certainly demonstrate that the QA team is productive, but is that “productivity” leading to positive business outcomes? Or is it “busy work” generating vanity metrics to distract from the real problems brewing somewhere in your delivery pipeline?

If you suspect that vanity metrics are being reported to you, ask yourself these five what-if questions:

  • What if the manual or automated tests executed are low quality without enough robust data or branch logic?
  • What if the defects uncovered are mostly trivial, cosmetic, or low severity?
  • What if the tests are too discrete without an effective end-to-end testing strategy?
  • What if we are over-testing, while wasting precious data center resources and manual labor?
  • What if the metrics are… well, smoke and mirrors?!

If your suspicion is accurate for any of these what-if questions, your business is at risk.

Wouldn’t it make sense to prioritize and monitor QA metrics like Defect Detection Percentage (DDP) or Defect Leakage Rate that tie more closely to true business outcomes? And not just Defect Leakage Rate from one sub-environment to the next (e.g., System Testing to User Acceptance Testing), but from your SAP sub-production environment (Ox1) to your production environment (Px1) that customers and real end users rely on to achieve business outcomes.
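
To make that distinction concrete, here is a small sketch using made-up defect counts and the environment names from the example above, showing how leakage between sub-environments differs from leakage into production. Under the common definition of defects detected before release divided by all defects eventually found, DDP is simply the complement of production leakage.

```python
# Hypothetical defect counts, keyed by the environment where each defect was first found.
# Environment names follow the SAP example above; the numbers are invented.
defects_by_env = {
    "System Testing": 42,
    "User Acceptance Testing": 9,
    "Ox1 (sub-production)": 4,
    "Px1 (production)": 5,
}

def leakage_rate(found_before: int, found_after: int) -> float:
    """Share of defects that slipped past a given stage boundary."""
    total = found_before + found_after
    return found_after / total if total else 0.0

# Leakage from one sub-environment to the next...
st, uat = defects_by_env["System Testing"], defects_by_env["User Acceptance Testing"]
print(f"System Testing -> UAT leakage: {leakage_rate(st, uat):.1%}")                 # 17.6%

# ...versus leakage into production, which is the number the business actually feels.
pre_prod = st + uat + defects_by_env["Ox1 (sub-production)"]
prod = defects_by_env["Px1 (production)"]
print(f"Leakage into Px1: {leakage_rate(pre_prod, prod):.1%}")                        # 8.3%
print(f"Defect Detection Percentage (DDP): {1 - leakage_rate(pre_prod, prod):.1%}")   # 91.7%
```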

Before we go any further, let’s get a second opinion on vanity metrics. According to Tableau, vanity metrics are metrics that make you look good to others but do not help you understand your own performance in a way that informs future strategies. We see it in the media, we see it with business leaders, and it’s no different with software testing leaders.

Nobody wants to look bad, and we’re all trying to find our own unique way to add value. Sometimes our perceived definition of ‘value’ is not what really matters to our customers. In the case of QA and Testing Managers, the customers are the people using the software whose quality we’re responsible for ensuring. Vanity metrics don’t matter to them. They care about incidents that tie to their business, like a delayed inbound shipment, mismatched load tenders, inaccurate purchase order data, and so on.

Adopt a two-prong QA metrics reporting strategy

Vanity metrics have their place in QA metric reports, but certainly not at the top. The most important metrics reported to organizational stakeholders, investors, and colleagues should have a clear tie-in to business outcomes. For instance, if the organizational directive is to reduce operating expenses, two very important metrics come to mind:

  • QA Labor Costs
  • Defect Leakage Rate

You should consider publishing two sets of metrics to your business leaders. The first set should calculate hard dollar savings from your Quality Assurance initiatives and be treated as confidential, because it may contain hourly rates of contractors and employees. Tricentis’ transformation toolkit offers a baked-in business value calculator that helps organizations make confident, strategic investment decisions from hard data points. Most ROI calculators tend to focus only on the cost factor, but our free-to-use business value calculator provides a comprehensive view of the business impact factors: cost savings, quality improvements, and speed gains. This will equip you to deliver a persuasive case for your testing transformation budget.
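
To illustrate what a multi-factor view can look like, here is a rough, back-of-the-envelope sketch. It is not the Tricentis business value calculator; every figure is an invented placeholder, and the three factors simply mirror the cost, quality, and speed dimensions described above.

```python
# A rough, hypothetical multi-factor business value sketch. Every figure below is an
# invented placeholder; this is not the Tricentis business value calculator.
inputs = {
    "manual_test_hours_saved_per_month": 320,
    "blended_hourly_rate": 55.0,                 # contractor/employee blend (confidential in practice)
    "production_defects_avoided_per_year": 12,
    "avg_cost_per_production_defect": 8_000.0,   # rework, delays, support, shipping penalties
    "release_cycles_gained_per_year": 2,
    "value_per_extra_release": 25_000.0,
}
annual_platform_cost = 150_000.0                 # placeholder license and maintenance figure

cost_savings = inputs["manual_test_hours_saved_per_month"] * 12 * inputs["blended_hourly_rate"]
quality_value = inputs["production_defects_avoided_per_year"] * inputs["avg_cost_per_production_defect"]
speed_value = inputs["release_cycles_gained_per_year"] * inputs["value_per_extra_release"]

total_value = cost_savings + quality_value + speed_value
roi = (total_value - annual_platform_cost) / annual_platform_cost

print(f"cost savings: ${cost_savings:,.0f}  quality: ${quality_value:,.0f}  speed: ${speed_value:,.0f}")
print(f"total annual value: ${total_value:,.0f}  ROI vs. platform cost: {roi:.0%}")
# cost savings: $211,200  quality: $96,000  speed: $50,000
# total annual value: $357,200  ROI vs. platform cost: 138%
```

Even a crude model like this moves the conversation beyond labor cost alone.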

By using multiple factors in your ROI calculation, you can better champion QA platform expansion throughout your organization. Let’s face it, business leaders and stakeholders have varying business objectives; not everyone is after the same end goal. With a multi-factor approach to ROI calculation, you’re in a better position to answer the naysayers and show how your quality platform can outperform the status quo.

Once your metrics and reporting strategy is tied directly to business outcomes, your second focus should be to generate and report metrics that show your team’s commitment to the QA process, which may or may not include a few vanity metrics. Adding a few vanity metrics for flair to a report sent out to the business is not necessarily a bad thing, but I would strongly advise against leading with vanity metrics when speaking to investors, board members, or senior leadership. Although your team’s commitment is appreciated, most stakeholders are more interested in the result, not the “participation trophy” of how many tests were executed last month or how many bugs your team found in the last testing cycle.

