Maturity Assessment Videos


Metrics videos

Automation rate

Automation rate measures the percentage of automated tests in your test suite.

In general, a higher automation rate indicates greater potential for effort savings. Although it is important to track this metric as it shows progress, be careful not to overemphasize it. Instead, also look at the automation execution rate and the risk coverage provided by test automation.

Automation rate is calculated by dividing the number of automated test cases by the total number of test cases in your test suite (automated + manual). For example, if you have 50 unique automated test cases and 100 unique manual test cases in your test suite, then your automation rate is 50 / (50 + 100) = 33%.
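As a minimal sketch of this calculation (the function name is illustrative, not from any particular tool):

```python
def automation_rate(automated: int, manual: int) -> float:
    """Percentage of test cases in the suite that are automated."""
    total = automated + manual
    if total == 0:
        return 0.0
    return 100 * automated / total

# Example from the text: 50 automated and 100 manual test cases.
print(round(automation_rate(50, 100)))  # 33
```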

Requirements coverage

Requirements coverage measures the percentage of the requirements that are tested by your test suite.

Measuring the requirements coverage is an important first step to ensure that you are testing the right things. Keep in mind that the requirements provided to you may be incomplete or even absent. Therefore, it is your responsibility to understand the business needs and determine the requirements that need to be tested.

To measure requirements coverage, you will first need to associate the test cases to the requirements. The coverage can then be calculated by dividing the number of requirements (with test cases) by the total number of requirements. For example, if there are 100 requirements and 90 of the requirements have test cases, then the requirements coverage rate is 90 / 100 = 90%.

Keep in mind that a requirement may have only partial coverage because not all possible test scenarios are covered by the test cases. For example, you may have identified 10 test scenarios, which include the happy path, negative paths, edge or corner cases, and multiple data combinations, but only created and executed test cases for 8 of them. The coverage for that requirement is then only 80%. Expanding on the previous example, if there are 100 requirements, of which 50 have 80% coverage, 40 have 90% coverage, and 10 have 0% coverage, then the total coverage would be [(50 x 80%) + (40 x 90%) + (10 x 0%)] / 100 = (40 + 36 + 0) / 100 = 76%.
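The weighted version of the calculation can be sketched as follows (a hypothetical helper, assuming each requirement's coverage has already been assessed as a percentage):

```python
def requirements_coverage(coverage_by_requirement: list[float]) -> float:
    """Average per-requirement coverage, in percent, across all requirements.

    A fully covered requirement contributes 100.0, an untested one 0.0.
    """
    if not coverage_by_requirement:
        return 0.0
    return sum(coverage_by_requirement) / len(coverage_by_requirement)

# Example from the text: 50 requirements at 80% coverage,
# 40 at 90%, and 10 at 0%.
coverages = [80.0] * 50 + [90.0] * 40 + [0.0] * 10
print(requirements_coverage(coverages))  # 76.0
```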

Test data automation rate

Test Data Automation Rate measures the percentage of your test data preparation that is automated.

Test data preparation is very important to ensure that test executions can run repeatedly and reliably. Depending on the requirements of the test scenarios, test data preparation can take up to 50% of test activities. Therefore, there are significant efficiency gains to be had by automating data preparation tasks. Tracking this metric ensures that proper focus is placed on improving it.

The data preparation automation rate is calculated by dividing the amount of test data that is automated by the total amount of test data used. For example, if all your tests require 1000 different data records, and you have automated the creation/collection of 500 records, then your automation rate is 500 / 1000 = 50%. The metric does not need to be an exact value (as it can be difficult to quantify the amount of test data automated or used).

False positive rate

The false positive rate measures the percentage of failures reported by executed tests that are not related to a defect in the product. False positives can be caused by a multitude of factors from misunderstanding the requirements to user error. However, the most common causes are environment issues, test data issues, and poorly written test cases.

High false positive rates indicate that the test results are unreliable and have a big impact on the overall test cycle time. At the very least, it forces testers and developers to waste time determining if the reported failure is an actual defect. If this happens too often, automated test suites are perceived as unreliable and can cause the team to fall back into manual practices. In addition, high false positive rates block successful integration of testing into the continuous integration pipeline.

False positive rate is calculated by dividing the number of test cases that failed for reasons other than a defect by the total number of test cases executed. As it may be difficult to measure exact numbers, this metric can be estimated.
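A minimal sketch of the estimate; the numbers below are illustrative, as the text gives no concrete example:

```python
def false_positive_rate(non_defect_failures: int, total_executed: int) -> float:
    """Percentage of executed tests that failed for reasons other than a defect."""
    if total_executed == 0:
        return 0.0
    return 100 * non_defect_failures / total_executed

# Hypothetical numbers: 30 of 500 executed tests failed due to
# environment issues, test data issues, or poorly written test cases.
print(false_positive_rate(30, 500))  # 6.0
```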

Report generation time

Report generation time measures the time it takes to generate a report on the quality of an application or project. The time includes gathering all test execution results from the automated and manual tests across different systems and tools, as well as any other metrics that can help describe the quality and health of the application or project. Report generation times are often underestimated and not tracked. Tracking this metric elevates the visibility of the effort and can lead to improvements in the process.

The report generation time calculation does not need to be exact, but an estimate on how much time is spent on this activity will help you understand if there are opportunities to improve this process.

Effort savings from automation

Effort savings from automation measures the amount of manual testing effort saved through test automation. This metric can help you quantify the impact of test automation.

Effort savings is calculated by multiplying the number of tests executed in a month by the number of times they were executed, and then by the average number of hours each test would take to execute manually. For example, if you have 100 smoke tests executed 10x a month and 500 regression tests executed 4x a month, and each test takes an average of 15 minutes to execute manually, then the effort savings would be [(100 x 10) + (500 x 4)] x 0.25 hrs = 750 hours a month.

Effort savings from automation can be translated into cost savings by multiplying the time saved by the average cost of a manual testing resource. This cost savings figure can be useful for promoting the benefits of automation and helps drive conversations around resource and tool investments.
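The two calculations can be sketched together; the $40 hourly rate below is an illustrative assumption, not a figure from the text:

```python
def effort_savings_hours(suites: list[tuple[int, int]],
                         avg_manual_hours: float) -> float:
    """Hours of manual effort saved per month.

    suites: (number_of_tests, executions_per_month) per test suite.
    avg_manual_hours: average manual execution time per test, in hours.
    """
    executions = sum(tests * runs for tests, runs in suites)
    return executions * avg_manual_hours

# Example from the text: 100 smoke tests run 10x/month, 500 regression
# tests run 4x/month, 15 minutes (0.25 h) of manual effort per test.
hours = effort_savings_hours([(100, 10), (500, 4)], 0.25)
print(hours)  # 750.0

# Translating to cost savings with an assumed rate of $40/hour:
print(hours * 40)  # 30000.0
```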

Risk coverage

Risk Coverage measures the amount of business risk your test suite covers.

Measuring your test coverage based on risk requires more work than measuring requirements coverage, but the return is well worth it because it provides a better understanding of the risk you take when releasing a product. Keep in mind that it is impossible to release a bug-free product, to test everything, or to achieve 100% risk coverage. Testing is simply the process of understanding the business risk of a release and making the right decision based on that understanding.

There are six calculations that must be made prior to determining Risk Coverage: Damage Class, Frequency Class, Requirement Weight, Requirement Risk, Test Coverage, and Requirement Risk Coverage. I’ll walk you through each of these calculations using a banking scenario.

Imagine there are two requirements for a banking website. Requirement one states that users can transfer funds between their accounts, and Requirement two states that users can look up the closest bank branch or ATM.

Let’s first assign each requirement a Damage Class and a Frequency Class with a value of 1 to 5, 1 being low and 5 being high. For Requirement one, you may assign a value of 5 to the Damage Class, because the bank will face major fines if a money transfer fails and the user loses money. You may assign a value of 2 to the Frequency Class because the transfer functionality isn’t used that often. With these two numbers, we calculate the Requirement Weight for Requirement one by raising 2 to the power of the Damage Class and adding it to 2 raised to the power of the Frequency Class. In this case, it is 2^5 + 2^2 = 36.

For Requirement two, you may assign a value of 1 to the Damage Class because failing to find a nearby branch or ATM isn’t that damaging to the business. You may assign a value of 3 to the Frequency Class, since looking up a branch location is used more often than making monetary transfers. Using these two numbers, the Requirement Weight for Requirement two is 2^1 + 2^3 = 10.

Once all the Requirement Weights have been calculated, we can calculate the Requirement Risk. For Requirement one, the Requirement Risk is its Requirement Weight, 36, divided by the total Requirement Weight (36 + 10), which gives us 36 / 46 ≈ 78%. The Requirement Risk of Requirement two is 10 / 46 ≈ 22%.

The Requirement Risk can also be seen as the business risk impact of the requirement. Note that the sum of the Requirement Risk across all requirements should always add up to 100%.

Next is the Test Coverage calculation. If only 3 of the 5 test scenarios identified for Requirement one are covered by test cases, then its Test Coverage is 3 / 5 = 60%. For Requirement two, if 2 of its 5 test scenarios are covered, then its Test Coverage is 2 / 5 = 40%.

Requirement Risk Coverage can now be calculated by multiplying each requirement’s Requirement Risk by its Test Coverage. For Requirement one this is 78% x 60% ≈ 47%, and the Risk Coverage for Requirement two is 22% x 40% ≈ 9%.

Finally, we sum the Risk Coverage of all the requirements to get the Total Risk Coverage. In this example there are only two, so we have 47% + 9% = 56% total Risk Coverage for our banking website. That means the current test suite covers only 56% of the business risk of this application.
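The whole pipeline, using the weight formula of 2 raised to the power of each class, can be sketched in a few lines (the function names are illustrative, not from any particular tool):

```python
def requirement_weight(damage_class: int, frequency_class: int) -> int:
    """Requirement Weight = 2^damage + 2^frequency (classes range 1-5)."""
    return 2 ** damage_class + 2 ** frequency_class

def total_risk_coverage(requirements: list[tuple[int, int, float]]) -> float:
    """requirements: (damage_class, frequency_class, test_coverage) tuples,
    with test_coverage as a fraction. Returns Total Risk Coverage as a
    fraction (0.0 to 1.0)."""
    weights = [requirement_weight(d, f) for d, f, _ in requirements]
    total_weight = sum(weights)
    # Requirement Risk x Test Coverage, summed over all requirements.
    return sum(w / total_weight * cov
               for w, (_, _, cov) in zip(weights, requirements))

# Banking example: Requirement one (Damage 5, Frequency 2, 60% coverage)
# and Requirement two (Damage 1, Frequency 3, 40% coverage).
reqs = [(5, 2, 0.60), (1, 3, 0.40)]
print(round(100 * total_risk_coverage(reqs)))  # 56
```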

Automation execution rate

Automation execution rate measures the percentage of your automated tests that are executed. Automation execution rate can be measured by different time periods, for example, daily, weekly, and monthly.

Since the return on investment in your automation efforts comes when you execute your automated tests, understanding your automation execution rate can help you measure your ROI. Although higher test execution rates are generally better, be careful not to execute tests simply to increase this metric. That is, if the application you’re testing (as well as its test cases, environment, and test data) hasn’t changed since the last execution, re-executing the same tests would only incur machine resource costs.

Automation execution rate is measured by dividing the number of automated test cases executed in a certain time period by the total number of automated test cases. For example, if your total automation suite comprises 1,000 test cases, and you execute 100 on a daily basis and the other 900 on a weekly basis, then your test execution rate is 10% per day (100 / 1,000) and 100% per week ((100 + 900) / 1,000).
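A minimal sketch of the per-period calculation:

```python
def execution_rate(executed: int, total_automated: int) -> float:
    """Percentage of the automated suite executed in a given time period."""
    if total_automated == 0:
        return 0.0
    return 100 * executed / total_automated

# Example from the text: a 1,000-test suite, 100 run daily, 900 weekly.
print(execution_rate(100, 1000))        # 10.0 (per day)
print(execution_rate(100 + 900, 1000))  # 100.0 (per week)
```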

Defect leakage

Defect leakage measures the number of defects that were not caught by your tests. Defect leakage can be measured at different testing stages, e.g. from system to system integration testing, from system integration to user acceptance testing, and from user acceptance into production.

Measuring the defect leakage between stages can help show the effectiveness of testing within the different stages. However, the most important stage to measure defect leakage would be the leakage into production as defects found in production are the costliest to resolve.

Defect leakage into a testing stage is calculated by dividing the number of defects found in the testing stage, by the same number, plus the number of defects found in the prior testing stages. For example, if there were 100 defects found in the user acceptance testing stage and 900 defects were found before the user acceptance stage, then the defect leakage to the UAT stage is 100 / (900 + 100) = 10%.
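This calculation can be sketched as:

```python
def defect_leakage(found_in_stage: int, found_before_stage: int) -> float:
    """Percentage of defects that leaked into a given testing stage."""
    total = found_in_stage + found_before_stage
    if total == 0:
        return 0.0
    return 100 * found_in_stage / total

# Example from the text: 100 defects found in UAT, 900 found in
# prior testing stages.
print(defect_leakage(100, 900))  # 10.0
```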

Test cycle time

Test cycle time measures the time elapsed for the testing process. The testing cycle can be broken down into five distinct activities: test data preparation, test environment preparation, test execution, test analysis, and wait time.

Test cycle time is a good way to measure how efficient your testing process is. To ensure that testing does not become a bottleneck for development and innovation, you want to shorten the test cycle time as much as possible. Tracking the breakdown of the various testing activities in the test cycle time will help you find opportunities to improve your process. 

Test cycle time is measured from when a build is submitted, through all the testing stages (from unit to end-to-end or even UAT), until the build is either accepted (can be deployed) or rejected (defects need to be fixed and another round of testing is required).
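Tracking the breakdown can be as simple as recording hours per activity; the durations below are illustrative only:

```python
# A hypothetical breakdown of one test cycle into the five activities
# named above (durations in hours; the numbers are made up).
cycle = {
    "test data preparation": 4.0,
    "test environment preparation": 2.0,
    "test execution": 6.0,
    "test analysis": 3.0,
    "wait time": 5.0,
}

total = sum(cycle.values())
print(f"total cycle time: {total} hours")
for activity, hours in cycle.items():
    # Share of the cycle helps spot where improvement effort pays off.
    print(f"{activity}: {100 * hours / total:.0f}% of the cycle")
```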
