Tricentis recently introduced a new set of out-of-the-box, executive-level reports and charts, which are available in the qTest Insights dashboard gallery. The reports are designed to help QA teams convey the progress of all strategic quality initiatives to executives and others who are more interested in overall application quality than detailed, daily QA metrics. These reports focus on test automation initiatives, speed to market, time to resolution, risk reduction and other areas that go beyond traditional testing metrics.
Read below to learn about how you can use these charts to better demonstrate the business value of QA.
Track and Measure Test Automation Initiatives
Your teams are pushing for more test automation for a number of reasons. But what's the baseline for measuring success? Are you automating more or less with each release? Can you identify an upward trend? Our first set of executive charts zeroes in on helping teams assess and communicate their overall test automation progress.
Test Case Automation Initiative Progress
First and foremost, teams want to compare manual and automated test case creation. As manual test suites grow, teams eventually need some form of automation behind them. Comparing the number of test cases created each week with the number executed in each release gives teams a clearer picture of their test automation progress.
Requirement Automation Initiative Progress
To achieve in-sprint test automation, it’s best to measure not only how much automation you’re doing, but also which requirements have test automation coverage. Usually teams will neglect to invest in test automation for new features. Instead, they will perform manual tests first and automate later. Although this may seem like a quick win, teams will eventually be faced with a backlog of tests that need to be automated. The following requirement automation charts help identify the most pressing automation initiatives.
Automation Initiative – Defects Found
After you have completed all manual and automated testing, a big question becomes, “Which type of tests produced the most defects?” Knowing which types of tests yielded more defects gives insight into how testing time should be spent in the future. For example, if all your defects come from your regression automation, there might be something wrong with your regression test cycles. If most of your manual tests are finding severity 1 defects, that may indicate that more manual testing time should be allocated within your test cycles.
Improving Tester and Developer Resolution Time
Developer and tester relationships are getting better. When teams can understand and measure the time to resolution during test cycles, these relationships can improve even further. From the initial test failure to the final test pass, teams can now calculate the in-between time: the time it takes for a developer to fix the system so it can be retested. Some tests pass on their first execution, while others fail only after hundreds of automated regression executions. With this new Time to Resolution chart, we can now measure the time it takes to get those tests from red to green.
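qTest Insights calculates this for you, but the underlying idea is simple enough to sketch. The snippet below is an illustrative example only (the function name, statuses, and timestamps are hypothetical, not qTest's internal logic): given a test's execution history in time order, it finds the first failure and returns the elapsed time until the next pass.

```python
from datetime import datetime

def time_to_resolution(executions):
    """Given (timestamp, status) tuples sorted by time, return the
    duration from the first 'failed' execution to the next 'passed'
    execution, or None if the test never went red and then green."""
    first_fail = None
    for ts, status in executions:
        if status == "failed" and first_fail is None:
            first_fail = ts
        elif status == "passed" and first_fail is not None:
            return ts - first_fail
    return None

# Hypothetical history: failed Monday morning, passed Tuesday morning.
runs = [
    (datetime(2020, 3, 2, 9, 0), "failed"),
    (datetime(2020, 3, 2, 15, 30), "failed"),
    (datetime(2020, 3, 3, 11, 0), "passed"),
]
print(time_to_resolution(runs))  # 1 day, 2:00:00
```

Aggregating this duration across all tests in a cycle is what lets the chart show resolution time trending down (or up) from release to release.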
Decreasing Time to Market
We have all been there. One release, your test cycles are complete right on time. The next, you’re re-prioritizing test runs to make the deadline. With our new Speed to Market charts, teams can analyze changes in test cycle time from release to release. Let’s say it took 10 days to complete five test cycles for release one. When release two approaches, we have a baseline for comparison. If it ends up taking 20 days in release two to complete the same five test cycles, then it’s clear there is a problem that needs to be addressed.
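The arithmetic behind that comparison can be sketched in a few lines. This is an illustrative example, not qTest's implementation; the function name and inputs are assumptions.

```python
def cycle_time_change(baseline_days, current_days):
    """Percentage change in test cycle duration versus the baseline
    release; a positive result means cycles are taking longer."""
    return (current_days - baseline_days) / baseline_days * 100

# Release one: five cycles in 10 days. Release two: same cycles in 20 days.
print(cycle_time_change(10, 20))  # 100.0 -- cycle time doubled
```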
Reduce Risk During Testing
The majority of our customers leverage our out-of-the-box Jira integration and use the component(s) field for test planning. Through conversations with customers, we found that as more components enter the release scope, the more risk the test team takes on. Our new Risk Reduction chart calculates the number of components being tested in a release, then compares it with the number of unique tests associated with each component. This chart helps teams ensure test planning correlates to the number of components under test.