

Writing tests is a key part of building and maintaining quality software. Good tests exercise your code paths and document that code by showing how it behaves under specific circumstances. Tests also prevent regression when you modify code. Given that tests are so important, many software teams seek to measure how much of their code is covered by their tests. We call this metric test coverage, and you’re here to learn more about that today.
What is software test coverage?
Software test coverage is a metric that measures the amount of code in a repository “covered” by tests. Usually, this is measured directly by how many lines of code execute during a run of 100% of the tests in your test suite.
For example, let’s say that you have a function that accepts a single parameter, and that function branches in an if/else block to handle two possible outcomes. If you write two tests, one that passes through the if section and another that passes through the else section, you would measure 100% on the test coverage metric.
Now, let’s say that you add a third branch — an else-if — to your function. Suddenly, your two tests no longer provide 100% coverage. Because you’ve added a branch whose code never executes during the test run, your coverage drops from 100% to 67%.
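Here’s a minimal sketch of that scenario in Python. The function and test names are invented for illustration; any coverage tool would report the same gap.

```python
# A hypothetical three-branch function (names and values are illustrative).
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

# Two tests that exercise only the first and last branches.
def test_negative():
    assert classify(-5) == "negative"

def test_positive():
    assert classify(5) == "positive"
```

Running a coverage tool over these two tests would show that the `n == 0` branch never executes, so line coverage sits below 100%. Adding a third test for `classify(0)` closes the gap.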
What does software test coverage measure?
Test coverage metrics measure how many lines of executable code execute during a test run. This means that the metric won’t cover things like configuration files or comments. Test coverage usually doesn’t measure things like class or function definitions either.
An easy way to think of this: the executable lines in the bodies of the functions you write form the denominator of the test coverage ratio. Then, when you execute a test suite, the coverage tool internally marks each line of code the suite executes. Those marked lines form the numerator of the ratio.
At the end of your test run, the test suite will report the ratio of lines executed to total lines of executable code, providing a “test coverage” ratio.
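As a rough sketch of that arithmetic (the line numbers here are invented), the ratio is simply executed lines divided by executable lines:

```python
def coverage_ratio(executed_lines, executable_lines):
    """Lines executed during the test run / total executable lines."""
    return len(executed_lines & executable_lines) / len(executable_lines)

executable = {1, 2, 3, 4, 5, 6}  # lines the tool marks as executable
executed = {1, 2, 3, 5}          # lines the test run actually hit
print(f"{coverage_ratio(executed, executable):.0%}")  # prints "67%"
```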
This metric is particularly useful when combined with test automation, which will then report how much the coverage ratio has changed since the last time the automated test suite ran. Some companies will even reject change requests that decrease overall test coverage from merging into the main branch of a repository.
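A coverage gate like that can be as simple as comparing the new ratio to the last recorded one. The numbers below are invented; many real tools offer similar checks (for example, pytest-cov’s `--cov-fail-under` flag enforces a minimum percentage).

```python
def coverage_gate(previous_pct, current_pct, tolerance=0.0):
    """Return True if the change may merge: coverage did not drop
    by more than `tolerance` percentage points since the last run."""
    return current_pct >= previous_pct - tolerance

assert coverage_gate(82.5, 83.1)      # coverage improved: merge allowed
assert not coverage_gate(82.5, 80.0)  # coverage dropped: merge rejected
```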
How can we ensure test coverage in our software?
If you have an existing or new codebase, and you want to ensure high levels of test coverage, the process is pretty simple: write tests, and measure how much of your code they cover! Tools to measure test coverage are built into every major software testing library.
As such, if your team has decided that you need to improve your overall test coverage, the process is straightforward:
Identify the tool that you’ll use to measure your test coverage
Not every tool measures coverage exactly the same way. Switching between different tools is likely to give you noisy numbers, so pick your canonical tool and stick with it.
Validate that the tool is giving you useful information
If you have a large existing codebase, the coverage signal can easily get lost in the noise of all that pre-existing code. Instead, write a small amount of new code, then write some tests for it. Validate that you see the metric increase as you expect. This helps you build a mental model of how the coverage metric works, which you can then apply to your existing codebase.
If you don’t have it, hook up test automation
Without test automation, measuring code coverage loses most of its value. You need to run your tests automatically on every build of your software to ensure that your tests provide their maximum value.
Record and surface your test coverage metric
Once you’ve landed on a tool and you understand how the tool works, the next step is to record test coverage on every automatic run of your tests.
For teams that are more intense about test coverage, you can configure your continuous delivery system to fail any build that decreases test coverage. Less intense teams might simply record the number and the change over the last 30 days.
Identify where you’re missing tests, and write them
Once you’re measuring and surfacing the metric, the next step is to hop in and start improving your coverage. You do that by identifying code with missing test coverage, and you write good tests for that code.
A common pitfall for software teams is to overthink improving test coverage. They try to prematurely optimize their testing metrics, leading them to reject a lifeboat because they don’t like the color of the paint.
It is always better to write good tests for your software than to try to create the perfect testing suite. If you’re concerned about test coverage as a metric, but you don’t currently have any testing metrics, don’t let the perfect be the enemy of better.
Why is test coverage worth tracking?
Tests are a crucial part of a healthy software development lifecycle. Automated testing ensures that when you ship software, you know that it works. Moreover, those automated tests prevent you from shipping code that breaks features your users already rely on.
The importance of good testing means that you want to cover your code with high-quality tests as thoroughly as you can.
That’s why test coverage is worth tracking. If you miss functionality with your automated tests, you’re accepting risk for your software. Risk that the software doesn’t work the way you think, or risk that new updates will introduce regressions that break existing functionality.
Both of those are bad outcomes. Oftentimes, you don’t even know that this risk exists. Measuring test coverage allows you to quantify this risk. As you quantify the risk, you can isolate it, then evaluate how much risk the missing tests pose. From there, you can prioritize the high-value tests, to make sure that you eliminate as much risk as possible.
What doesn’t test coverage measure?
Test coverage is a useful metric for learning certain things about your tests. But it isn’t a perfect metric. You don’t want to rely too much on test coverage as a proxy for code quality. Let’s dive into why.
Test quality
This is the biggest flaw with test coverage as a metric. Inexperienced technologists believe that the right target number for test coverage is 100%. They believe that because tests are important, your tests should cover every line of executable code.
This view is short-sighted. If you think back to our example function earlier, you could write a test that executes all three branches of your function, and you’d have 100% test coverage. But if your tests don’t actually test any assertions, your 100% test coverage metric is useless.
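A sketch makes this concrete. The function below mirrors the three-branch example (names are invented); the single “test” drives every branch, so coverage reports 100%, yet it asserts nothing and can never fail.

```python
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

def test_all_branches_no_assertions():
    # Executes all three branches -- 100% line coverage -- but checks
    # nothing, so this test passes even if classify() returns garbage.
    classify(-1)
    classify(0)
    classify(1)
```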
As software engineer Greg Foster, from Graphite, notes: “The question ‘How much code coverage is enough?’ has no universal answer. Instead of pursuing arbitrary percentages, focus on the quality and strategic placement of your tests.”
Code quality
While it’s true that exercising a software testing lifecycle does generally improve your overall code quality, higher test coverage does not necessarily result in better code. Instead, the result may simply be that your team writes complicated, convoluted tests to cover their complicated, convoluted code.
Cycle time
Software cycle time is a measurement of how much time it takes to complete a new feature or entire software project. Reducing cycle time is a way to improve the value your software team delivers to your customers.
Requiring high software test coverage metrics often negatively impacts cycle times. This is because changing any code usually means that you break existing tests. What’s more, any new code that you write will require additional tests.
While these are negatives toward high test coverage, it doesn’t mean that you should avoid writing those tests. It’s just a consequence of the approach.
Developer happiness
This is a bit of a cheeky point, but it’s true. Pushing for very high test coverage metrics often leaves developers writing tedious tests that provide minimal value to the code base. Requiring developers to do this work for extended periods of time might lead to burnout and dissatisfaction with their jobs.
How should you think about test coverage?
The traditional ratio approach to test coverage as a metric has some obvious downsides, as we’ve noted. But that doesn’t mean that test coverage is a bust. Instead, you can rethink how you approach test coverage to improve the benefits and reduce some of the downsides.
Here are some ways that you can rethink test coverage to improve things.
Product coverage
Instead of approaching test coverage as a pure metric of lines executed versus total lines of code, you can instead measure how many key features of your product are covered by quality tests.
A metric like this is more difficult to automatically measure, but has long-term benefits compared to rudimentary test coverage metrics because you know that you’re testing your critical code paths.
Test coverage and code coverage
It’s worth taking a moment here to talk about the terms “test coverage” and “code coverage.” In reality, these terms are used somewhat interchangeably. We’ve introduced a third term here: “product coverage,” which seeks to tease out the distinction between those two concepts.
“Code coverage” pretty much always signifies the number of lines that are executed during your test runs. To some people, “test coverage” is a synonym, but to others, “test coverage” represents a question around how many of your important features are covered by tests.
That’s why we’ve introduced the concept of “product coverage” to avoid ambiguity between these terms.
If you’re thinking about a project to improve your “test” or “code” coverage, it’s important to be clear about what exactly you’re trying to improve, and what benefits you think you’ll gain, right from the start.
Risk coverage
Another criticism of traditional test coverage metrics is that they treat every line of code as equivalent. Realistically, not all lines of code in your software are created equal. A code change that updates the language on an error message is not nearly as important as a change that updates customer billing.
As such, one way to approach test coverage metrics is to instead measure how much of your most critical code is covered by tests.
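A rough way to sketch risk coverage (the module names, coverage numbers, and weights are all invented) is to weight each area’s coverage by how critical it is:

```python
# module: (line coverage, risk weight -- higher means more critical)
modules = {
    "billing": (0.60, 5),
    "error_messages": (0.90, 1),
}

def risk_weighted_coverage(modules):
    """Average line coverage, weighted by each module's risk."""
    total_weight = sum(weight for _, weight in modules.values())
    return sum(cov * weight for cov, weight in modules.values()) / total_weight

print(f"{risk_weighted_coverage(modules):.0%}")  # prints "65%"
```

A plain average of these two modules would read 75%, but the weighted number (65%) surfaces that the risky billing code is the least tested.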
What are the best practices for software test coverage?
If you’re still up in the air about measuring test coverage, you shouldn’t be. You should absolutely measure how much of your code is covered by tests and seek to find the sweet spot for your team and project. But how do you find that sweet spot? Let’s go through a couple of approaches.
1. Identify your key testing metrics
As we’ve noted, pure ratio-based test coverage leaves a lot to be desired. Simply saying “our tests cover X% of our software” doesn’t tell anyone much of any value. Instead, the key to good test coverage is to first think about what you want your tests to do for your team.
Do you have high levels of code churn? Test coverage might help you prevent regressions when you ship changes. Do you regularly ship code that doesn’t meet client requirements? Your test coverage might instead focus on enumerating all of the requirements in a feature, then ensuring that the software meets those requirements.
When you think about what you need your tests to do for you, you’ll find it much easier to know how to measure test coverage later on.
2. Adopt a high-quality automated testing platform
Test coverage by itself is of no value if you never run your tests. The best unit test suite doesn’t do anything if a broken test doesn’t prevent developers from shipping new builds. That’s why high-quality test management tools are a hard requirement for getting return on your test investment.
3. Invest in writing tests
This might feel like it goes without saying, but it doesn’t. The biggest mistake that development teams make with test coverage is not investing any time to write good tests. Software tests really are a very valuable part of your SDLC, but they take time to write and debug.
If you’re thinking about adopting test coverage metrics for your team, that means you recognize the value that testing provides. But if you expect that investing in tests won’t come with any impact to development speed, you’ll quickly find tests falling right back by the wayside.
Increasing test coverage without overdoing it
Imagine that you were trying to identify the metrics necessary for building a good software team. Someone might suggest to you that software developers do a lot of typing, so you should measure how many words per minute each of them can type in a typing test.
On the surface level, this makes a certain amount of sense. And in reality, you mostly do want your software developers to be strong typists. If they’re constantly mis-typing what they intend to insert into their code editor, that makes it more difficult for them to write and ship useful software.
It also increases the likelihood of bugs slipping through the coding process, as they unintentionally do things like misspell words.
So, while it’s a good thing to have high-quality typists, improving the typing skills of your development team will quickly hit a point of diminishing returns. The same is true for test coverage. In the aggregate, test coverage is better than the alternative.
But it’s not a metric that tells you everything you need to know, and constantly trying to shave off smaller and smaller slices of uncovered code will provide diminishing value.
Instead, the right way to approach things is to increase your test coverage by targeting your most valuable untested code first. Prioritize very valuable untested code, but move less valuable code further down your backlog. That kind of work is great as long-term tech debt, but you shouldn’t prioritize it.
Why is test automation critical for test coverage?
We’ve talked a lot about test automation in this post. If you’re not someone who’s currently using test automation, that might seem odd. What does test automation have to do with test coverage? In reality, it’s pretty simple, and once you try adopting test automation yourself, you will likely see what all the fuss is about pretty quickly.
Human beings are flaky creatures. We forget to do things. Kind of a lot. If we rely on human memory to run and analyze tests, the reality of most workplaces is that we’ll do that work for a little while. But as soon as something bigger comes up, or something difficult gets in the way, we’ll just stop doing that.
That slope is pretty slippery. If they find out that skipping a test run or two doesn’t have immediate major consequences, people will start skipping it more often. Pretty soon, teams are only running tests once a week, or even less often. There are likely big gaps in test coverage, as new features come online with minimal or no supporting tests.
This is why test automation is critical for test coverage. When you integrate automated testing directly into your build process and fail the build if any test fails, you put immediate consequences on shipping code that breaks a test.
That automation also allows you to track the test coverage metric on every build. That direct connection allows you to identify and remediate untested code much more quickly than spotty metric recording.
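As a minimal sketch of that kind of gate (the command shown is illustrative, not tied to a specific CI product), a build step just needs to halt when the test suite’s exit code is nonzero:

```python
import subprocess
import sys

def run_tests_or_fail(cmd):
    """Run the test suite; halt the build (SystemExit) if any test fails."""
    result = subprocess.run(list(cmd))
    if result.returncode != 0:
        raise SystemExit("Build failed: at least one test did not pass.")

# In a CI pipeline this might run on every push, for example:
# run_tests_or_fail([sys.executable, "-m", "pytest"])
```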
If you’re just starting out, the idea of trying to lump test automation with test coverage metric measurement probably feels overwhelming. You know that something is wrong, and you probably even have a pretty good idea of how to fix it. But actually doing the work feels like a huge hill to climb, and you know that it’s not going to be easy.
That’s where Tricentis can step in and take a whole bunch of work off your plate. Tricentis is the expert at test automation, and once you have that nailed down, you’ll find that measuring and improving your test coverage is a lot simpler than you think.
Software test coverage is beneficial, but not a silver bullet
If you only come away from this article with one impression, I hope it’s this: software testing is worth the time and effort you put into it. Test coverage is a useful metric for measuring how well you test your software, but it’s a fundamentally “lossy” metric.
Like a lossy audio compression algorithm, test coverage loses information about the quality of your code when you evaluate it. So, you shouldn’t expect that simply improving code coverage by itself will make your software fundamentally better.
However, by investing in test coverage within your software, you’ll find that you have more confidence in the software that you ship.
If you also spend the time to invest in automating your tests, your team will reap the benefits of a strong feedback loop any time you ship code that breaks your tests. You’ll also be able to use code coverage as a useful indicator to help you find where you’re missing tests and improve that code.
To learn how Tricentis tracks what matters most with automated testing, schedule a demo.
This post was written by Eric Boersma. Eric is a software developer and development manager who’s done everything from IT security in pharmaceuticals to writing intelligence software for the US government to building international development teams for non-profits. He loves to talk about the things he’s learned along the way, and he enjoys listening to and learning from others as well.
