
BI/Data warehouse testing: Your top questions answered

Getting started with Tricentis Data Integrity: Why, when, and how

So your organization has launched a BI/data warehouse test automation initiative—and you’re tasked with leading the charge. Determining how to get started with BI/data warehouse test automation can be quite a feat for “traditional” testers and data quality professionals alike. Just consider some of the complexities involved…

A data warehouse is built by importing data from many different sources (apps, databases, endpoints, cloud-based apps providing data, etc.). All these data points get aggregated, transformed, and managed in a data warehouse, which can be on-prem or cloud-based. Once the information is actually formatted and normalized in the data warehouse, your BI app accesses that data structure and presents views of that data, allowing business users to make informed business decisions.

Now, imagine that someone discovers a data integrity issue in one of the reports—and you need to unravel exactly where and why the problem arose.

Is the report rendering incorrectly?
Is there a problem in the report logic—or the business logic around it?
Is there a problem in the data feed into the BI system?
Is the data being drawn from the appropriate data warehouse instance(s)?
Is there an issue in the load, transformation, extraction, or source of the various data feeds driving data into the data warehouse?

Your test automation should not only help you zero in on the source of the problem, but also help you prevent such integrity issues from reaching the reports in the first place.

As you can imagine, this can be quite a challenge. But although you might be the first in your organization to lead a BI/data warehouse test initiative, you can stand on the shoulders of giants. Here are the most common questions that organizations ask as they advance BI/data warehouse initiatives—and answers that share the best practices they have developed and optimized over the years.

When and where do I start BI/data warehouse test automation?

Working in the BI space means wrestling with and managing thousands upon thousands of constantly changing tables and columns of data to help your business run the best that it can. This is the case whether you are working with Hadoop systems, large containers in an Azure cloud, the Amazon S3 platform, or even local relational databases.

[Diagram: how a BI space is typically set up, from data sources through to reporting and analytics]

Across all these setups, keeping precise track of changes is a challenge, whether it's a new table, a changed or renamed column, or something more subtle like a change in a column's degree of precision. Any of these changes can greatly affect your tests, resulting in time wasted debugging tests to pinpoint what changed, and where, between the source and the target.

Where in the end-to-end data flow do we start automating?

Ideally, you would automate from start to finish: from the data sources to the reports and dashboards. In the real world, you want to begin with the easy wins, automating where there are opportunities to gain early proof points and successes. At Tricentis, we've helped clients start successful BI/data warehouse test automation projects at every point within the data warehouse and BI space.

Which BI/data warehouse tests do we start with?

Always begin with metadata. Metadata tests require almost no maintenance but can answer key questions and provide impact analysis, which can guide a tester’s investigations. Metadata tests will help you answer questions such as:

  • Is my platform stable? Are the tables and columns chosen to be part of the system under test staying the same each time the tests are run, or has a change occurred?
  • If a change has occurred, which of the existing tests are affected by the change?
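As a minimal sketch of this idea, a metadata test can snapshot the (table, column, type) structure of a database and diff it between runs. The example below uses SQLite's `PRAGMA table_info` purely for illustration; the table and column names are hypothetical, and a real warehouse would query its own catalog (e.g., `information_schema`) instead.

```python
import sqlite3

def snapshot_metadata(conn):
    """Capture (table, column, type) triples for every user table."""
    meta = set()
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for t in tables:
        # PRAGMA table_info returns (cid, name, type, notnull, default, pk)
        for cid, name, ctype, *_ in conn.execute(f"PRAGMA table_info({t})"):
            meta.add((t, name, ctype))
    return meta

def metadata_diff(baseline, current):
    """Return columns that disappeared or appeared since the baseline run."""
    return {"removed": baseline - current, "added": current - baseline}

# Demo: a column is renamed between two test runs
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE claims (claim_id INTEGER, amount REAL)")
before = snapshot_metadata(db)
db.execute("ALTER TABLE claims RENAME COLUMN amount TO claim_amount")
after = snapshot_metadata(db)
diff = metadata_diff(before, after)
print(diff["removed"])  # {('claims', 'amount', 'REAL')}
print(diff["added"])    # {('claims', 'claim_amount', 'REAL')}
```

A non-empty diff answers both questions above at once: the platform changed, and any test touching the removed columns is affected.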

Metadata testing is also very powerful when migrating tests from one environment to another. For example, if you’re planning a migration from a developer environment into an integration environment, you want to make sure that changes made in the developer environment propagate to the integration environment. Automated metadata tests are great for verifying the characteristics of new environments quickly and easily.

During the migration, you want to retain a clear idea of the requirements for the tests you want to run, and be able to confirm that those requirements remain the same in the new environment. This gives you a stable platform, so you know that any test errors or failures stem from the actual data and not from the metadata structure.

What types of BI/data warehouse tests should we cover?

It’s useful to divide tests into three categories: vital checks, transformation tests, and profile tests. Within each category, the environment you’re in will dictate what the mix of tests should be.

Vital checks

Like the metadata tests mentioned above, these types of tests help to uncover what is or isn’t worth testing. These tests are critical in helping to prioritize your efforts, and are also easy to apply and maintain. For example, after a data move, vital checks will look at:

  • Metadata
  • Completeness
  • Uniqueness
  • Referential integrity
  • Null/default values

Vital checks can flag whether the metadata or system structures have changed, whether data has been copied incorrectly or duplicated, or whether referential errors were created because complete records were not copied. These are important checks to run before moving on to more complex tests.
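A sketch of what such vital checks can look like in code, again using SQLite and an illustrative claims/policies schema (the table names, key columns, and foreign-key relationship are assumptions for the example, not a real schema):

```python
import sqlite3

def vital_checks(conn, table, key, fk=None):
    """Run basic vital checks on one table; return a dict of findings.

    fk, if given, is (child_column, parent_table, parent_column)."""
    results = {}
    # Completeness: total row count (compare against the source count)
    results["row_count"] = conn.execute(
        f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    # Uniqueness: how many key values occur more than once
    results["duplicate_keys"] = conn.execute(
        f"SELECT COUNT(*) FROM (SELECT {key} FROM {table} "
        f"GROUP BY {key} HAVING COUNT(*) > 1)").fetchone()[0]
    # Null/default values: keys should never be NULL
    results["null_keys"] = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {key} IS NULL").fetchone()[0]
    # Referential integrity: child rows with no matching parent
    if fk:
        child_col, parent_table, parent_col = fk
        results["orphans"] = conn.execute(
            f"SELECT COUNT(*) FROM {table} t LEFT JOIN {parent_table} p "
            f"ON t.{child_col} = p.{parent_col} WHERE p.{parent_col} IS NULL"
        ).fetchone()[0]
    return results

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE policies (policy_id INTEGER PRIMARY KEY)")
db.execute("CREATE TABLE claims (claim_id INTEGER, policy_id INTEGER)")
db.executemany("INSERT INTO policies VALUES (?)", [(1,), (2,)])
db.executemany("INSERT INTO claims VALUES (?, ?)",
               [(10, 1), (10, 1), (11, 99)])  # duplicate key + orphan FK
print(vital_checks(db, "claims", "claim_id",
                   fk=("policy_id", "policies", "policy_id")))
# {'row_count': 3, 'duplicate_keys': 1, 'null_keys': 0, 'orphans': 1}
```

Because each check is a single aggregate query, checks like these stay cheap to run and easy to maintain, which is exactly why they belong at the front of the suite.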

In developer environments, vital checks would be run as soon as there are any changes, but they become less necessary as you approach production. It is fairly unlikely that you would run the completeness test (say, counting rows across all 18,000 tables of a data warehouse system) in production, and it would not be necessary on a regular basis.

Transformation tests

These are the tests written to verify that the logic you are implementing in data transformations (e.g., moving data into fact tables or dimension tables) produces the values you expect. They check that data is being taken in as expected, and results in values with no unexpected nulls, missing values, or overlaps.

These tests are heavily relied on when developing the data transformations, and can be reused as part of the regression suite any time there are changes. They are also critical during integration, as they ensure that existing systems are not affected by changes flowing down from the systems they rely on (lookup tables, reference tables, or source systems). The regression suite monitors whether the transformations are ticking along.

Profile tests

Data profile tests

Data profiles check that the values in the data are correct. Once you’re at a point where the metadata is correct, the records have all been copied, and all the referential records are present, you want to make sure the data itself is valid. If we take a hypothetical motor vehicle insurance claim example, we want to make sure that every claim is associated with a valid and active policy. If we have a claim for a car, it’s no good if it is linked to a motorcycle policy.

Data profiles test the business rules within the system, and are critical during the development cycle of any project. This is especially true if you are merging data with lookup tables and merging data between multiple systems.
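A minimal sketch of a data profile test for the insurance example above. The records and field names are hypothetical; the point is that each business rule (active policy, matching vehicle class) becomes an explicit check that reports which rows violate it.

```python
# Hypothetical records; field names are illustrative, not a real schema.
policies = [
    {"policy_id": 1, "status": "active", "vehicle_class": "car"},
    {"policy_id": 2, "status": "lapsed", "vehicle_class": "car"},
    {"policy_id": 3, "status": "active", "vehicle_class": "motorcycle"},
]
claims = [
    {"claim_id": 10, "policy_id": 1, "vehicle_class": "car"},  # valid
    {"claim_id": 11, "policy_id": 2, "vehicle_class": "car"},  # lapsed policy
    {"claim_id": 12, "policy_id": 3, "vehicle_class": "car"},  # class mismatch
]

def profile_claims(claims, policies):
    """Flag claims that violate the business rules."""
    by_id = {p["policy_id"]: p for p in policies}
    violations = []
    for c in claims:
        p = by_id.get(c["policy_id"])
        if p is None:
            violations.append((c["claim_id"], "no such policy"))
        elif p["status"] != "active":
            violations.append((c["claim_id"], "policy not active"))
        elif p["vehicle_class"] != c["vehicle_class"]:
            violations.append((c["claim_id"], "vehicle class mismatch"))
    return violations

print(profile_claims(claims, policies))
# [(11, 'policy not active'), (12, 'vehicle class mismatch')]
```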

Trend profiling tests

These tests aim to capture the behavior of the system from a data perspective, looking for indications of normal or expected performance, with some degree of variance. In the earlier insurance claim example, trend profiling tests would verify that we are receiving 100 new claims per state per day as expected, or that 80% of those claims are moving from a new to paid state over the course of a business day.

When the results of trend profiling tests are out of range, we can look for external events such as an issue in the payment processing, or dig further into the data.
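A trend profiling check can be sketched as a tolerance band around expected values. The metric names and the 20% tolerance below are assumptions for illustration; in practice the baseline would come from historical data.

```python
def trend_check(observed, expected, tolerance=0.2):
    """Flag metrics that deviate from the baseline by more than the tolerance."""
    out_of_range = {}
    for metric, value in observed.items():
        baseline = expected[metric]
        if abs(value - baseline) > tolerance * baseline:
            out_of_range[metric] = (value, baseline)
    return out_of_range

# Expected ~100 new claims per state per day; ~80% move from new to paid
expected = {"new_claims_TX": 100, "new_claims_CA": 100, "paid_ratio_TX": 0.80}
observed = {"new_claims_TX": 104, "new_claims_CA": 45, "paid_ratio_TX": 0.79}
print(trend_check(observed, expected))
# {'new_claims_CA': (45, 100)} -- only CA's claim volume is outside the band
```

An out-of-range result is a prompt to investigate, not necessarily a defect: the drop in CA claims might trace back to a payment-processing issue or a genuine external event.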

Demo: How to measure test coverage

This short video introduces the Tricentis Data Integrity interface, starting with a requirement structure, and shows how tests are built out using building blocks that represent the data quality dimensions that need testing. Tricentis can visualize the different layers of your requirements, as well as the level of risk coverage and execution state.

How do I minimize test maintenance?

Test maintenance is minimized when core design principles are applied consistently during test creation. Be clear on:

  • Your requirements, what you want to test, and what aspect of the quality or action you are addressing
  • The business rules that are driving the transformation. What are the possible combinations of the incoming data, and what are the different outputs you are expecting?

By doing this, you separate what you’re trying to test from how those tests are going to be executed. It’s useful to imagine that you’re new to a team, and ask yourself whether you can easily define what you are looking for, and how to tell if the implementation is working.

Demo: Testing ETL logic with a vehicle insurance example

For this video, we return to the example of the motor vehicle insurance company, which needs to select the correct policy class depending on a vehicle’s size or class. It illustrates how to test the transformation logic without falling into the trap of replicating the ETL tool’s logic in your tests, which merely verifies that bugs execute as written rather than testing the logic itself.

Notice how separating what needs to be tested from how the test is going to be executed makes maintenance much simpler down the road.

How do we manage the tradeoff between automation and manual testing?

Firstly, you may not have the resources to run all the possible automated tests. That’s where risk-based testing comes in. Different requirements can be assigned different weightings to represent the risk levels associated with each test failing. Tests with higher weightings can be automated first, so you can prioritize within your test portfolio.
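The weighting idea can be sketched in a few lines: assign each candidate test a risk score, then automate in descending risk order (the test names, scores, and tie-breaking rule below are illustrative assumptions).

```python
# Hypothetical test portfolio with risk weightings and estimated effort
tests = [
    {"name": "referential integrity: claims -> policies", "risk": 9, "effort_hours": 4},
    {"name": "report footer formatting",                  "risk": 2, "effort_hours": 1},
    {"name": "premium calculation transform",             "risk": 8, "effort_hours": 6},
    {"name": "metadata drift on staging tables",          "risk": 7, "effort_hours": 2},
]

def automation_order(tests):
    """Highest risk first; break ties by preferring lower automation effort."""
    return sorted(tests, key=lambda t: (-t["risk"], t["effort_hours"]))

for t in automation_order(tests):
    print(t["risk"], t["name"])
```

Even this simple ordering forces the useful conversation: agreeing on the risk scores is where the business and testing logic get drawn out.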

As you make the tradeoff between what is automated and what isn’t, it’s important to go beyond a simple ROI calculation of the time and cost of automated vs. manual testing. For example, regression testing is critical for alerting you when something changes elsewhere among interlinked systems. Even if you are not actively developing that other system, you still need to ensure that it is running as intended if you want to ensure that your end-to-end process continues to meet expectations. Having a regression set that can be run regularly can save a lot of time and grief in the long term.

This sort of automation mapping is a very valuable exercise in itself, as it draws out many questions about the business and testing logic, and can shed light on bugs or processes that need further evaluation.

The testing that remains manual is what we refer to as exploratory testing, where there are questions and lines of thought that cannot be captured by software. The creativity of a human tester is still needed for less procedural situations, and for uncovering the more hidden risks.

What are the common challenges that arise over the course of a typical BI/data warehouse initiative?

Understanding of risk and coverage

We have seen multiple aspects of the BI process being tested through scripts, from ETL processes to the collection of data for BI reports. The scripts are often very well written, but rarely well understood beyond the original tester, especially in the wider context of what they are addressing and the coverage they provide within the overall system.

Over time, BI projects come to rely on a big pool of scripts (or possibly one joint script), with no detailed knowledge of what is covered and what is left uncovered. From a test management perspective, it’s the impact of what is not covered that can have severe consequences, and recovering that knowledge from teams that use scripts is a big challenge.


Siloed teams

Many teams are responsible for specific parts of the BI/data warehouse process. We’ve seen setups where one team extracts data from a source system into something running on Hadoop, a second team imports the contents of the Hadoop table into staging tables, and a third team creates the dimensions and measures of the inner layer.

In this kind of scenario, these teams typically do not have a shared environment. Without a pooled set of resources on the testing and test formation front, you end up with overlaps or gaps in testing where things fall through the cracks between teams.

Data quality

The most common problem organizations face is the quality of the data entering the data warehouse in the first place. It may be logically wrong (which would be caught by the profile tests mentioned above), or a quality issue at the source, where you don’t necessarily have control. Teams in siloed organizations will also address this quality issue inconsistently, making it hard to tell what was caught from project to project.


Forgotten scripts

BI/data warehouse testing teams can get too focused on checking whether their ETL processes work, and forget to ask whether they will continue to work. Testers in the BI space are capable of creating some brilliant SQL statements, but far too often these end up abandoned once the development phase of a project is complete. These forgotten statements remain in the BI ecosystem, where they are used by other systems, which are themselves being adjusted and changed.

The unintended effects of these statements can then flow through the system, only getting detected when a user feeds back that the values in their report are incorrect. Catching errors at this late stage makes spotting the original issue more complicated. Sometimes, however, it’s not users who catch these errors, but regulatory bodies querying the values in your reports.

Why Tricentis Data Integrity?

Today, the concept of data and data cleanliness is often discussed in relation to some specific database. Consequently, tools that address data cleanliness or data integrity are built into the database itself, around vendor-proprietary technology. This presents problems once you step beyond a single database: issues with data integrity turn into issues with the integrity of a business decision. Having a comprehensive, end-to-end process that ensures the correct data reaches the business user is critical.

Tricentis Data Integrity was designed to provide a powerful decision integrity platform, with source-agnostic, end-to-end capabilities that allow you to validate every import, aggregation, and transformation from the source databases through to the BI reports. With the types and volume of source data only set to increase, getting started on your BI/data warehouse testing now is essential for protecting the continued integrity of your business data and business decisions.
