Editor’s Note: In the spirit of the famous advice columns of the past, one of our Tricentis Tosca experts has started their own column: Dear Dr. Tosca. Have questions for Dr. Tosca? Send them our way in the comments!
Dear Dr. Tosca,
I am part of a large Quality Assurance division of about 80 colleagues, including test managers, testers, and operation and release managers for the test environments. We are a large enterprise financial company and take care of more than 250 systems, most of which are connected via an Enterprise Service Bus, while others are connected directly.
We have four staging environments and strict release management: six releases a year, one every second month. Each deployment has to be announced at least one month in advance, and dependencies have to be analyzed and approved. Since most of the test execution is done manually, the two-month test period is often too short, because the systems are of really low quality when they are first deployed to our test environment. It takes us a lot of time to identify the root causes of the defects. My colleagues and I are already so frustrated: we often have to repeat planned test cases more than three times in each test period.
Our idea is to start a new testing project in which we automate the test execution for 35 of our critical core systems, covering each system's full functionality.
When we started, however, we realized that we are even more dependent on these 3rd party systems, both internal and external, than we first thought. These systems are often unstable, unavailable, or deployed in a different version in our first staging environment. Even when they are available, it takes a lot of effort to maintain and prepare the test data and to prevent other teams from using the same data.
Is there any other solution for us? What is your recommendation for starting an automation project with a lot of system dependencies?
Mutinous Test Manager
Dear Mutinous Test Manager,
Thank you for your question. I think what you describe is actually quite a common situation. And the good news is: yes, there is a solution!
In enterprise environments, systems require (on average) about 33 other systems for executing full integration or end-to-end (E2E) test cases. Let me ask you this: which systems are you responsible for? For this example, let's say you are responsible for the Online Banking Portal, one of the most important entry points for your customers.
Of course you want to make sure that this system works well, but your Online Portal relies on your banking software to show the actual account balance, transactions, and credit card details, so all of this additional information has to be gathered as well.
Test data management is a powerful tool, but, as in this example, you would have to prepare a lot of test data for each test case in all of the 3rd party systems. And managing that test data adds extra effort to creating and executing each test case.
Even if you have managed to set up the test data, your test execution still relies on the availability of the other systems. Imagine this: you have set up the test cases, included them in an automated test execution overnight, and in the morning 60% of your test cases have failed. You start identifying the root cause, and after hours of investigating you find out that an update was installed on your banking software, so it was not available at the time.
So, the best approach in automation projects is to start with virtualization in parallel, right from the beginning. On the one hand you are defining your test cases for GUI and API tests, and on the other hand you are identifying the services that are required for each test case.
Either you have good documentation and a process description of your system, or (and this is what I have experienced more often) you can use the recording functionality that modern virtualization tools offer. You start with recording: while you are executing your test case, the Service Virtualization tool identifies, tracks, and stores each request and response your Online Portal sends to any 3rd party system: requesting the actual account balance data, displaying credit card information and customer data, and so on.
These requests and responses (we call them RR-pairs, because there is almost always one response to one request) can now easily be converted into a Service Virtualization, and voilà! You are already able to execute this test case against the virtualization.
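To make the record-and-replay idea concrete, here is a minimal sketch in Python. It is purely illustrative: the function names, the dictionary-based storage, and the example banking request are my own inventions, not the format or API of any actual virtualization tool.

```python
import json

# Recorded RR-pairs: one request matched to one response.
# (Illustrative structure only, not any tool's real storage format.)
recorded_pairs = {}

def record(request, response):
    """Store a request/response pair captured during a live test run."""
    # Use the serialized request as the lookup key.
    key = json.dumps(request, sort_keys=True)
    recorded_pairs[key] = response

def virtual_service(request):
    """Replay the recorded response instead of calling the real system."""
    key = json.dumps(request, sort_keys=True)
    if key in recorded_pairs:
        return recorded_pairs[key]
    raise LookupError("No recorded response for this request")

# Recording phase: the test runs once against the real banking back end.
record({"op": "getBalance", "account": "AT12-3456"},
       {"balance": 1250.40, "currency": "EUR"})

# Replay phase: the same test now runs against the virtualization,
# with no 3rd party system involved.
print(virtual_service({"op": "getBalance", "account": "AT12-3456"}))
```

The point of the sketch is simply that once the pair is captured, the virtual service can answer the request without the real banking software being available at all.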
I recommend setting up your virtualization in parallel with your test automation process to reduce the effort of your test automation setup. That way you won't consume test data while setting up your test case, and you won't need to set up test data management in the 3rd party systems at all. After you have recorded and set up the test cases and virtualization, you can easily reuse all those structures and data to cover all the test case variants.
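Reusing a recorded response for other test case variants can be as simple as cloning it and swapping in variant data. The following sketch is again hypothetical (the field names and variant scenarios are made up for illustration), but it shows the idea of deriving many variants from one recording:

```python
import copy

# One recorded response, used as a template for several variants.
# (Field names are illustrative, not a real tool's schema.)
recorded_response = {"balance": 1250.40, "currency": "EUR", "status": "OK"}

def make_variant(template, **overrides):
    """Clone a recorded response and override selected fields."""
    variant = copy.deepcopy(template)
    variant.update(overrides)
    return variant

# Variant 1: a negative balance, to test the overdraft warning.
overdraft = make_variant(recorded_response, balance=-320.15)

# Variant 2: a blocked account, to test the error handling path.
blocked = make_variant(recorded_response, status="ACCOUNT_BLOCKED")

print(overdraft["balance"], blocked["status"])
```

Because the variants are derived from the recording rather than from live data, none of them consume test data in the 3rd party systems.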
Another benefit is that you won't lose time waiting for 3rd party systems to become stable or available. In my last project, waiting for other systems frequently prevented us from meeting our automation timeline. The automated test cases were built, executed, and then failed because the CRM system was not available. You can imagine how annoying that was!
You can maximize your risk coverage and save time by combining API and GUI testing, and by using the identical data and tool basis for the virtualization. In my last project it took about 2.5 hours to set up an automated test case, and an additional 30 minutes to convert the recordings into a virtualization. From there, the test case could be executed again and again. A great benefit, right?
Service Virtualization can do much more for you, and of course it can cover all the different test variants. It can also simulate complex business scenarios, with every step simulated; there is no need for any 3rd party system or for test data in those systems.
So I would recommend that you cut your system dependencies with Service Virtualization. In my opinion, Service Virtualization and test automation are always meant to go hand in hand…