Wolfgang Gaida and Markus Bonner, Release and Test Managers at insurance software company Twinformatics, share their expertise in successfully implementing automated and performance testing for software that millions of customers depend on. These long-time automation experts discuss developments in the pipeline, such as extending their KPIs and utilizing virtual machines, and highlight the importance of step-by-step digital transformations. The transcript below has been edited lightly for clarity and brevity.
Emma: Greetings listeners, it’s your host Emma. We have a great interview lined up with our guests also based in the currently wintery but wonderful city of Vienna. With us today we have Wolfgang Gaida and Markus Bonner, both Release & Test Managers at Twinformatics. A very warm welcome to the both of you onto the podcast.
This is the second episode of our Insurance Testing Innovation series, and we’re chatting with the leaders driving change in the testing space. We’ve worked with Twinformatics for well over 10 years now; they are the IT provider for the Vienna Insurance Group, one of the biggest insurance companies in Austria. You are making software for 25,000 employees and millions of customers; if you’re tuning in from Vienna, chances are you are using their software for your insurance needs—me included.
Let’s start from the beginning. You joined us in 2008, and since then, you have been innovating and testing for a substantial amount of time, and you’ve had fantastic results.
We’ve seen you increase test coverage from 10% to 80%, with ten times greater system coverage. How have these changes impacted the quality of the software that you deliver?
Markus: Actually, we’ve been using the tool since July 2007, so it’s been a long time now. Increasing the test coverage significantly increased trust in the automated test cases that we provide and regularly run. This is a benefit for our business colleagues, who could shift their test focus from manual regression tests to testing the new functionalities. In turn, this allowed us to increase the volume of new software delivery significantly.
Emma: It’s great to hear then that Tosca has helped you scale up your testing to such a great extent, and that in turn it’s really improved the quality of your systems. In the insurance space especially, quality is incredibly important when your customers really depend on the systems.
As your customer base expands, would you say that there is a greater demand for quality than ever before?
“When we opened our systems from internal-only use to our customers, the quality demand of course increased significantly. When a system doesn’t work internally, that’s one thing; when it doesn’t work for our customers, it’s really bad on the market. So it was really important to increase the software quality for the insurance business.”
Emma: Perfect. It’s awesome to see that Tosca has helped you get there as well.
Our listeners may well be aware that Twinformatics won the Tricentis Trailblazer Customer Award last year, which recognized you for expanding continuous testing into non-functional testing to further accelerate your release cycles with greater confidence.
How has this more advanced performance testing instilled this conviction? How has that been for you to release with more confidence?
Wolfgang: Thank you once more for the award. This was a big honor for us—for Twinformatics, and for me. When I look back at the transformation of load and performance testing, I must say that we have also seen a transformation in the acceptance of performance testing results.
Let’s have a look at the beginning: we started three years ago with Tricentis NeoLoad. This means results and reporting were generated by NeoLoad, and we had no automated approach. During the transformation, we set up a performance test lab—which is now fully integrated into our system landscape—starting with the archiving systems, Bitbucket and GitLab, then integrating the CI tool for automated non-functional tests.
We have also implemented the Tosca integration, meaning we have Tosca test cases and pen test cases that NeoLoad can record automatically. Given a Tosca test case, we can have a load test ready in, let’s say, one hour. We also have an on-premise NeoLoad Web installation, integrated with the JIRA component.
“We implement these daily automated load tests, and we have a new rule in our process that the execution of load testing before a release to a productive system is mandatory.”
Wolfgang: This was a long process of learning. At the beginning, we had problems: developers accepted defects only for functional errors. Over time, they became aware that non-functional issues are important too. For example, when we have long response times, we have unstable software with unexpected, meaningless error messages shown to the end user. Later in the process, we saw the advantage and the need of performance testing, and the developers also made corrections based on non-functional defects. Cases were analyzed together, and we held dedicated meetings for load testing.
“We learned that load testing especially is a team sport, where each member of the project or testing team is involved. We also learned to evaluate the results of the automated performance tests, and we have the mindset that slow is the new downtime.”
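Wolfgang’s “slow is the new downtime” mindset can be pictured as a simple release gate: a load-test run fails not only on functional errors but whenever a response-time KPI breaches its threshold. A minimal sketch in Python—the KPI names (`avg`, `p90`) and thresholds are illustrative, not Twinformatics’ actual SLAs:

```python
# Hypothetical release gate: a run fails when response-time KPIs breach
# their SLA thresholds, even if every request succeeded functionally.
# "Slow is the new downtime."

SLA_MS = {"avg": 500, "p90": 1200}  # illustrative thresholds, in milliseconds

def gate(metrics: dict) -> tuple:
    """Return (passed, violations) for one load-test run.

    `metrics` maps KPI name -> measured response time in milliseconds,
    e.g. {"avg": 340, "p90": 900}.
    """
    violations = [
        f"{kpi}: {metrics[kpi]} ms > {limit} ms"
        for kpi, limit in SLA_MS.items()
        if metrics.get(kpi, float("inf")) > limit
    ]
    return (not violations, violations)
```

In a pipeline, a failed gate would block promotion to the productive system, mirroring the mandatory pre-release load test described above.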
Emma: Great. It’s awesome to hear that you’ve implemented this new rule that performance testing is mandatory, and that you’re all taking a closer look at the non-functional requirements, improving both the performance of the systems and the visibility that NeoLoad and the Tosca integration offer. That has clearly been taken on board across the company with the meetings, and making it the golden rule is really good to see.
The direction the industry is heading in is performance engineering; it’s a term you hear more and more, and it should be embedded in the testing lifecycle. So it makes a lot of sense to me that you would combine Tosca with NeoLoad to satisfy your testing needs.
I’ve heard you mention before, Wolfgang, that with this you’ve gone from executing four test cases a year to about 365, so that’s once a day: a 90-fold increase, covering so many more systems. It’s really great to see that success across the board. In terms of what you test, I understand that you are automating load tests across both web applications and SAP, and we find that many of our customers are in the same boat.
Predominantly, you are testing web UI and NetWeaver under SAP. I would love to hear any lessons that you’ve learned from testing this software, particularly in the insurance space.
Markus: One basic lesson for us was that we needed a dedicated, skilled team in order to build automated test cases that mean something to the business. The second lesson for me was that we need easy-to-maintain test cases. That was also the reason we shifted from JMeter to NeoLoad for load and performance testing.
Wolfgang: Our main focus for now is the testing of web applications. We also tested SAP applications, but only one application for claim management. Now, SAP is in the queue.
“We had several lessons learned during our test process. In the preparation phase, when we start planning and recording the test cases, we need at least a storybook or an automated Tosca test case that is well described from end to end. We want to minimize the effort during scripting.”
Wolfgang: In this storybook we also need a description of the test login data that can be used. We do not like to do trial and error during load testing.
The execution phase is organized as an online meeting in which the necessary stakeholders take part. They are responsible for monitoring the systems, collecting traces, and producing the consolidated report in the analysis phase. In the results meeting we discuss the results and agree on measures and next steps. One important statement: load testing is a team sport for us.
Emma: Awesome. So it’s all hands on deck then.
Are you taking on an Agile scrum framework with these meetings, getting teams to check in regularly?
Wolfgang: These meetings are not so regular; they are more or less on-demand. We have a load test, execute it, and then we have the result meeting to decide what next steps to take.
Emma: That makes sense. It’s interesting that you take this no-trial-and-error approach; you set strict requirements, and you’ve learned it’s important to have them from the beginning. Historically, testing has often been an afterthought, so it’s good that you’re aligning the testing requirements early on so that your processes can move faster.
Wolfgang: Yes, of course.
Emma: So clearly you have a really good setup there at Twinformatics. You have the people, the tools, and the processes in place.
Part 1 outro
With over 15 years of test automation experience under their belt, there is a lot to learn from Wolfgang and Markus! The trust in this automation, accumulated over the years, has brought about increased test coverage and therefore increased confidence in their software delivery. With Twinformatics’ software serving a significant customer base, it makes sense that performance testing is a critical part of the puzzle, especially in an insurance context where customers really do depend on their services.
Emma: We’re relatively early on in 2022.
What new initiatives do you have planned for the year?
Markus: We have a few topics in the pipeline that we want to achieve in 2022.
“One important topic is that we want to pilot Tricentis Live Compare to focus more strongly on the details and functionality that are really used by our business colleagues, and on the parts of the software that have really changed within our landscape. We expect to see a lot of benefits and an increase in the quality of our software testing.”
Markus: We want to implement new scheduling for the automated test cases. We will do this with Jenkins and/or other new technologies. Another very interesting topic for us is the self-healing functionality that you provide now, and we want to try this out to see how much that can help us in the maintenance of the test cases.
Another important topic is the new reporting. We have meetings planned with the Tricentis team soon, and we want to see how it works and how much we can utilize it. This is certainly an important topic so that when we have good test results, we can present them in a respectable way. Unfortunately, we still have a lot of cleanup to do of the technical debt in all the automated test cases, and we hope to complete this task in the first half of this year.
Emma: Excellent, so you have lots ahead then. It’s great to see that you’re going to try Live Compare for that smarter impact analysis, and you’re clearly very on the ball with all the Tosca features. It’s awesome to have super users like yourselves to try them out.
For our listeners, DEX is Tosca’s Distribution Server, which speeds up your testing by automating across multiple virtual machines in the cloud. So that’s an exciting adventure to go on. It also makes sense that you would be cleaning up your library of existing test cases.
Wolfgang: I also have some plans, especially for load and performance testing. First of all, we will switch to new virtual machines in our test lab. We’ve got a new NeoLoad Web server for productive environments—a new stage—and we’ll also implement a Jenkins integration, which will also be used for Markus’ test topics. Those are only our infrastructure topics. We have new performance tests for new campaigns in the queue—for example, new software in the bank software integration and the customer portal—and we will also extend our internal process after performance testing. I heard there’s a tool that can collect and compare test results from NeoLoad.
“We often have the question: how good is our current software compared to a baseline, to the last official release, or to other versions? There’s a tool that can handle this and answer the question quite automatically. In 2022, we will integrate this tool into our system landscape and into our performance testing.”
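The baseline comparison Wolfgang describes can be reduced to a simple rule: flag any KPI of the current run that has degraded beyond a tolerance relative to the chosen baseline. A hypothetical sketch—the KPI names, units, and 10% tolerance are assumptions for illustration, not the tool’s actual behavior:

```python
def regressions(baseline: dict, current: dict, tolerance: float = 0.10) -> dict:
    """Compare KPI values (e.g. response times in ms) of the current run
    against a baseline release; report KPIs that degraded by more than
    `tolerance` (10% by default). KPI names are illustrative.
    """
    worse = {}
    for kpi, base in baseline.items():
        cur = current.get(kpi)
        # Flag only KPIs present in both runs that got measurably slower.
        if cur is not None and base > 0 and (cur - base) / base > tolerance:
            worse[kpi] = {"baseline": base, "current": cur,
                          "change": round((cur - base) / base, 3)}
    return worse
```

A run could then be compared automatically against the last official release after every load test, answering the “how good is our software now?” question without a manual review.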
Wolfgang: It is also important that we work out further KPIs (key performance indicators) for the SLAs and for response times, and we will continue specifying the non-functional requirements. Those are our topics in the performance landscape.
Emma: Lots coming up then. You’re clearly always looking for the next innovation, which is great to see.
For the load and performance testing, you mentioned these new virtual environments; will they allow you to virtualize your systems before they go into production?
Wolfgang: Yes. Some systems are virtualized and some are physical. It depends on the system.
Emma: Great. I’m looking forward to seeing how the rollout goes for all of these initiatives.
In 10 words or less, what advice would you give to others working on digital transformations in a regulated field like insurance?
Markus: What’s very important is to build a relevant business risk-weighted automated test set for functional and non-functional topics, and to include this into your deployment processes, building this with dedicated automation experts. If you don’t have them, get the right partners to do that.
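Markus’ “business risk-weighted” test set is commonly built by scoring each candidate test case on failure probability and business impact, then automating the highest-risk cases first. A generic sketch—the 1–5 scoring scheme and the case names are illustrative, not Twinformatics’ actual process:

```python
def prioritize(cases: list, budget: int) -> list:
    """Pick the `budget` highest-risk test cases for automation.

    Each case is a dict: {"name": str, "probability": 1-5, "impact": 1-5};
    risk = probability * impact, a common risk-based testing scheme.
    """
    ranked = sorted(cases, key=lambda c: c["probability"] * c["impact"],
                    reverse=True)
    return [c["name"] for c in ranked[:budget]]
```

The same scores can be reused at deployment time to decide which subset of the automated suite must pass before a release goes out.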
Wolfgang: I would say start the transformation, make it step by step—no big bang—and use NeoLoad.
Emma: I like that three-step process with a big finish with NeoLoad there. Different angles but with similar goals.
If you could change one thing about the world that we live in today in application development, what would that be?
Wolfgang: I would say that business analysis should be more professional.
“There should be more of an attempt to find errors in earlier phases, because they are cheaper to fix. On a different topic, the system integration tests should be done by an independent team.”
Markus: My thoughts on this: artificial intelligence taking over software development based on documented or scanned business needs. I think that automated tests should be generated and risk-weighted based on documented or scanned business processes. There are still improvements to be made here.
Part 2 outro
It’s abundantly clear that Wolfgang and Markus have an excellent overview of Twinformatics’ testing landscape, with continuous improvements and developments in the pipeline. Beyond the software requirements, from implementing further KPIs to switching to virtual machines, the people aspect does not go unnoticed: working with the right people in the right way is key to success.
If you’d like to see what Tricentis Tosca and Tricentis NeoLoad can do for you, check out our products to find out more and trial them today.
Check out the latest podcast episodes for more insights from thought leaders like Wolfgang and Markus.