Finding success: Visibility, traceability, and defining guardrails to consistent quality
Organization of projects, processes, and teams was an important first step. Projects in qTest are broken up by functionality and integrated with specific boards in Jira. Because functionality overlaps from one team to the next, test case sharing in qTest lets teams quickly locate and execute existing test cases – both manual and automated. This reduces redundant test cases across the teams from one release to the next and makes cross-team test case maintenance easier.
“The biggest factor was setting standards,” says Acosta. “Being organized when you are dealing with a large team and several large projects helps create more effective management of resources.”
A top priority was implementing specific guardrails to ensure consistency and ease of collaboration across teams. One example was the naming convention for test assets. Because these fields are customizable in qTest, Q2’s test cases now follow a specific naming convention that matches Bitbucket and other repositories, so both testers and developers can easily identify what they need when test cases need to be addressed.
Custom fields and statuses have also helped evolve the testing process for Q2. For example, in addition to custom fields indicating where the test is run (mobile, web, or both), custom statuses are used to triage failed test cases. Failed test cases trigger a review to determine the source of the failure.
The team realized failures were sometimes due to server downtime, offline mobile devices, network issues, or other test environment problems not directly related to defects in the code. This visibility has allowed the testing team to work more closely with various departments within IT to optimize how and when they execute test cases, significantly reducing false positives caused by environment issues.
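The triage flow described above can be sketched as a simple classifier over a failed run's log output. This is a minimal illustration only, not Q2's actual implementation: the status names and keyword patterns below are assumptions, and in qTest the resulting category would be recorded via a custom status field.

```python
# Minimal sketch of triaging a failed test run by inspecting its log.
# The categories and keyword patterns are illustrative assumptions,
# not Q2's actual rules.

ENVIRONMENT_PATTERNS = {
    "Server Down": ("connection refused", "503", "gateway timeout"),
    "Device Offline": ("device offline", "no devices attached"),
    "Network Issue": ("dns", "socket timeout", "network unreachable"),
}

def triage(failure_log: str) -> str:
    """Return an environment cause, or 'Needs Dev Review' for a likely defect."""
    log = failure_log.lower()
    for status, patterns in ENVIRONMENT_PATTERNS.items():
        if any(p in log for p in patterns):
            return status
    # No known environment signature: likely a real defect in the code.
    return "Needs Dev Review"

print(triage("HTTPSConnectionPool: connection refused"))          # Server Down
print(triage("AssertionError: expected balance 100, got 90"))     # Needs Dev Review
```

Only the runs that fall through to "Needs Dev Review" become tickets for development, which is how this kind of triage reduces noise.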
“qTest has given much greater confidence with the larger development organization and even more so with the delivery and support organization. I can now SHOW them what I did. It has allowed us to make data-driven decisions instead of gut feel decisions. Data-driven decisions take the emotion out of the conversation to determine how to best proceed forward.”
Integrating test automation with DevOps pipelines
Integration across the DevOps pipeline has streamlined work across the Q2 team. When a developer checks in code to Bitbucket, a job is triggered to deploy the new build to the test environments. Once the health check completes, a Jenkins job kicks off the automation suite, and all test results are logged in qTest. To keep other stakeholders up to date, a workflow triggered by qTest Pulse shares results to a Microsoft Teams channel and sends an email notification to the team.
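The reporting step of a pipeline like this might look roughly as follows. This is a sketch under stated assumptions: the qTest host, endpoint path, payload field names, and Teams webhook format shown here are hypothetical placeholders, not the documented qTest or Teams contracts.

```python
import json

# Sketch of a pipeline's reporting step: after Jenkins runs the automation
# suite, push results to qTest and notify a Microsoft Teams channel.
# Host, endpoint paths, and field names are illustrative assumptions.

QTEST_BASE = "https://example.qtestnet.com/api/v3"  # hypothetical host

def build_qtest_log(status: str, build_url: str) -> dict:
    """Shape a test-log payload for one automated run (assumed schema)."""
    return {
        "status": status,                      # e.g. "PASSED" / "FAILED"
        "note": f"Jenkins build: {build_url}",
        "automation": True,
    }

def build_teams_card(passed: int, failed: int, build_url: str) -> dict:
    """Shape a simple Teams webhook message summarizing the run."""
    return {
        "text": f"Automation complete: {passed} passed, {failed} failed. "
                f"Details: {build_url}",
    }

def post_results(session, project_id: int, logs: list[dict]) -> None:
    """Push each log to qTest; `session` is e.g. a requests.Session with auth."""
    for log in logs:
        session.post(f"{QTEST_BASE}/projects/{project_id}/test-logs", json=log)

if __name__ == "__main__":
    print(json.dumps(build_teams_card(93, 7, "https://jenkins.example/job/42")))
```

Separating payload construction from the HTTP calls, as above, also makes the pipeline step easy to unit test without a live qTest instance.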
This process allows for greater confidence in each release. The Q2 team now has a clear understanding of where the release stands and what’s required before release.
“This has been an evolution from having gut feel reactions to having tangible metrics,” Acosta says. “If I go back to the beginning before qTest, we would do a daily standup in prep for releases. I would hear statements like ‘we are on track’ or ‘we are fine.’ This did not tell me what percent complete we are toward the go-live date. Today, I get ‘we are 93% complete’ or ‘we are blocked by issue #blank in Jira.’ This visibility into what we are doing creates so much more clarity and confidence for the broader organization.”
Data-driven releases
Data is at the center of every decision today, both for release readiness and in determining ways the team can continue to improve their DevOps process. Stakeholders from every level of the business rely on this data to paint a clear picture of where a release stands and measure progress towards goals – including dev managers, senior leadership, scrum teams, and test strategists. Q2 leverages qTest APIs to pull information into the data warehouse every 24 hours. PowerBI is used to build custom reports and dashboards that are critical for decision-making.
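A nightly extraction job of the kind described might flatten the API responses into warehouse rows that PowerBI can report on. The nested response shape and field names below are assumptions made for illustration, not the documented qTest API schema.

```python
# Sketch of flattening qTest test-run records into flat warehouse rows
# for PowerBI reporting. The response shape is an assumption based on
# the description above, not the documented qTest API schema.

def flatten_runs(api_page: dict) -> list[dict]:
    """Turn one page of (assumed) test-run JSON into flat rows."""
    rows = []
    for run in api_page.get("items", []):
        rows.append({
            "project": api_page.get("project_name"),
            "run_id": run["id"],
            "status": run.get("status", "UNKNOWN"),
            "is_automated": run.get("automation", False),
            "platform": run.get("platform", "web"),  # mobile / web / both
        })
    return rows

sample = {
    "project_name": "Digital Banking",
    "items": [
        {"id": 1, "status": "PASSED", "automation": True, "platform": "mobile"},
        {"id": 2, "status": "FAILED"},
    ],
}
for row in flatten_runs(sample):
    print(row)
```

A scheduler (a nightly cron or pipeline job) would page through each project's runs, apply a transform like this, and load the rows into the warehouse every 24 hours.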
Data from each release is used to manage both execution metrics and test metrics. Execution metrics – such as the percentage of test cases that passed, how many were run on mobile, and the percentage of defects reported in Jira – are used to assess the readiness and quality of each release. Test metrics are used to determine how and where the teams can improve, for example: what percentage of each team’s test cases are automated (ensuring automation is not declining as functionality and test assets grow), which features need more automation coverage, and where the most issues are being found.
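Metrics like those above reduce to simple aggregations over flat run records. A minimal sketch follows; the field names are assumptions, and a real warehouse schema would differ.

```python
# Sketch of release metrics computed over flat run records.
# Field names are illustrative assumptions.

def release_metrics(rows: list[dict]) -> dict:
    """Aggregate pass rate, automation rate, and mobile coverage."""
    total = len(rows)
    passed = sum(1 for r in rows if r["status"] == "PASSED")
    automated = sum(1 for r in rows if r.get("is_automated"))
    mobile = sum(1 for r in rows if r.get("platform") in ("mobile", "both"))
    pct = lambda n: round(100 * n / total, 1) if total else 0.0
    return {
        "pct_passed": pct(passed),
        "pct_automated": pct(automated),
        "pct_mobile": pct(mobile),
    }

rows = [
    {"status": "PASSED", "is_automated": True, "platform": "mobile"},
    {"status": "PASSED", "is_automated": True, "platform": "web"},
    {"status": "FAILED", "is_automated": False, "platform": "both"},
]
print(release_metrics(rows))  # {'pct_passed': 66.7, 'pct_automated': 66.7, 'pct_mobile': 66.7}
```

Tracking the automation percentage per team over time is what surfaces the decline the text warns about as functionality and test assets grow.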
The ability to pull critical data from a central location across teams with qTest has helped facilitate this process and allowed for the continued growth of Q2’s innovation and quality.
- Custom statuses allow failed test cases to be triaged in real time – reducing the number of tickets that need to be addressed by development
- Test cases are shared across multiple teams to reduce redundancy and work
- Standardized test management process across teams
- Integration into CI/CD pipeline allows for rapid results reporting across multiple channels
- Multiple automation tools and frameworks integrated, including Jira, Jenkins, Bitbucket, Selenium, Appium, and homegrown frameworks
- Data is pulled from all projects every 24 hours to enable custom reporting and dashboards in PowerBI for data-driven decision making
- Increased confidence in testing and improved quality of releases