
AI in QA: What leading quality experts want every team to know

Tricentis leaders explore how AI is transforming QA from test execution to decision intelligence, emphasizing trust, context engineering, and human judgment at the center of quality.

Jan. 27, 2026
Author: Sarah Welsh

Our goal with the Tricentis blog is to distill insights that help QA professionals navigate the massive, AI-driven transformation happening across the software delivery landscape. To that end, I reached out to experts across Tricentis, from product and services to marketing and strategy, to hear what they’re really thinking about AI in QA right now.

This group brings decades of experience building testing products, guiding enterprise transformations, and shaping how organizations adopt AI. Their perspectives span the full software lifecycle, giving a uniquely holistic view of where QA is headed next.

Yes, AI is accelerating QA. But across every conversation, the same themes surfaced: trust, judgment, and context now matter more than automation volume. The experts I spoke with consistently described a shift from test execution to decision intelligence, where the value isn’t in generating more tests, but in understanding what to test, why it matters, and whether the AI’s outputs can be trusted.

Another pattern: prompting isn’t the new QA skill — context engineering is. Instead of writing every test themselves, QA teams will increasingly be responsible for giving AI the right information, constraints, and business logic, and then validating the decisions it produces.
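To make “context engineering” concrete, here is a minimal sketch of what it can look like in practice. Everything in it, the class, the field names, and the example values, is hypothetical and invented for illustration, not a Tricentis API; the point is that the QA team curates and owns the context the model sees, while the AI produces the tests.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a QA team curates the context an AI test generator
# receives, rather than writing each test by hand. Names and fields are
# illustrative only.

def bullets(items: list[str]) -> str:
    return "\n".join(f"- {i}" for i in items)

@dataclass
class TestGenerationContext:
    application: str                                          # system under test
    business_rules: list[str] = field(default_factory=list)   # domain logic the AI must respect
    constraints: list[str] = field(default_factory=list)      # hard limits: data, environments, compliance
    recent_changes: list[str] = field(default_factory=list)   # what changed, to focus generation

    def to_prompt(self) -> str:
        """Render the curated context as a structured block for the model."""
        return "\n\n".join([
            f"Application under test: {self.application}",
            "Business rules:\n" + bullets(self.business_rules),
            "Constraints:\n" + bullets(self.constraints),
            "Recent changes:\n" + bullets(self.recent_changes),
        ])

ctx = TestGenerationContext(
    application="Order Management",
    business_rules=["Orders over $10,000 require manager approval"],
    constraints=["Use masked customer data only"],
    recent_changes=["Discount calculation moved to a new pricing service"],
)
print(ctx.to_prompt())  # this curated block, not the generated tests, is what QA owns
```

The validation step then runs in the other direction: humans review the generated tests against this same context before any of the output is trusted.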

What follows are candid answers to the questions shaping AI’s role in QA today: what you should be thinking about, what you should be preparing for, and why human judgment must remain at the center.

What questions should QA leaders be asking about AI that they aren’t asking yet?

Brad Purcell, AI Strategist: “QA leaders focus a lot on AI for QA, but they need to be looking at the QA for AI side of things. That’s how they’ll get to ROI beyond speed. That needs to include things like AI model validation, testing for bias and drift, AI explainability and auditability, data ownership, and setting realistic limits for autonomy with a clear human-AI division of labor.”

Annan Patel, Senior Director, Product Management: “What activities and functions do you feel comfortable letting AI conduct autonomously? Which ones do you believe will require human-in-the-loop? What would you have your people focus on if AI can automate 10%, 25%, or even 50% of their current activities?”

Adnan Ćosić, Senior Product Marketing Manager: “The most important part is not the technology, but what you want to achieve with it. How does the new AI enhance your core business processes? How does it help you serve your customer better?”

Chris Trueman, VP, LiveCompare: “What has been done to enrich the AI specifically for quality assurance and for my most important applications?”

Scott Erlanger, Product Marketing Director: “What are the highest value places in your QA strategy to use AI? Once those are identified, how will you set up a reliable process that helps you get the most out of AI? What guidelines should you put in place to ensure the AI is secure, and that what it generates is correct?”

Jason Wides, Senior Director, Professional Services: “How do we prevent blind trust in the AI output, and how do we ensure QA AI does not align with mistakes the dev AI might be making?”

Harit Patel, Head of Product, Test Management: “Most QA leaders are asking, ‘How do we automate more?’ But the better question is, ‘How do we trust the outcomes AI is producing?’”

Takeaway: Ask harder questions about how AI makes decisions

Across all of the perspectives on this question, a consistent thread emerged: QA leaders need to ask harder questions about how AI makes decisions, what risks those decisions introduce, and where human judgment remains essential. The future of QA will belong not to the teams that move fastest, but to the teams that think the most critically.
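One concrete example of the “QA for AI” checks Brad raises is drift testing. The sketch below uses the Population Stability Index (PSI), a generic statistic for comparing a model input’s production distribution against its validation baseline; the data is simulated, and the 0.10 and 0.25 thresholds are common industry rules of thumb, not Tricentis recommendations.

```python
import numpy as np

# Hypothetical sketch of one "QA for AI" check: data drift testing with the
# Population Stability Index (PSI). Baseline and live samples are simulated.

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a validation-time baseline and a live production sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty buckets so the log term stays defined
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # input distribution at validation time
live = rng.normal(0.4, 1.2, 10_000)       # what production traffic looks like now

score = psi(baseline, live)
if score > 0.25:       # > 0.25 is a common "significant drift" rule of thumb
    print(f"PSI={score:.3f}: significant drift, re-validate the model")
elif score > 0.10:     # 0.10-0.25 usually reads as "monitor closely"
    print(f"PSI={score:.3f}: moderate drift, monitor")
else:
    print(f"PSI={score:.3f}: stable")
```

The same pattern extends to output drift and bias checks: define a baseline, pick a distance measure, and fail the build when the gap crosses an agreed threshold.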

What’s one AI capability that QA teams will need by the end of 2026, and why?

Brad: “Your team will need to shift from validating code that behaves predictably to continuously validating systems whose behavior evolves. This will catch not just failures but ‘behavioral anomalies’ that indicate a model has drifted from its intended purpose. Without this, companies will face a growing gap between ‘all the tests passed’ and ‘it’s actually working as intended in production with real users and real data.’”

Adnan: “First, we need to realize that AI will not do the thinking for us, but it will, instructed and governed in the right way, relieve us from a lot of the work we consider cumbersome. So, understanding how to lead and steer AI is an important skill. Context engineering will become the No. 1 skill in that regard, because it equips human supervisors of AI to provision information in the right way, through the right tools and workflows.”

Annan: “One capability QA teams will need is AI-powered test libraries – both tests generated by AI and libraries that are curated and maintained for high quality by AI tooling.”

Chris: “In response to any software change, you should know how to answer: What is the business risk? What optimal tests mitigate the risk? What gaps should be closed? How will those gaps be closed? Has risk been sufficiently mitigated? Can we ship?”

Scott: “Without a doubt, teams need to consider how they are using agentic AI as part of their testing processes, but agentic AI still requires the right testing solutions and agent configurations to achieve results.”

Jason: “Deciding what not to test, providing the correct context and prompts to the AI, and validating its correctness.”

Harit: “QA teams will need risk-aware, adaptive testing that can explain its decisions: AI systems that can decide what to test based on change, risk, and production signals, not just generate more scripts.”

Takeaway: The future belongs to QA teams that can supply context

Across the group, the capabilities that matter most were not about producing more output, but about improving decisions. The future belongs to teams that can supply context, target risk, explain decisions, and know when not to test. Everyone emphasized skills and systems that make the decision chain faster, clearer, and more trustworthy.
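Here is a small sketch of what Jason’s “deciding what not to test” and Harit’s risk-aware selection could look like in code. The signals, weights, and test names are all invented for illustration; real inputs would come from change impact analysis, risk assessments, and production monitoring.

```python
from dataclasses import dataclass

# Hypothetical sketch: rank tests by change proximity, business risk, and
# production signals, then run the top slice first. Weights are illustrative.

@dataclass
class TestCandidate:
    name: str
    touches_changed_code: bool   # from change impact analysis
    business_criticality: float  # 0.0-1.0, from risk assessment
    recent_prod_incidents: int   # incidents in the area this test covers

def risk_score(t: TestCandidate) -> float:
    return (
        (2.0 if t.touches_changed_code else 0.0)   # change proximity dominates
        + 1.5 * t.business_criticality
        + 0.5 * min(t.recent_prod_incidents, 5)    # cap the incident signal
    )

candidates = [
    TestCandidate("checkout_happy_path", True, 0.9, 2),
    TestCandidate("legacy_report_export", False, 0.2, 0),
    TestCandidate("payment_refund_flow", True, 0.8, 4),
]

# Highest-risk tests first; anything below a cutoff can be deferred or skipped
for t in sorted(candidates, key=risk_score, reverse=True):
    print(f"{risk_score(t):4.1f}  {t.name}")
```

The deliberate part is the cutoff: deciding what falls below it, and therefore what not to run, is exactly the judgment call the panel says stays human.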

As AI takes on more of the repetitive testing work, what uniquely human aspect of testing will become more important?

Brad: “Adversarial imagination … As systems grow more complex with AI components, the ability to imagine malicious users, unprecedented edge cases, and integration failures across systems becomes invaluable. QA teams will shift from being test executors to acting as organizational red teams. These are the team members that go beyond basic functional testing and ask the ‘what ifs’ that will help inform deeper mitigation strategies. The more AI automates the expected, the more valuable human testers become at testing the unexpected.”

Adnan: “A clear understanding of the business domain and why certain tests matter will become more important. If you know what you want, you can easily tell it to the AI (using context engineering skills that I mentioned before). Test planning and choice of test strategy will become more important than building the test.”

Annan: “Humans will always be ‘tastemakers’ in software. That is — having an intuitive feel for whether the right thing has been built and whether it solves the problem in a way the user wants.”

Chris: “Today, you need to be very knowledgeable about a specific application in order to build high quality tests. AI frees you to get closer to the business and understand the needs of business processes that span multiple applications. Combining this newly learned knowledge of business processes with your QA skills means you can drive the AI tools to build more valuable business process assets that help ship software faster — with greater quality — and drive better business outcomes.”

Jason: “The ability to decide what matters, as time will become a bottleneck, and to make decisions that AI currently cannot; for instance, ethical questions and risk-based decisions.”

Scott: “Humans will be needed to manually check the results of AI. AI provides amazing speed and productivity to software development teams, but simply accepting its results without checking is a recipe for failure.”

Harit: “Judgment under ambiguity. AI is great at execution. Humans are still the best at deciding what matters.”

Takeaway: Human judgment is the differentiator

As AI accelerates execution, human judgment is the differentiator. Everyone returned to judgment under ambiguity, domain understanding, ethical tradeoffs, product sense, and oversight. AI will do more of the work, and humans will decide what matters and why.

What is overhyped in AI testing, or, what should teams be skeptical about?

Jason, Harit, & Brad: All mentioned “fully autonomous testing” without meaningful oversight. Brad added: “True testing requires domain knowledge, understanding of user impact, and strategic judgment about what matters which requires human context that isn’t in the code. The hype suggests AI can replace test strategy; the reality is it can only accelerate test execution once strategy is defined.”

Adnan: “AI MUST be supervised and governed. For QA this means: don’t let it tell you what to do; you are the one telling the AI what to do. Thinking has never been as important as it is today.”

Annan: “Test case generation and automation script generation are table stakes. Be skeptical of tools that ‘just automate an existing process’ without rethinking the workflow.”

Scott: “Not all tests are the same, and there is a big difference between scripting and more maintainable codeless tests. Many of the test generation solutions I see on the market simply output code.”

Chris: “Do not assume that all AIs are equal. In the context of software quality assurance, there are two dimensions to be mindful of: Quality Assurance and Application Domain. For example, you can go to ChatGPT and say ‘make me a Create Sales Order test,’ but you will not get a valuable quality assurance SAP asset from this. The AI you choose to support your quality engineering efforts must be ‘enriched’ with testing specialisms and application domain expertise.”

Takeaway: No automation without (meaningful) oversight

The strongest skepticism was reserved for promises of full autonomy without oversight. The panel cautioned against reckless trust, generic AI that lacks domain enrichment, and tools that simply generate artifacts without improving decisions. Pure automation is not a strategy without governance, expertise, and explainability.

What’s the most underrated AI opportunity in testing right now?

Brad: “Impact analysis: Using AI to map the gap between what you tested and what’s actually running in production. Teams assume passing tests equals production readiness, when there’s often a massive delta between test environments and what’s actually serving customers. Also, using AI to detect when tests become obsolete or redundant.”

Adnan: “We focus a lot on building and automating test cases, but I think drawing conclusions from the results is a big opportunity. Learn from test data to find areas in the QA org where you can improve (as humans).”

Chris: “The integration of AI capabilities across the entire software development lifecycle. For example, today there is developer AI and testing AI, but they operate separately. More value will be gained from the integration of these capabilities.”

Jason: “AI as a decision-support system for quality risk. It is great at surfacing trends, correlating failures with business impact, and supporting release decisions.”

Scott: “Using AI to centrally work across multiple different testing solutions to solve bigger problems that need a more diverse set of testing capabilities. Many solutions use AI in very specific, constrained ways, such as analyzing results or creating components of a test.”

Harit: “The real opportunity is not more tests. It is better decisions informed by connecting quality signals across the lifecycle, linking requirements, risk, code changes, execution, production incidents, and customer impact into one continuous signal.”

Takeaway: Better decisions, not more tests

The panel believes the biggest opportunity that AI affords is not generating more tests faster but clearing a path towards better decision-making that improves test strategies, and ultimately, product quality. By connecting lifecycle signals, learning from history, and integrating AI across roles, teams can raise signal quality, reduce noise, and gain confidence faster.
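As one illustration of Brad’s point about detecting obsolete or redundant tests, the sketch below flags test pairs whose code coverage overlaps heavily. The coverage data and the 0.6 threshold are invented for illustration; in practice the sets would come from a coverage tool.

```python
# Hypothetical sketch: flag redundant tests via coverage overlap. Two tests
# that exercise nearly the same code blocks may be candidates for merging
# or retirement. Data and threshold are illustrative.

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two coverage sets, 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

coverage = {  # test name -> code blocks it executes (from a coverage tool)
    "login_valid_user": {"auth.py:10-40", "session.py:5-20"},
    "login_valid_admin": {"auth.py:10-40", "session.py:5-20", "roles.py:1-8"},
    "password_reset": {"auth.py:50-90", "mail.py:12-30"},
}

tests = sorted(coverage)
for i, t1 in enumerate(tests):
    for t2 in tests[i + 1:]:
        overlap = jaccard(coverage[t1], coverage[t2])
        if overlap >= 0.6:  # illustrative cutoff for "mostly the same test"
            print(f"{t1} <-> {t2}: {overlap:.0%} overlap, review for redundancy")
```

A human still makes the retirement call; the AI’s job is to surface the candidates and the evidence.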

What makes AI rollouts succeed or fail?

Brad: “Starting with a concrete pain point that has measurable business impact, and not with the technology’s capabilities. When you start with AI instead of a real problem, AI becomes a solution looking for a problem. You end up with implementations that are technically impressive but do not move business metrics. This leads to outcomes like more automation coverage that does not reduce defect escape rate, or faster test execution that does not speed up release cycles because the bottleneck was somewhere else.”

Annan: “Having a strategy, as in a clear value statement for using AI, with success or evaluation criteria. Generally, I think it’s helpful to look at your processes and workflows and decide to utilize AI piece by piece rather than as one huge transformation. AI will help in a lot of areas but may have limited value in others. Corporate policies, processes, and procedures will also impact this.”

Adnan: “The most important part is not the technology, but what you want to achieve with it. Tie initiatives to core processes and customer value so that AI earns its place.”

Chris: “AI has proliferated across the software development lifecycle. But the outcomes vary greatly, not just in different phases but also with who is using the AI. Leveling up the entire organization to get the most from AI will require close attention, as it is primarily a people-change effort.”

Scott: “A lot of the time, AI initiatives don’t deliver value because not enough thought or planning was put into what problems to solve and what success looks like. To succeed with AI, you really need a clear set of goals, a vision for how AI will help, and measurable success criteria.”

Jason: “Clear ownership of outcomes, because they fail mainly when no one really owns the quality of AI-driven decisions, success metrics are vague or changing continuously, or teams adopt AI because they are told to and not because they really believe in it.”

Takeaway: Tie initiatives to core processes and customer value

AI success is a leadership and change effort. The organizations that win define ownership, establish success criteria, sequence adoption, and upskill their people. They tie AI to business value so that progress is measurable and trust grows over time.

Conclusion: The future QA leader is a decision architect

Across every conversation, a theme emerged: AI is not replacing QA. It is redefining the skill set, elevating judgment, and demanding new forms of oversight, governance, and risk understanding. As Jason Wides noted to us, “Teams that adopt AI for risk management, insight, and decision support will become much more productive.” In other words, AI is pushing QA into a far more strategic role.

The QA leader of the future is a decision architect, not a bug detector or gatekeeper of software quality, and the teams that will win are the ones that treat AI as an engine for trusted decisions, not just faster execution.

The QA team of the future will act as a strategic development partner who designs the systems, processes, and AI-driven environments that enable data-driven decisions about software quality, proactively architecting how it is measured, assured, and embedded in the software development lifecycle.

In other words: the role of QA isn’t shrinking. It’s expanding into the discipline that ensures AI-driven software is not only fast, but trustworthy.

Learn more about how your QA job is changing, find ways to build a durable career in the age of AI, or dive into QA trends we’re seeing.

Get to know our experts

Harit Patel, Head of Product, Test Management. Harit leads one of Tricentis’ flagship products and brings deep experience across product management, customer-facing roles, and enterprise consulting. A Georgia Tech alumnus and co-inventor on 10 United States patents, he has built and scaled global, cross-functional teams and driven long-term operational change for Fortune 50 customers.

Annan Patel, Senior Director, Product Management. Annan has delivered multiple AI-driven test management capabilities at Tricentis, including AI agents, AI services, model control points, and administrative tooling. His work focuses on transforming how quality engineering teams operate.

Adnan Ćosić, Senior Product Marketing Manager. Adnan has more than 10 years of global experience in innovation and technology-focused roles. He brings a strategic perspective on how organizations adopt AI and modern QA practices.

Chris Trueman, VP, LiveCompare. Chris brings more than 20 years of experience in SAP lifecycle optimization. He specializes in using data analysis to improve development, testing, and operations for large-scale SAP environments.

Scott Erlanger, Product Marketing Director. Scott has 25 years of marketing and technical experience across multiple industries, including DevOps and Agile automation, quality engineering, semiconductors, and life sciences data.

Brad Purcell, AI Strategist. Brad brings 30 years of quality engineering experience. He is a global speaker and panelist on quality engineering and AI and advises enterprises on how to adopt AI responsibly and build organizational readiness.

Jason Wides, Senior Director, Professional Services. Jason has a background in DevOps and customer success and previously served as Chief Solutions Architect at SeaLights. He works closely with enterprises implementing AI-driven quality practices.

Author: Sarah Welsh, Sr. Content Marketing Specialist
