
SeaLights MCP: Enabling AI-augmented testing at scale

Tricentis SeaLights MCP enables AI-driven QA workflows, enhancing test coverage, automation, and release confidence at scale.

Sep. 02, 2025
Author: Tricentis Staff

As modern software delivery processes accelerate, QA teams inevitably feel pressure to maintain quality at scale. SeaLights addresses this challenge by providing deep visibility into test coverage across every stage of the software development lifecycle. It helps teams identify untested code changes, optimize execution, and deliver confidence at release time.

Now, with the release of the Model Context Protocol (MCP) from Tricentis, SeaLights users can integrate its capabilities directly into AI workflows built on large language models (LLMs) and other AI agents. This update introduces a programmatic interface for intelligent, autonomous agents, offering a more flexible, scalable, and automated way to extract insight from SeaLights.

What is MCP?

MCP (Model Context Protocol) is a standardized remote API layer designed for AI systems. It allows LLMs to discover and invoke backend tool functionality as a structured set of “tools,” without needing prior schema knowledge or hardcoded API contracts. In effect, it positions Tricentis products, including SeaLights, as services that can be used in LLM-based automation or interactive agents.

Jason Wides, Tricentis SeaLights Chief Solution Architect, explains it like this: “MCP is an API-driven system that gives AI models access to a set of tools. Each tool is defined with a name, description, functionality, and a clear input/output schema. When an AI connects to the MCP, it can explore what tools are available, understand what each one does, and use them — all through natural-language prompts.”

MCP builds on the function-calling capabilities introduced in modern LLMs (such as OpenAI’s tool-calling feature), where the AI can request execution of structured functions in real time to retrieve external data or perform operations it wasn’t trained on.
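As a concrete illustration, here is a minimal Python sketch of that function-calling loop: the model is given a tool schema, emits a structured call, and the host executes it. The tool name, input schema, and returned data below are hypothetical stand-ins, not actual SeaLights MCP tools.

```python
import json

# Hypothetical tool definition in the style used by LLM function calling:
# a name, a description, and a JSON Schema for the inputs.
GET_COVERAGE_TOOL = {
    "name": "get_coverage_for_build",
    "description": "Return test coverage data for a given build.",
    "input_schema": {
        "type": "object",
        "properties": {
            "app_name": {"type": "string"},
            "build_id": {"type": "string"},
        },
        "required": ["app_name", "build_id"],
    },
}

def execute_tool(call: dict) -> str:
    """Dispatch a structured tool call emitted by the model."""
    if call["name"] == "get_coverage_for_build":
        args = call["arguments"]
        # A real implementation would query the SeaLights backend here;
        # this stub returns a canned result for illustration.
        result = {"app": args["app_name"], "build": args["build_id"], "coverage": 72.4}
        return json.dumps(result)
    raise ValueError(f"unknown tool: {call['name']}")

# Given the schema above, the model might emit a call like this,
# which the host then executes and feeds back into the conversation:
model_call = {"name": "get_coverage_for_build",
              "arguments": {"app_name": "checkout-service", "build_id": "1.4.2"}}
print(execute_tool(model_call))
```

MCP standardizes this pattern across services: instead of each team hand-wiring schemas like the one above, the protocol lets an agent discover them at connection time.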

Why MCP now? A SeaLights perspective

This protocol arrives at a critical inflection point for QA teams.

Bar Kofman, VP Marketing at SeaLights, adds: “Release velocity is increasing. Application complexity is growing. At the same time, testing teams cannot afford to compromise on quality. MCP helps reduce the manual overhead of gathering data from multiple sources (APIs, dashboards, test management tools) and allows AI agents to orchestrate this work automatically.”

“MCP allows AI tools to interact with SeaLights through prompts instead of hard-coded API calls. It simplifies integration for teams building their own AI-driven test tooling.”

The SeaLights MCP exposes the same capabilities as the SeaLights public API, but in a format that AI systems can introspect and use dynamically. DevOps teams can prompt an AI assistant to retrieve granular coverage data for modified code, identify tests to skip, or generate intelligent test execution plans, all without navigating the SeaLights UI or scripting API calls manually.
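To make the test-skipping idea concrete, here is a minimal sketch of the kind of decision such an agent could automate, assuming it has already fetched (via hypothetical MCP tools) a per-test coverage map and the list of files modified in the current change. The data and selection rule are illustrative, not SeaLights’ actual algorithm.

```python
# Illustrative coverage data: which source files each test exercises.
coverage_map = {
    "test_checkout": ["cart.py", "payment.py"],
    "test_login": ["auth.py"],
    "test_search": ["search.py", "index.py"],
}
changed_files = {"payment.py"}

def select_tests(coverage_map: dict, changed_files: set) -> list:
    """Keep only tests that exercise at least one modified file."""
    return sorted(t for t, files in coverage_map.items()
                  if changed_files.intersection(files))

print(select_tests(coverage_map, changed_files))  # ['test_checkout']
```

In this toy run, only `test_checkout` touches the changed file, so the other tests can be skipped for this change; an agent would feed real coverage data into the same kind of decision.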

Managing tool complexity in high-fidelity APIs

One challenge in adapting MCP for SeaLights lies in the breadth of its existing API. SeaLights includes nearly 100 distinct endpoints—each of which must be represented as a separate tool in the MCP schema. This volume of tools introduces potential issues when used with certain LLMs, including token limits and performance degradation.

To mitigate this, the SeaLights team introduced two optimizations:

  1. Selective tool exposure – Administrators can configure which tools are included in the MCP manifest at load time, allowing teams to reduce surface area based on specific use cases.
  2. Dynamic tool filtering with SeaLights AI – When used in combination with Tricentis’ Qtonomous platform, MCP toolsets are filtered dynamically at query time based on prompt context, reducing overhead and ensuring efficient LLM interactions.

“Instead of sending the full toolset on every call,” explains Wides, “SeaLights MCP evaluates the intent and includes only the relevant tools. This minimizes token usage, improves performance, and avoids LLM limitations.”
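A rough sketch of that filtering idea follows, using a naive keyword match in place of real intent evaluation. The tool names and descriptions are illustrative, and the actual SeaLights MCP filtering logic is not public.

```python
# Hypothetical tool manifest: in practice this would hold ~100 entries.
MANIFEST = [
    {"name": "get_coverage_for_build",
     "description": "test coverage for a build"},
    {"name": "list_untested_changes",
     "description": "code changes with no test coverage"},
    {"name": "get_quality_gate_status",
     "description": "quality gate pass or fail status"},
]

def filter_tools(prompt: str, manifest: list) -> list:
    """Naive keyword overlap standing in for intent classification:
    include a tool only if its description shares a word with the prompt."""
    words = set(prompt.lower().split())
    return [t for t in manifest
            if words & set(t["description"].lower().split())]

selected = filter_tools("show me coverage for the latest build", MANIFEST)
print([t["name"] for t in selected])
```

Only the coverage-related tools survive this prompt, so the quality-gate tool never consumes tokens in the model’s context. A production system would use an embedding or classifier rather than word overlap, but the token-saving effect is the same.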

Seamless integration and deployment

The SeaLights MCP is designed to be secure, extensible, and deployable within customer environments. It supports all standard MCP deployment modes defined by the protocol, including local installation to reduce exposure risk and accommodate internal AI tooling strategies.

Wides emphasizes this flexibility:

“Customers building their own AI assistants or agent workflows can integrate SeaLights simply by registering our MCP. There’s no additional cost, and no need to rewrite or extend the existing API stack.”

This makes the SeaLights MCP particularly useful for enterprises pursuing AI-assisted testing at scale or embedding SeaLights data into broader quality engineering pipelines.

In fact, the introduction of MCP into SeaLights signals a broader shift toward intelligent, AI-augmented testing infrastructure from Tricentis. It is part of our constant push forward in artificial intelligence with agentic test automation and remote MCPs. With structured access to SeaLights insights via natural-language interfaces and autonomous agents, teams gain faster decision-making, more accurate test targeting, and reduced manual overhead.

Want to learn more about Tricentis SeaLights?

Watch our demo video to see how SeaLights helps QA teams optimize test coverage and release with confidence.

Or schedule a demo with a Tricentis expert to learn how SeaLights fits into your testing strategy.

Author:

Tricentis Staff

Various contributors
