Author:

Tricentis Staff

Various contributors

Date: Jan. 09, 2019

Goal: Elevate static image-based control recognition to dynamic pattern-based control recognition in order to make test automation more resilient to changes.

The days of static image-based control recognition are numbered. In Tricentis Tosca, controls (e.g. buttons) can be identified by image patterns that are extracted by interpreting the pixels on the graphical user interface via edge detection. This process is known as pixel-based reverse engineering. The image pattern defines the control type, while the control itself is identified by the combination of an image pattern and its content (e.g. text), which is extracted through optical character recognition methods (e.g. Google Tesseract). Each control also knows its context on the graphical interface via anchor identification. Finally, the control properties are derived from the image pattern and, in turn, used to constrain how certain controls (e.g. scrollbars) are automated during test execution.
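
To make the idea concrete, here is a rough Python sketch of what pixel-based reverse engineering can look like in principle: edge detection proposes control-like regions, and OCR reads their content. It uses OpenCV and Tesseract as stand-ins and is only an illustration of the technique described above, not Tosca's actual implementation; all function and variable names are our own.

```python
# Illustrative sketch of pixel-based reverse engineering: detect control-like
# regions via edge detection, then read their visible content with OCR.
import cv2                # OpenCV for edge detection and contour extraction
import pytesseract        # Python wrapper around the Tesseract OCR engine

def extract_controls(screenshot_path):
    """Return candidate controls as (bounding_box, text) pairs."""
    image = cv2.imread(screenshot_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Edge detection: pixel gradients outline rectangular UI elements.
    edges = cv2.Canny(gray, threshold1=50, threshold2=150)

    # Contours over the edge map yield candidate control regions.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    controls = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w < 20 or h < 10:          # skip regions too small to be a control
            continue
        # OCR on the cropped region extracts the control's visible content.
        text = pytesseract.image_to_string(image[y:y + h, x:x + w]).strip()
        controls.append(((x, y, w, h), text))
    return controls
```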

Pattern-based control recognition is therefore much more flexible: identifying controls during test execution becomes completely independent of control properties such as size and color. This elevates static image-based recognition to dynamic image-pattern recognition, making test automation more resilient to change by orders of magnitude.
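
The practical consequence is that a test step can address a control purely by its pattern type, content, and anchor rather than by coordinates or exact pixels. The minimal sketch below (illustrative names only, not Tosca's API) shows such an addressing scheme, matched against the (bounding box, text) pairs a scan like the one sketched above would produce.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass(frozen=True)
class ControlPattern:
    control_type: str             # e.g. "Button" -- derived from the image pattern
    content: str                  # e.g. "Submit" -- extracted via OCR
    anchor: Optional[str] = None  # e.g. a nearby label that anchors the control on screen

def find_control(scanned_controls: List[Tuple[Tuple[int, int, int, int], str]],
                 pattern: ControlPattern):
    """Resolve a pattern to a bounding box at execution time, ignoring size and color."""
    for box, text in scanned_controls:
        if text.strip().lower() == pattern.content.lower():
            return box  # location is resolved per execution, so layout changes are tolerated
    return None
```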

This pattern-based approach to control recognition is already available in Tricentis Tosca as Any UI Engine 3.0. The engine can scan any user interface on your desktop machines or mobile devices without having to be told how to hook into the underlying technologies. This means that hassles with application access, platform dependence, and technical customization are all a thing of the past. One engine for all user interfaces. One engine that works cross-technology, cross-device, and cross-platform.

What’s Next?

The opera ain’t over until the fat lady sings, and so there is more to come. For now, the control type must be defined manually by the user, who also has to teach Any UI Engine 3.0 new control types during the scan process. Soon, this won’t be necessary. Our goal is to use machine learning (e.g. convolutional neural networks) to learn control patterns automatically during test execution and remove these manual tasks. This means your existing test case portfolio can be used to train the network. We will also provide a pre-trained neural network that already knows the patterns of standard controls (e.g. button, table, edit box, check box) for a variety of technologies, so you can exploit the full power of the engine instantly.
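
As a rough indication of what such a convolutional neural network could look like, here is a minimal PyTorch sketch that classifies cropped control images into the standard control types mentioned above. The architecture, input size, and class list are assumptions for illustration only; they do not describe the network Tricentis plans to ship.

```python
# Minimal sketch of a CNN that maps a cropped control image to a control type.
import torch
import torch.nn as nn

CONTROL_TYPES = ["button", "table", "edit box", "check box"]  # standard controls from the article

class ControlTypeCNN(nn.Module):
    def __init__(self, num_classes=len(CONTROL_TYPES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Assumes 64x64 input crops -> 32 channels of 16x16 feature maps.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, start_dim=1))

# Usage: classify a batch of control crops (random tensors here as placeholders).
model = ControlTypeCNN()
crops = torch.rand(4, 3, 64, 64)               # four 64x64 RGB control images
predicted = model(crops).argmax(dim=1)         # indices into CONTROL_TYPES
print([CONTROL_TYPES[i] for i in predicted])
```

In practice, such a network would be trained on labeled control crops, which is where an existing test case portfolio could supply the training data the article refers to.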

