In the words of C.S. Lewis, I’m not a tame lion. I was surprised when Tricentis recruited me. I was surprised they didn’t fire me the first week, after I revealed my heuristic and humanistic vision of the future of test tools. But now that I’ve been on the job a few months, I’ve come to believe that Grigori Melnik is serious about innovating in ways that create more powerful testers. He said he was, but you know, talk is cheap and change is hard. In fact, I find myself surrounded by friendly yet tough-minded people who are up for this challenge. It’s been a long time, maybe since my Borland days, since I’ve had the pleasure of rolling with such teammates.
Tricentis has asked me to blog periodically about some of the ideas we are working through (how we see the testing landscape and tester culture, how we are reconceiving the ontology of testing practice…).
This first post is about what I call five grand domains of technical work, and how they relate to testing and test tools. I believe that identifying domains of technical work is necessary “prework” for better serving the needs of serious testers.
I see you rolling your eyes. How can I claim to define and declare the domains of “technical work”? Such a vague idea can be modeled, sliced, and presented in countless ways! I agree. But, the more appropriate title would be much too long: It would be something like, “James Bach’s Proposal for One Interesting Way, Among Many, for Breaking Down the Experience of Technical Work into Qualitatively Distinct and Heuristically Powerful Aspects.” Let’s just go with the shorter one.
Before I present my proposal for the domains of technical work, let me acknowledge some other ways that technical work could be decomposed. We could slice things…
- By industry, since technical work in oil and gas exploration is obviously different from technical work in a manufacturing facility.
- By business dynamics, since technical work in a regulated space such as medical devices is much different from designing video games.
- By the nature of the project, since a greenfield situation is quite different from maintaining old code with a long-established user base.
- By role, since the work of developers is distinct from that of testers or managers.
I propose slicing by the kinds of things technical people actually do, since that seems relevant to designing better tools. I see five kinds of technical activity:
Social: People relating to people
The key idea of the social domain is communication and collaboration. We build relationships and learn to trust each other. We experience conflict and we work through it. We need test tools that foster – or at least don’t inhibit – the social dynamics of testing. But inhibition occurs when our tools assume that everything worth managing is rendered in the form of text or code. When approaching test tool design, the designers should be thinking about multiple people in front of the screen, talking through what they see. They should be thinking about displays that work well when shared in a Zoom session, and how testing might be facilitated by different people editing or interacting with the same test artifacts at the same time, while chatting over audio.
Analytical: People relating to problems
The key idea of the analytical domain is finding clever answers to interesting problems. We use models, mathematics, and other heuristics. We focus on risk rather than trying to do everything – which means we must know how to think about risk. Analysis usually involves learning and exploring. We need test tools that help us learn about the product, identify testable elements, and design better tests. Anyone can perform shallow testing, which is the kind that finds the obvious bugs. Performing deep testing, which finds elusive bugs, generally requires a strong analytical approach. For instance, we may need to find the smallest set of paths that will visit every interesting pair of state transitions in the product at least once. The right tool can allow us to do that with a fraction of the effort it would take to work it out on paper.
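To make the path-coverage idea concrete, here is a minimal sketch of one way such a tool could work. It is not any particular Tricentis feature, just an illustration: it models the product as a set of state transitions, enumerates every pair of consecutive transitions, and greedily builds walks from a start state until all reachable pairs are covered. The state names and the greedy strategy are my own assumptions for the example.

```python
from collections import defaultdict

def transition_pairs(edges):
    """Return all pairs of transitions (a->b, b->c) sharing a middle state,
    plus an adjacency map of outgoing transitions per state."""
    out = defaultdict(list)
    for a, b in edges:
        out[a].append((a, b))
    pairs = set()
    for a, b in edges:
        for nxt in out[b]:
            pairs.add(((a, b), nxt))
    return pairs, out

def greedy_pair_paths(edges, start, max_len=20):
    """Greedily build walks from `start` until every reachable
    transition pair has been visited at least once."""
    pairs, out = transition_pairs(edges)
    uncovered = set(pairs)
    paths = []
    while uncovered:
        before = len(uncovered)
        path, prev = [start], None
        for _ in range(max_len):
            options = out.get(path[-1], [])
            if not options:
                break
            # Prefer a transition that covers a new pair with the previous one.
            edge = max(options, key=lambda e: (prev, e) in uncovered)
            uncovered.discard((prev, edge))
            path.append(edge[1])
            prev = edge
        paths.append(path)
        if len(uncovered) == before:
            break  # remaining pairs are unreachable from the start state
    return paths

# Hypothetical mini state machine for a website session:
edges = [("login", "home"), ("home", "search"),
         ("search", "home"), ("home", "logout")]
for p in greedy_pair_paths(edges, "login"):
    print(" -> ".join(p))
```

A real tool would add weighting for "interesting" pairs and smarter path minimization, but even this naive greedy pass turns hours of paper-and-pencil work into milliseconds.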
Technical: People relating to technology
The key idea of the technical domain is to make technology do things for us. Among other things, it includes writing code and configuring servers. In the world of test tools, the technical domain is mostly about connecting to the application under test using technology instead of direct human interaction. That usually means writing and maintaining automated checks, so we need good tools for that. While a lot of automation facilitates shallow, broad testing, the technical domain also enables deep testing. For example, a special test fixture may be needed to give access to hidden controls or hidden states.
Administrative: People relating to projects
The key idea of the administrative domain is staying in business by getting the job done. The social people love conversation; the analytical people love finding the perfect solution; the technical people love writing code. Administrative people love crossing things off of “to do” lists. We need tools that help us track the progress of testing, and can also roll up progress reports from different teams that might work in different ways.
Customer: People relating to customers
The key idea of the customer domain is staying in business by making customers happy. This is not just a matter of thinking about what customers might want. We may need to study customers and collect real customer data. We may need to monitor the product as it is used in the field, after we release it. We may need to hire subject-matter experts to work in our teams. But despite all we may do to test deeply in the lab, we will generally find that users in a natural context behave in ways we didn’t anticipate, as well as use data we didn’t imagine.
Operating (and testing) across these domains
We all operate, to some degree, in each of these domains, but many of us are more comfortable in one or two of them. I’m most comfortable in the technical and analytical domains, for instance. I struggle in the administrative domain (I would prefer not to finish anything).
It seems to me that most tools offered to aid testing serve just two domains: technical and administrative. There are lots of tools that make software drive software. I call those user simulation tools. That’s technical-domain stuff. And there are tools that let you write test cases and then track whether you ran them (answering “are we done yet?”). That’s administrative.
The tools I write for myself are different. For example, I write tools that:
- Generate test scenarios from multiple simultaneous bots interacting with each other to simulate how multiple users might interact with the website
- Take a set of test results and repackage it as a spreadsheet with critical events and values highlighted in different colors for ease of analysis
- Take a set of flowcharts and find a small set of cases that will cover all their basis paths
- Take a Tricentis Tosca recording and make a product coverage outline from it
- Use a Monte Carlo simulation to estimate the website load caused by a given number of users
These are all analytical domain tools. They help me design tests. They help me conceive of the experiments I want to perform on the product. Part of what I want to do at Tricentis is to bring a lot more focus to the analytical domain.
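The last tool in that list can be sketched briefly. This is not the actual tool I wrote, just a hedged illustration of the idea: assume users arrive uniformly over an hour and session lengths are roughly exponential (both are my assumptions for the example), then run many trials and report a typical peak of concurrent sessions.

```python
import random

def estimate_peak_load(num_users, horizon_s=3600, mean_session_s=300,
                       trials=200, seed=1):
    """Monte Carlo estimate of peak concurrent sessions.

    Hypothetical model: arrivals are uniform over `horizon_s` seconds,
    session durations are exponentially distributed with mean
    `mean_session_s`. Returns the median peak across `trials` runs.
    """
    rng = random.Random(seed)
    peaks = []
    for _ in range(trials):
        events = []  # (+1 at session start, -1 at session end)
        for _ in range(num_users):
            start = rng.uniform(0, horizon_s)
            end = start + rng.expovariate(1.0 / mean_session_s)
            events.append((start, 1))
            events.append((end, -1))
        events.sort()
        load = peak = 0
        for _, delta in events:
            load += delta
            peak = max(peak, load)
        peaks.append(peak)
    peaks.sort()
    return peaks[len(peaks) // 2]

print(estimate_peak_load(1000))  # e.g. ~100 concurrent sessions
```

The point is not the specific distributions, which you would calibrate against real field data, but that a few lines of simulation give a defensible load estimate before anyone writes a single performance script.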
What about the social domain? You might think that Jira and Slack are already serving the social needs of the tester (the social needs that can be served by tools, at least). But I think there’s a lot of unfulfilled need in the marketplace. I’d like us to innovate in the area of visual test strategies, for example. Let’s help testers communicate their work with smart mindmaps and other graphical depictions of complex systems. Then let’s tie those in with other forms of test documentation, such as traditional test plans, or more exotic things like video.
The bottom line
Here is a simple way to think about what I want to bring to Tricentis: I want tools that make testers feel powerful, not exhausted. I want tools to set testers free, not lock them up.
And yes, I know. Talk is cheap.
James Bach is a consulting software tester and Technical Fellow at Tricentis. He is also the founder and CEO of Satisfice, Inc., a software testing consultancy. James has been in the tech field as developer, tester, test manager, and consultant for 38 years. He is a founder of the Context-Driven school of testing, a charter member of the Association for Software Testing, the creator of Rapid Software Testing methodology and Session-based Test Management. He is also the author of two books: Lessons Learned in Software Testing and Secrets of a Buccaneer-Scholar: How Self-Education and the Pursuit of Passion Can Lead to a Lifetime of Success. For more about his work and online courses see https://www.satisfice.com/.