Tricentis’ exploratory testing guru and Agile Evangelist, Ingo Philipp, recently presented a popular webinar on transforming Agile processes with regard to exploratory testing. The webinar (which you can watch on-demand here!) sparked several follow-up questions on the applications and nuances of exploratory testing.

In response, Ingo has gathered the best of the questions and provided his candid answers here.

 

Who should be part of an exploratory testing session? Just testers?

No, not necessarily! Testing shouldn’t be regarded as an isolated, disconnected island in a company’s landscape. This is especially true in the age of Agile, where we all strive for continuous delivery (with continuous testing as a central component). Testing is a responsibility for the entire organization, because we all have the same finish line! So in our projects, we constantly try to motivate and convince people from other agile development teams (testers, developers, product owners), other departments (business, operations), and even customers (if you can access them) to participate in our exploratory testing sessions.

But the question is: How do we motivate these people? Mainly by fostering collaboration. This is not a one-way street. Whenever we receive support from other teams or departments, we give it back later on by helping them out with their testing issues and joining their testing sessions. The reason we do all of that is diversification. That’s one important key to testing success.

One last thought: We organize “blitz test sessions” to facilitate diversification in testing. In these sessions, our testers invite colleagues (e.g. developers) from all over the company to participate in a time-boxed test session (such as 45 minutes). The more people, the better. At the end, the best tester (decided by a jury) gets a cool “blitz test winner” shirt. This way, testing becomes a bit of a competition, where everybody wants to be the one who finds the most issues.

 

We’ve tried some exploratory testing tools that are just glorified screen capture tools. Is there anything else that can help with the complete process?

Sure, there is. What most of the tools out there are missing is comprehensive support for session-based test management and the capability to automatically record your test actions across different technologies and platforms. In most cases, tools flagged as “exploratory testing tools” are not much more than screen or video capture tools, so (as you pointed out) they fail to enable the user to walk through the entire process of exploratory testing.

With this in mind, we at Tricentis developed our own tool to fill these gaps. We built it because we needed it ourselves; now it is available to anyone. The tool enables you to perform fully-featured session-based testing. You can invite multiple testers, write charters, define and assign testing tasks, and time-box sessions. Then you can easily execute your sessions. The tool records your test actions automatically, with annotated screenshots and videos, for any Windows technology such as HTML, Java, DotNet, SAP, etc. Finally, you can monitor the session status and share detailed test results to facilitate collaboration and fast feedback.

That’s (in a nutshell) what Tricentis Exploratory Testing can do for you. So, have a look at it! This tool comes with our enterprise solution (it’s called Tricentis Tosca), but is also available as a native add-on for Atlassian JIRA. If you are interested, let us know. We are happy to help.

 

You outlined possible session setups, and which decisions need to be made to get started. That sounds pretty exhaustive. How long does it take you to prepare all that, how often do you run these sessions, and how do you pair this with regression testing?

Oh, no. It’s by no means exhaustive. It takes us about 10 minutes to set up a session, i.e. to walk through the recipe of the seven questions I proposed. The whole point of exploratory testing, in any fast-paced development, is lightweight planning. We don’t want to spend hours planning our next exploratory testing endeavors. That has to happen in a matter of minutes.

Regarding your second question: How often do we run the sessions? In most of our projects, we align automated regression testing with structured session-based exploratory testing. On average, this happens about two to three times a week. This means that whenever the agile team sees the need to execute the regression test portfolio via formal test automation, we set up, execute, and review one or more exploratory testing sessions. In these sessions, the goal is to have a broad variety of people from testing, development, business, and operations participating.

Of course, you should do exploratory testing in the meantime. Never stop doing it! But these sessions are usually composed of just two people. For example, we perform “pairing sessions”, where a developer and a tester are paired. The developer does the actual testing, and the tester gives tips and tricks on how to become more effective. This helps the tester get a better understanding of the developer’s mindset. And then you switch! After this education, the pair starts writing a test recipe for a feature, telling others what the testers and developers recommend testing, and why. This also helps turn your developers into testers and vice versa. We don’t really need an army of testers in our company.

Another type of session we continuously carry out is the “split session”. In a split session, a developer and a tester (for example) test the same feature at the same time, but they do it separately. After the session (e.g. 45 minutes), the two come together and discuss what they found. This way, the testing becomes a bit of a competition where everybody wants to find the most issues, and the developer can become better by learning from the tester, and the other way around. That’s how we roll with exploratory testing.

 

How do you measure the success of exploratory testing?

That’s a tough one. I would say consistency is the key. Don’t expect perfect results immediately. It’s a journey, especially when the concept of exploratory testing is new to you. It’s like going to the gym. You go to the gym to shape your body, but it has no effect if you go to the gym once for 12 hours. You need to work out every single day for about 20 minutes to succeed. It’s the same with exploratory testing. I am convinced that long-term consistency beats short-term intensity. In fact, I believe that intensity is born from consistency. So, my advice to you is: Do it right, then do it fast! Then you will see awesome results. Now then, what are these awesome results?

Monitor your production defects. Monitor how the number of defects in production evolves over time. Does it decrease when you perform exploratory testing (assuming that your formal testing approach remains the same)? If not, it might be because you didn’t find the right balance between the effort you expend on exploratory testing and on formal testing. So adjust it and monitor again. Or maybe you are using the wrong technique, or using the right technique in the wrong way, to translate exploratory testing into practice. Again, adjust it, and monitor how the number of production defects responds.
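To make that concrete, here is a minimal sketch in Python (the numbers and names are ours, purely for illustration) of the kind of trend check this implies: compare production defect counts per release before and after exploratory testing is introduced, with the formal testing approach held constant.

```python
# Minimal sketch with made-up numbers: track how production defects per
# release evolve once exploratory testing is added, while the formal
# testing approach stays the same.
defects_per_release = {
    "before_exploratory": [14, 11, 13, 12],  # hypothetical defect counts
    "after_exploratory": [9, 10, 8, 7],
}

def average(counts):
    return sum(counts) / len(counts)

before = average(defects_per_release["before_exploratory"])
after = average(defects_per_release["after_exploratory"])
print(f"avg production defects per release: before={before:.1f}, after={after:.1f}")
print(f"relative change: {100 * (after - before) / before:+.1f}%")
```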

Another measure of success is defect severity. This gives you a measure of how well your testers understand the product. You don’t care if your testers find gazillions of cosmetic defects; you want them to detect the ones that really matter, the ones stopping your customers from achieving their goals. The critical defects! In most cases these are the most obvious defects, but they are not necessarily obvious to the testers. If you see that happening, reserve more time for learning. Give your testers the chance to talk to developers, product owners, or even customers. Give them the chance to improve their skills, and be patient. In the best case, your testers also use the product for their own purposes. The goal is for your testers to become stakeholders themselves.

Another measure is defect variety. Do you find only functional issues during the testing phase, while a lot of usability issues are filed in production? This would imply that you didn’t manage to diversify your exploratory testing well enough. So, diversify and change direction.

You could also measure what we call the formal testing leak. This quantity measures how many of the defects you found would have been missed by formal testing (e.g. test automation, scripted manual testing). One way to measure it is to apply a kind of A/B testing approach to testing itself: one team (A) focuses only on formal testing, and the other team (B) only on exploratory testing, at least for a limited amount of time, e.g. two sprints.
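There is no prescribed formula for this, but as a rough illustration (with hypothetical defect IDs and our own definition of the ratio), the leak from such an A/B setup could be quantified like this:

```python
# Minimal sketch with hypothetical defect IDs: estimate the "formal testing
# leak" from an A/B setup where team A runs only formal testing and team B
# runs only exploratory testing over the same scope (e.g. two sprints).
formal_defects = {"D-101", "D-102", "D-105", "D-108"}                # team A
exploratory_defects = {"D-102", "D-103", "D-104", "D-108", "D-110"}  # team B

leaked = exploratory_defects - formal_defects   # missed by formal testing
all_defects = formal_defects | exploratory_defects

print(f"formal testing leak: {len(leaked)} of {len(all_defects)} defects "
      f"({len(leaked) / len(all_defects):.0%}) found only by exploratory testing")
```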

In addition, you could also measure the mean-time-to-feedback: how long (minutes, hours, days) does it take you to provide feedback to your developers? I am not saying that this is easy to measure, but it’s worth measuring. Maybe you could ask your developers to support you in monitoring this quantity.
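If you do want to put a number on it, a minimal sketch could look like the following (the timestamps and the exact definition of “feedback delivered” are assumptions of ours):

```python
# Minimal sketch with hypothetical timestamps: mean-time-to-feedback, taken
# here as the average time from a change being finished to the team
# receiving test feedback on it.
from datetime import datetime
from statistics import mean

feedback_events = [
    # (change finished, feedback delivered)
    (datetime(2017, 5, 2, 10, 0), datetime(2017, 5, 2, 14, 30)),
    (datetime(2017, 5, 3, 9, 15), datetime(2017, 5, 4, 11, 0)),
    (datetime(2017, 5, 4, 16, 0), datetime(2017, 5, 5, 9, 45)),
]

hours = [(feedback - finished).total_seconds() / 3600
         for finished, feedback in feedback_events]
print(f"mean time to feedback: {mean(hours):.1f} hours")
```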

 

Are there any good alternatives to session-based testing?

The short answer is: No, not really! James Bach once proposed a technique called “thread-based testing” as an alternative to session-based testing, but at its core it’s just a generalized form of session-based testing. In thread-based testing, a session is a form of thread, but a thread is not necessarily a session. In session-based testing, you test in uninterrupted blocks of time that each have a charter. A charter is a mission for the testers in a session; a light sort of commitment, or contract.

A thread, on the other hand, may be interrupted, may go on indefinitely, and so does not imply a commitment. Session-based testing can be seen as an extension of thread-based testing for situations that demand particularly high accountability and more order. A thread can be seen as a set of one or more activities intended to solve a problem or achieve an objective. You could think of a thread as a very small project within a project.

We approach exploratory testing via session-based testing. Session-based testing makes exploratory testing applicable to large-scale implementations, e.g. when multiple agile teams are involved in your development process. Because of its structured character, session-based testing is the ideal technique to make exploratory testing management-compatible. This can be attributed to its core object: The session.

A session identifies a starting point for exploratory testing. Session-based testing provides flexibility and freedom for the tester to choose what and how to test, while providing structure to guide the testers in their exploration. For that reason, session-based testing is accessible to skilled and unskilled testers alike.
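To make the session tangible as an object, here is a minimal sketch (in Python, with field names that are our own assumptions rather than any particular tool’s schema) of what a session record typically captures: a charter, a tester, a timebox, and the notes and defects that come out of it.

```python
# Minimal sketch of what a session record might capture; field names are
# illustrative assumptions, not the schema of any particular tool.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Session:
    charter: str                  # the mission: what to explore, and why
    tester: str                   # who runs the session
    timebox_minutes: int = 90     # the uninterrupted block of time
    notes: List[str] = field(default_factory=list)    # observations along the way
    defects: List[str] = field(default_factory=list)  # IDs of issues filed

session = Session(
    charter="Explore the invoice export with unusual currencies and locales",
    tester="Alex",
    timebox_minutes=60,
)
session.notes.append("Rounding differs between the PDF and CSV exports")
session.defects.append("D-201")
print(session.charter, session.defects)
```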

 

What are the skills a tester should have to do exploratory testing in an agile environment?

I would say that any tester needs to be cautious, curious, and critical. This is what makes a great tester. This is what makes a tester hard to fool, and that’s crucial, since the software product is constantly trying to fool us anywhere, at any time, in any way. So, from my point of view, great exploratory testers have the ability to…

Pose useful questions.

Describe and understand what they observe.

Interpret what they find.

They also need to be able to…

Draw the right conclusions fast.

Think critically about what they know.

Know they will never know everything.

Keep thinking despite already knowing.

In addition, a tester in an agile environment needs to…

Focus on things they don’t know, since that is what exploratory testing is about.

Target specific issues without losing focus.

Manage bias.

Form and test conjectures by analyzing someone else’s thinking.

So, at the end of the day, the most important ability of agile testers is the ability to learn through their imagination and to create new test ideas based on past experience.

 

What is a suitable duration for an exploratory testing session?

First, let’s briefly outline the reason why you should timebox your sessions. From my experience, session timeboxing really helps to curb perfectionist testers’ tendencies and to keep a tester from overcommitting to a single testing task, i.e. digging deeper and deeper into the same hole.

It also helps the session owner distribute the testing workload to the testers in smaller chunks, which leads to better creativity and higher motivation. Ideally, the session timebox ranges from 30 minutes to 2 hours. So, I wouldn’t plan sessions that are shorter than 30 minutes or longer than 2 hours.

Why not less than 30 minutes? Well, you must reserve some time for test preparation (e.g. setting up test environments, configuring applications), and you must reserve some time for reporting errors. The effective testing time will only be about 50% to 70% of the total time available, so even a full 30-minute session leaves just about 15 to 20 minutes of actual testing; anything shorter would simply be too little.
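As a quick back-of-the-envelope illustration of that rule of thumb (the session lengths below are arbitrary examples of ours):

```python
# Quick illustration, using the 50-70% rule of thumb from above, of how much
# effective testing time different session lengths actually leave.
for total_minutes in (20, 30, 60, 90, 120):
    low, high = 0.5 * total_minutes, 0.7 * total_minutes
    print(f"{total_minutes:3d} min session -> {low:.0f} to {high:.0f} min of actual testing")
```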

Why not more than 2 hours? Well, during a session you should focus exclusively on testing, so there shouldn’t be any e-mails, chats, or meetings in between. There shouldn’t be anything (not even toilet breaks) interrupting your exploration; otherwise, the impact of your testing would dramatically decrease. I consistently see attention to detail dropping (almost exponentially) after about two hours of intense testing. So 2 hours is a wise time limit.
