Michael Bolton likes hats. Look up a photo of the software testing giant and likely the first one you’ll find features a jaunty fedora-style hat. When I first met Michael, he was wearing another hat – a bright red baseball cap riffing on the Trump campaign slogan, “Make America Great Again.” His version: “Make Critical Thinking Great Again.”

This message is core to Michael’s testing philosophy. Though he and his Rapid Software Testing co-creator, James Bach, work within the IT sector, their message hinges on the concept of critical thinking – applied to all areas of life. That thinking manifests in pointed questions, philosophical deep-dives, and a refusal to accept the status quo. It also means that an interview on DevOps adoption can quickly take an unexpected turn…

Chelsea: Let’s start with the fundamentals. What is DevOps, and what do you think about it?

Michael: I’m a little cranky about DevOps. I’m bemused by it. Here’s what DevOps is, as it seems to me: Developers and Infrastructure/Operations people should work closely together, in order to provide services and help accomplish tasks and fulfill needs for the business. That is what DevOps is all about, right?

Well, that’s great, but my question is: How screwed up, exactly, did things need to get before that started sounding like a new, innovative, radical idea?

People are now talking about DevOps with the same sort of breathlessness that we used to hear about Agile Software Development, but with Agile there was some meat on the bones: really clearly stated principles. DevOps feels more like a marketing brand to me. I mean, it’s a fine idea, but it’s not new. It’s simply what we should have been doing in IT all along: no silos, no fortresses, no prisons, everyone working together to help people get stuff done.

Chelsea: Could you argue that DevOps is just the cultural manifestation of Agile? That is, that DevOps is the necessary outcome of Agile requirements, because it creates a culture of communication and cooperation that enables Agile.

Michael: Requirements aren’t Agile.  The way people respond to requirements might be agile.  Agility is about recovering your balance quickly when change comes along and destabilizes things.

Yes, it’s true that communication and cooperation enable agility, but I don’t see how that’s a new or particularly enlightening idea. How does slapping the “DevOps” label on it help that?

In the Manifesto for Agile Software Development, what was the first stated value? Individuals and interactions over processes and tools. It feels to me like the talk about DevOps is mostly around a fascination with processes and tools—continuous integration and continuous deployment and continuous delivery, right?  I don’t hear a lot about the human stuff in the Agile principles:  face-to-face conversation, motivated individuals, reflecting on how to become more effective, trust… And I don’t hear a lot about the idea of sustainable pace. People mistakenly believe that agile is about speed, but it’s really about responding to change in a sustainable way.  Sustainable pace is important because speed can get us somewhere, but excessive speed, reckless speed, can kill.  Sustainable development seems to be one of the first things that gets dropped in a lot of so-called “Agile” environments.

Is there really business value in deploying a product dozens of times a day, or is that a fetish dreamed up by technophiles? Can we comprehend a product that we’re building at a frantic pace?  Frantic plus Agile equals FrAgile.

Chelsea: Where does testing fit into DevOps and Continuous Delivery? That seems to be a question that many people aren’t sure how to answer. How do you reconcile testing and DevOps?

Michael: We don’t need to reconcile testing and DevOps.  We might need to address the idea that checking a product with algorithms—getting the machine to press its own buttons—is all there is to testing. Checking the functions in your product as you build it is a fine idea, just as spell checking in real time while you’re writing is cool too.  But spelling checkers and grammar checkers don’t tell you whether your writing makes sense, or whether people will like it.
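To make the distinction concrete, a check might look something like this minimal sketch (the product function, the prices, and the expected value here are hypothetical, purely for illustration):

```python
# A minimal sketch of a "check": the machine presses its own buttons and
# compares the output to an expectation somebody programmed in advance.
# The function, prices, and expected value are hypothetical.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def check_discount() -> None:
    # The check can confirm this one anticipated output...
    assert apply_discount(100.00, 15) == 85.00
    # ...but it cannot tell us whether the discount rules make sense to the
    # business, whether the rounding surprises customers, or whether the
    # feature is usable at all. Those questions call for human testing.

if __name__ == "__main__":
    check_discount()
    print("Check passed: the anticipated output appeared.")
```

A machine can run thousands of checks like that, quickly and consistently, but each one only confirms an output that somebody already decided to look for.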

We do need to stop thinking about testing in a narrow way. There is much more to testing than the confined, restricted notions some people have of it.  What is testing at its core? Testing is “an empirical, technical investigation of a product, done on behalf of stakeholders, with the intention of revealing quality-related information of the kind that they seek.” Cem Kaner said that years ago.  Far too few people paid attention to that.  Jerry Weinberg has said that testing is “gathering information in order to inform a decision.” People didn’t pay enough attention to that either.

James Bach and I declared in 2013 that testing is evaluating a product by learning about it through exploration and experimentation. That includes a bunch of other stuff: studying the product, questioning it, manipulating it, making inferences and conjectures about it, collaborating with other people, and so on.  You can do this on more than the software itself: you can explore requirements, perform thought experiments, question diagrams that model the product, and challenge your ideas about it.

Another way to think about testing is as “applied critical thinking about a product”. We have the option of thinking critically as we build.  Shallow testing is pretty easy, but it’s hard for people in the builder’s mindset to shift from envisioning success to anticipating possible problems. Deep testing for rare or hidden or subtle problems is harder, and requires dedication and focus and skill. That’s why we think it’s a good idea to have someone take on a testing role from the outset and every step of the way.

When we’re building something complex, it makes sense to carefully test the ideas, test the parts, test how they fit together. Testing is how we learn things about the product as we’re building it, before declaring that we’ve built it.  So, when someone says to me, “Well, when do you start testing?” or, “How does testing fit in?”, my reply is, “Right away, and everywhere.”

Chelsea: While that may be the case, there are still a lot of people who seem to have no idea that testing even has a place in DevOps.

Michael: That’s like saying that editing has no place in newspaper publishing, that tasting has no place in cooking, or that critical thinking has no place in life.  I can understand a desire to remove ponderous testing, or long “test phases” after long “development phases”.  But those ways of thinking about testing were out of date in the 1990s, and they’re even more out of date today. Those ideas persist at least partly because there are lots of testers who don’t seem to be interested in studying their craft or broadening their ideas about it, and people who don’t identify themselves as testers are even less interested in that.

How many people have read a book on testing? How many people have read a good book on testing? Hardly anyone can say that they have, not least because there are so few good books on testing specifically. But there’s lots of good stuff on critical thinking and systems thinking and scientific thinking that we can apply to new technology.

There is this sense of complacency and stodginess in the testing business. We often fail to connect testing with other disciplines where testing-like activities are going on all the time: in science, publishing, editing, product development outside of the software industry—even in restaurants. People in the restaurant business are constantly refining their products and processes.

It’s common to hear DevOps enthusiasts saying, “We’ll test the product in production.” To look at it charitably, they’re saying, “Let’s pay attention to what happens in production”, and that’s actually a wonderful thing to do.  But some people seem to be suggesting that if we observe the product after we ship it, testing before we ship is unnecessary.  I don’t think that serves the customer well.  It’s like saying we won’t even taste a new recipe before serving it.

It might help if we looked at the way the aviation industry treats safety.  When it comes to problems with airplanes and with piloting them, the industry learns from every one. Part of the whole culture is to learn from every mistake, because if the airlines and the manufacturers don’t learn from their problems, people die.

I recently read a book called Black Box Thinking that contrasts the process models of the aviation business with those of the medical business. Since November 2001, there have been no lives lost to crashes of any major airline leaving from or arriving in the United States or Canada, with one exception: an Asiana flight that crashed in San Francisco.  Three young women lost their lives because they were not wearing their seat belts. Everyone else on board survived.  Were it not for that, the major airlines would have had a perfect record for 16 years. There have been some close calls, and there’s been some good luck.  Nonetheless, it is remarkable, magnificent, that human beings are capable of accomplishing a safety record like this.

Chelsea: Can that system be copied and implemented in other industries as well?

Michael: I hope so! When it comes to safety, at least, the aviation business learns from every problem. In the software development business, it sometimes feels like we don’t learn from ANY problem.

Chelsea: Where do you think the block comes from?

Michael: I think part of it comes from the youthfulness of our craft.  And for a lot of software, there isn’t much at stake.  The website was down for a few minutes? The screen is showing yesterday’s advertisement instead of today’s?  The algorithm didn’t set you up with your dream date? The animation of the character in the video game didn’t move perfectly realistically? What’s the real cost? Ultimately, it’s not that big of a deal. Nobody died.

In the healthcare business, however, the stakes are quite high. And it seems that the medical device producers are not learning the way the aviation industry is. But it’s also easy to see one reason why: the FDA guidelines do not cite any testing literature from the 21st century. Every single citation is from 1999 or earlier, which, of course, predates mobile devices and tablets—and even the Agile Manifesto.

Chelsea: Why are so many of these big organizations still using outdated Waterfall testing methodology?

Michael: Let’s start by being charitable. Big organizations perceive safety in doing things the way that they’ve traditionally been done, and the idea of getting things right the first time is powerful. The problem comes, I think, when people try to imagine and specify and build too much at one go.  The idea that we know what we want right now is seductive, so it’s tempting to make a big list and build it all.  But if things aren’t working very well, it takes months or years to become aware of it.

From a more cynical perspective, the large consultancies that provide services to big business and big government love ponderous, expensive models of development; you can charge a lot of money for a big staff of contractors for a long time to build all the things on the big list. Over time, there will be changes, because at the beginning of the project, it’s hard to be clear about what we really want, what we need to do, and what problems we’ll encounter. Then the consultancy can charge a bundle for renegotiating the contract.

In that kind of setup, there are lots of incentives to prevent faster feedback.  One of those incentives is avoidance of trouble and fear, based on the idea that you don’t have to deal with problems until you’re aware of them.

If you’re on a big project, here’s a way to avoid some trouble:  recognize that software is not going to solve EVERY problem, and certainly not right away.  We can develop a product in small steps, and add to it and refine it little by little as we go.  People, working with existing systems, can deal with what the software doesn’t do, or doesn’t yet.  Keeping people at the centre of things makes a system flexible, responsive, and humane.

Chelsea: Is Agile a way of addressing that issue?

Michael: That seems to have been the intention behind the Manifesto for Agile Software Development, and the principles.  The claim was that they were “uncovering better ways of developing software”, but a lot of that was recovering, I think.

Carefully considered requirements and specifications and baby steps and constant feedback got us to the moon, after all.  Jerry Weinberg will happily tell you that the Mercury program in the late 1950s was organized in a pretty agile way. Computer time in those days was expensive, so a lot of testing took the form of review and inspections before the product ever encountered the machinery.  Jerry’s book Perfect Software and Other Illusions about Testing shows that review doesn’t have to be long and tedious. If you design and build and test in small chunks and short cycles, you save time in the long run.

But the outside world—especially 60 years later—never saw the baby steps. Those got hidden inside the Big Idea of “the Apollo mission”.  From that perspective, it’s easy to miss the tiny projects inside the big project—and the critical thinking and testing that were a part of it.  Everything new and Agile and DevOpsy is new again.
