Michael Bolton is a thought leader in the world of software testing, with over two decades of experience in the computer industry testing, developing, managing, and writing about software. As an international consultant and testing trainer, Michael gives workshops and conference presentations on testing methodology and practice, specializing in Rapid Software Testing and exploratory testing.
We invited the reputable cloud blogger Ofir Nachmani to interview Michael Bolton. See what unfolded below:
Ofir Nachmani: “My interview with Bolton was really unique. We started with a brief discussion about the cloud that quickly moved to a very interesting discussion about software testing. Michael provided enlightening insights into the role of an individual software tester, the outlook a tester should have when using load testing tools, and the relationship between a product owner and a tester. He also covered basic fundamentals, such as how to test ‘the unexpected’. Even if you’re not a developer or tester, the information below will most likely shed light on an industry that directly affects us all.”
ON: As someone who trains and consults with testers all over the world, what are the issues and challenges that people in the industry face today, particularly when it comes to performance testing?
MB: I observe that organizations tend to put an emphasis on testing as a verification of things that they hope to be true. That’s okay as far as it goes, but it doesn’t go very far, alas. For example, many people start to develop their performance testing strategy by setting performance criteria that they believe the system must meet. However, you don’t need performance criteria to do performance testing. In fact, I’d worry about testers doing that, because of the risk of shifting focus to confirming a certain set of ideas, rather than to investigating a product and the risks surrounding it. It’s crucial not only to know the important questions that need to be asked about performance, but also to keep developing our ideas about them. Many of those questions are not obvious or apparent at the beginning of a project.
In my view, demonstrating conformance with prescribed criteria is the least interesting and least important part of performance testing. The more important goals are to describe a system as it actually behaves, and to anticipate, investigate, and discover performance-related problems. Focusing on an expectation or a desire (let’s say “ten thousand transactions a minute”) sets a pretty narrow scope for investigation. It leaves out the kinds of problems that we could encounter with individual transactions within that ten thousand; it leaves out looking for factors that might contribute to slowdowns; and it steers us away from considering factors that might influence a decision on when to optimize the code or to throw more hardware at the problem. It encourages us to count, instead of studying and describing the system. A conformance focus tends towards confirming answers to existing questions, instead of raising new questions.
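To make that point concrete, here is a minimal Python sketch with invented response times: a system that comfortably meets a “ten thousand transactions a minute” target, and looks healthy by the usual summary numbers, while a handful of individual transactions within that ten thousand are badly degraded.

```python
# Invented illustration: an aggregate throughput target can hide
# serious per-transaction problems. All numbers here are made up.
import statistics

# Simulated response times (seconds) for one minute of load:
# most transactions are fast, but ten are pathologically slow.
response_times = [0.005] * 9990 + [4.0] * 10  # 10,000 transactions total

tx_per_minute = len(response_times)
mean = statistics.mean(response_times)
p99 = statistics.quantiles(response_times, n=100)[98]  # 99th percentile
worst = max(response_times)

print(f"throughput: {tx_per_minute} tx/min")   # target met
print(f"mean:  {mean * 1000:.1f} ms")          # ~9 ms: looks healthy
print(f"p99:   {p99 * 1000:.1f} ms")           # 5 ms: still looks healthy
print(f"worst: {worst * 1000:.1f} ms")         # 4000 ms: ten users suffered
```

The conformance question (“did we hit ten thousand a minute?”) gets a yes, and even the 99th percentile looks fine; only studying the full distribution reveals the ten transactions that took four seconds.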
ON: What factors should a tester keep in mind when load testing a website or application?
MB: To me, the mindset of a tester should be oriented towards identifying problems that threaten the value of a product. If you’re using performance testing tools, use them to learn, and to help look for problems that weren’t anticipated in the first place. Complex systems have the capacity to surprise us. Great tools help to visualize what’s happening, to highlight patterns and inconsistencies, and to identify stress points and choke points. Compared to that, prior expectations and predictions aren’t that big a deal.
What a system actually does is far more important than what your expectation is. You might have a working hypothesis about the system as you design experiments, but the hypothesis probably isn’t that interesting compared to what you actually discover in the course of performing and analyzing that experiment.
To me, excellent performance testing isn’t about showing that the system can achieve some specified transaction rate―that’s demonstration that the product can work. Fabulous performance testing is about discovery—finding where the slow and problematic bits are, where the bottlenecks are, and what can interfere with successful transactions when we put a system under load or stress, or when we run it for a long time with lots of variation. I’d like my tools and my models to help me to develop and illustrate a comprehensive understanding of a product, and identify what threatens value and success. Part of that involves recognizing that there are different dimensions of success.
ON: What does a tester need to achieve? Where do you see testers in the software development chain?
MB: It’s my job to investigate a product so that my clients can decide if the product they’ve got is the product they want. Testers are investigators, and their objective should be to discover more than to verify; to be reporters, and not judges; to describe, not to make the business decisions. Testers do exercise judgment about what might represent a problem to users, to developers, or to the business, and then inform those who are responsible for making the decisions: our testing clients. Clients need information about problems and risks in order to make informed decisions about what they do next with their product and whether or not they deploy.
For example, let’s say we, the testers, observe that our service’s database is getting hammered with dozens of extra handshakes for each transaction. The designer and product owner may or may not find that to be a problem. However, it’s possible that the pipe is not going to be big enough to handle all of the expected transactions, and that it should be scaled up as a result. The product owner would then want to know what happens after scaling up, which we investigate as well.
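As a sketch of how a tester might surface that kind of observation, here is a small Python example that counts database handshakes per transaction from log lines. The log format, the event names, and the one-handshake-per-transaction ideal are all assumptions for illustration; a real system would have its own formats to parse.

```python
# Hypothetical check: count database handshakes per application transaction.
# The log format and field names below are invented for this sketch.
from collections import Counter

def handshakes_per_transaction(log_lines):
    """Return a Counter mapping transaction id -> number of DB handshakes."""
    handshakes = Counter()
    for line in log_lines:
        # Assumed line format: "<timestamp> <event> tx=<id>"
        fields = line.split()
        event = fields[1]
        tx_id = fields[2].removeprefix("tx=")
        if event == "DB_HANDSHAKE":
            handshakes[tx_id] += 1
    return handshakes

sample_log = [
    "12:00:01 DB_HANDSHAKE tx=42",
    "12:00:01 DB_HANDSHAKE tx=42",
    "12:00:02 DB_QUERY     tx=42",
]

for tx, count in handshakes_per_transaction(sample_log).items():
    if count > 1:  # one handshake per transaction is the assumed ideal here
        print(f"transaction {tx}: {count} handshakes -- worth investigating")
```

Note that the tool only reports the pattern; whether extra handshakes are a problem worth fixing remains the product owner’s call, exactly as Michael describes.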
Testers should not make the decision of whether or not a product is good to go. They are not the decision makers. They can only provide a piece of the puzzle that the business has to assemble to make release decisions.
ON: So a tester returns with numbers… What about the actual workflow or use case? How does reporting these numbers support a product’s actual value?
MB: Testers tell a story about the product, and numbers are illustrations of that story. They’re like pictures that come with a newspaper article, like the stats in a sports story. Maybe you’ve been to a football game or some other event, and then seen stories and statistics and pictures in newspapers and on TV afterwards. A good story describes the event from a number of perspectives, and useful stats and good pictures add depth and support to that story.
We have to be careful, though, to think critically about the stories that we’re telling and the stories that we’re being told. There’s a nice example in Nassim Taleb’s book, The Black Swan, a book that testers should read. On the day that Saddam Hussein was captured, a news headline reported that the price of U.S. Treasury bills had risen over worries that terrorism would not be curbed; half an hour later, when Treasury bills had fallen again (they fluctuate all the time) the explanation was that Hussein’s capture made risky assets more attractive. The same cause was being used to explain two opposite events―the rise and the fall. So it’s important to consider how we arrive at our conclusions, and how we might be fooled.
In the world of performance testing, we look at certain numbers and certain patterns and use them to illustrate a story. I would argue that it’s the job of a tester to remain skeptical of the numbers and of our explanations of them, especially when the news seems to be good. A single set of performance data based on a single model can fool us, and fail to alert us to potential problems and risks that might be there. We need tools that provide data we can analyze and visualize from a variety of different perspectives.
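Here is a minimal Python sketch of that idea, using invented latency samples: the same data summarized two ways. The single overall number looks comfortable; bucketing the same samples by minute reveals a steady degradation that the first perspective never shows.

```python
# Invented samples: the same latency data viewed from two perspectives.
import statistics

# (minute_of_test, response_time_in_seconds); latency grows as the run ages.
samples = [(m, 0.010 + 0.015 * m) for m in range(10) for _ in range(100)]

# Perspective 1: one overall number -- looks acceptable.
overall = statistics.mean(t for _, t in samples)
print(f"overall mean: {overall * 1000:.0f} ms")  # ~78 ms

# Perspective 2: the same data bucketed by minute -- a steady climb,
# hinting at a leak or a growing queue that perspective 1 hides.
for minute in range(10):
    bucket = [t for m, t in samples if m == minute]
    print(f"minute {minute}: {statistics.mean(bucket) * 1000:.0f} ms")
```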
ON: Let’s discuss tools. How can I know that I am using the right testing tool?
MB: Instead of thinking of “the right tool”, try thinking about a diversified set of tools. Suppose you want to be alerted about problems in your home while you’re away: a smoke detector won’t really help you out when a burglar is the issue; for that, you need a motion detector. However, neither of those is likely to alert you when there is a flood. And they won’t help you if there’s structural weakness in the building and it’s in danger of collapsing.
A good tool is one that helps you extend your ability to do something powerfully with a minimum amount of fuss. I tend to prefer lightweight, easily adaptable tools in combination, rather than one tool to rule them all. There are plenty of dimensions to performance testing: not just driving the product, but also generating, varying, and randomizing data; monitoring and probing the internals of the system; visualizing patterns and trends; and aggregating, parsing, and reporting results.
I like to use tools not only to alert me about the problems that I anticipated, but also to help me anticipate problems I hadn’t. Ultimately, I’m most interested in the surprises and the unexpected.
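In the spirit of those lightweight tools in combination, here is a small Python sketch that pairs three of the dimensions Michael lists: one piece varies the data, another drives the product, and a third aggregates and reports the results. The `send_request` function is a hypothetical stand-in, not a real client; everything about it is an assumption for this sketch.

```python
# A sketch of lightweight tools in combination: data variation,
# a driver, and result aggregation as small, separate pieces.
import random
import statistics
import time

def make_payload(rng):
    """Data generation/variation: randomized sizes exercise different paths."""
    return "x" * rng.randint(1, 10_000)

def send_request(payload):
    """Hypothetical driver: replace with a real call to the system under test."""
    time.sleep(len(payload) / 1_000_000)  # simulated work, for the sketch only
    return len(payload)

def run(n=100, seed=1):
    rng = random.Random(seed)  # seeded, so any surprises are reproducible
    timings = []
    for _ in range(n):
        payload = make_payload(rng)
        start = time.perf_counter()
        send_request(payload)
        timings.append(time.perf_counter() - start)
    # Aggregation/reporting: several views of the results, not one number.
    print(f"n={n}  mean={statistics.mean(timings) * 1000:.2f} ms  "
          f"max={max(timings) * 1000:.2f} ms")

run()
```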
Check out Michael’s software testing blog
Note: this post was originally published on the BlazeMeter blog