Let’s face it: testers have an image problem. While the development team is seen as a team that generates revenue, the QA team is seen as an overhead.
It’s a sorry state of affairs that I discussed recently with Paul Gerrard, the internationally renowned, award-winning software engineering consultant, author, and coach.
We asked: how do we change this situation? How do we take testing from something that’s viewed as a cost to something that’s recognized as an opportunity? Why, in fact, do we even need testing?
Remember: Testing is a Fundamental Human Activity
To understand why we need testing, we need to recognize that testing is a fundamental human activity. We taste food before we commit to eating it. We test-drive a car before we buy it. We interview people before we hire them.
Testing is a fundamental element of software development, too. The first thing a developer will do when they write a line of code is to test that it works.
Testing, therefore, isn’t a ‘nice to have’ (or even a necessary evil) in the software development process; it’s an inevitable part of it. The only question is how much time and effort we are willing to dedicate to the task.
The Problem with Requirements
Now, whatever we test, we test against a model in one form or another. You bring expectations about the way food will taste. You bring pre-conceptions about a car. You know what role you’re hiring for before you conduct the interview.
In software development, it’s the same thing. We can’t build the software and then make it do what we need it to do after we’ve built it. We’ve got to build the model so we can build the system.
We call this model the requirements specification.
At its simplest, it’s the developer testing the code against the mental model they’ve created about what’s required. At the other end of the spectrum, it’s a document that runs to thousands of pages.
This requirements specification is where the problems start.
The role of a requirements specification is to iron out ambiguities. But this is a complex exercise. Developers will bring one worldview with them. Business analysts will bring another. End users will bring yet another.
It’s one of the reasons capturing a complete, accurate requirements specification is too time-consuming and expensive for all but the most safety-critical or high-integrity environments, where failure isn’t an option.
And even then, there will be gaps.
End users may not share complete use cases, perhaps because they’re suspicious of the software development process, they don’t understand the purpose of the exercise, or they simply forget the edge cases. Developers will understand one thing from a definition; business analysts will understand something different.
Add in siloed thinking or conversations that got forgotten, and you have a recipe for ambiguity. It’s why establishing a perfect requirements specification is virtually impossible.
The developers will do the best they can with the requirements specification, but there will be issues. Some will be the result of the imperfect specification, and others will come from the workarounds introduced to wrangle the specification into something feasible.
The result? Software with lots of flaws.
When Testing is Under-Valued
This is the point when the test team takes over. In the most common scenarios, our ability to add value is constrained. Consequently, the view of testing as an overhead is reinforced.
Often, the development team is fully aware that there are issues but is willing to sweep them under the carpet because of a demanding release cycle. When the software reaches us, we aren’t given enough time to test, or we’re asked to test something so flawed it’s basically untestable. We do our best, but we know there’s room for improvement.
It may even be that the development team and the testing team sit in different business silos – or even different businesses. We can run tests, but we haven’t been present at discussions about what the software needs to do or what good looks like. We can find and fix bugs, but we have limited knowledge of the bugs causing users the biggest problems on the ground.
In both scenarios, the result is that the user experience isn’t as good as it could be. If it’s an internal system, confidence in it will be low, and productivity will be impacted. If it’s a customer-facing system, users will be irritated, and the brand will be damaged.
When Testing is Valued
Investment in testing solves all of these problems. Why? Because as testers we bring critical thinking and an outsider’s perspective, testing what’s actually happening against what’s supposed to happen. We’ll consider all the different ways the software could fail. We’ll think of the scenarios that are possible but never made it into a requirement. We’ll spot the gaps, the ambiguities, and the anomalies.
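To make this concrete, here’s a deliberately simple (and entirely hypothetical) Python sketch of the kind of gap-probing a tester does. The discount rule and the function name are invented for illustration; the point is that the spec covers the happy path while the tester asks about the cases the spec never mentions.

```python
def apply_discount(total: float) -> float:
    """Hypothetical spec: 'apply a 10% discount to orders over $100'."""
    if total > 100:
        return round(total * 0.9, 2)
    return total

# The spec covers the happy path:
assert apply_discount(200.0) == 180.0

# A tester probes the cases the spec is silent on:
assert apply_discount(100.0) == 100.0   # exactly at the boundary: discount or not?
assert apply_discount(0.0) == 0.0       # an empty order
assert apply_discount(-50.0) == -50.0   # a refund? The spec never says - a gap to raise
```

None of the boundary behaviors above are “right” or “wrong” in themselves; they are questions to take back to the business analysts before the behavior is frozen into code.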
Our role has the capacity to deliver the piece of software that everyone thought they were getting when they signed off the requirements specification. The piece of software that boosts productivity or delights customers.
When Testing Takes on Even More Value
It’s a cliché that the earlier we find a bug, the cheaper it is to fix. This is the value of shifting left. Our ability as testers to spot conflicting requirements and ambiguities has enormous value in the very earliest scoping conversations. We can improve the requirements specification by eliminating issues before they become embedded in the code.
When we’re part of these conversations rather than looped in later, we show why it’s important to test and add more value to the business.
Gain More Insight
If you’d like to explore these ideas in more depth, watch the recording of my LinkedIn livestream with Paul: A New Model for Testing 2.0. Aside from thinking about why we test, we also covered a lot of other ground in our conversation – you can read about Using AI in Testing here.