Engineering the Tests that Matter
As I sit in my standard-issue cubicle, going through a typical test results report from a typical external test shop, I begin to shake my head in disgust.
I’m looking at a spreadsheet with endless rows and columns that associate a thumbs-up or thumbs-down with each one-liner that’s supposed to describe the test that was executed.
What does all of this mean? Does a thumbs-up mean the test completed? Aborted? What does it mean if a high-priority issue was logged against it? Wait! If I read through this issue, the comment trail indicates that the test itself — not the item being tested — was modified to get a passing result! What is all this? Does any of it actually imply any level of quality in the product that I’m testing?
Going beyond the checkbox
My organization’s messaging system now starts popping up notifications — someone keeps poking me for status — and I’m asked to join a conference call. Looking at the numbers, and combining them with local execution results, I can confidently report that we’re 100% attempted with a 90% pass rate, so I say as much. “Great!” comes the reply from the other end of the line, “we’re almost ready to exit the cycle.”
Does that statement mean anything, though? There are no qualifiers to go with it, and no one is interested in hearing what failed; they only want the numbers, the percentages. Does this data actually lead to a sound business decision?
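To make the problem concrete, here is a minimal sketch (entirely hypothetical data and naming, not any real report format) of how two test cycles can share the same headline pass rate while telling very different stories once failure severity is broken out:

```python
from collections import Counter

# Two imaginary test cycles, both "100% attempted, 90% pass rate".
cycle_a = ["pass"] * 90 + ["fail:low"] * 10
cycle_b = ["pass"] * 90 + ["fail:critical"] * 8 + ["fail:low"] * 2

def summarize(results):
    """Return the headline pass rate plus a breakdown of failure severity."""
    passed = results.count("pass")
    failures = Counter(r.split(":", 1)[1] for r in results if r.startswith("fail"))
    return {"pass_rate": round(100 * passed / len(results), 1),
            "failures": dict(failures)}

print(summarize(cycle_a))  # {'pass_rate': 90.0, 'failures': {'low': 10}}
print(summarize(cycle_b))  # {'pass_rate': 90.0, 'failures': {'critical': 8, 'low': 2}}
```

Same 90%, completely different risk profile; the spreadsheet only ever shows the first number.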
This was the life — and still is, in some cases — of many test “engineering” roles that I have been in or interacted with over the 14+ years of my professional career. I do understand that few people have gone through these same experiences, though. In fact, it’s likely that most of you have never had to deal with such a complicated set of actions, supported by an ever-increasing line of enterprise products, most of which come with 7–10 years of service warranties.
However, step back and replace that endless spreadsheet with a report listing thousands of unit tests executed on 5 builds over the past couple of days, then add new releases of several external library dependencies — say jQuery if you’re on the web, a new version of a key Node.js library, or even a new rev of Python — over the same period, and you can start to see parallels in other parts of the engineering world. I felt like talking a little bit about that.
What happened to the “ENGINEERING” in Test Engineering?
These days it seems that most folks get caught up in the latest project management buzzwords and consider the selection of tracking methodologies to be the engineering effort, but that’s nowhere near the end of it.
“Engineering is the application of mathematics, empirical evidence and scientific, economic, social and practical knowledge in order to invent, innovate, design, build, maintain, research, and improve structures, machines, tools, systems, components, materials, processes and organizations.” — Wikipedia.
Engineers take existing knowledge from several fields and apply it in new (usually innovative) ways, to solve a problem — sometimes one that’s never been solved before — that meets certain constraints. It’s very common that these very constraints are what drive the need for innovation and creativity.
Agile vs Extreme Programming vs Waterfall, a few long cycles with general quality objectives vs many short cycles with plenty of iteration, formal specifications and product definition vs loose guidelines and user stories. Each of these contrasts seems to have champions and followers almost as if they were religions of their own, but as an engineer I view *all* of that as nothing more than constraints. Constraints within which I have to design a set of systems, structures, tools and processes that will produce the desired outcome: a quality product.
Because that’s the ultimate goal, right? A quality product? Almost. Even here there’s some struggle with the definition. While I’m pretty sure that a primary characteristic of a quality product — as unrealistic as it is — is zero bugs, I can also confirm with the same certainty that a bug to one person could be a feature to another user. This means we should revise the wording so that we can state the primary goal of a test engineer:
To verify that your organization creates a product that delivers the vision of the designers with the quality and user experience expected by the customers.
Also notice that our earlier definition included the words “empirical evidence”. This is such an important point that it cannot be overstated. A test engineer must use data from the past to inform his current and future decisions, and making that data available is a major consideration in setting up a good test organization.
We’re talking about everything from previous tests, their results, the number of times they were executed, the conditions in which they were executed, and any and all environmental factors that may influence the results; to warranty costs, customer service tickets, customer usage metrics and environments, sales metrics and marketing expectations. All of this is relevant and extremely useful data that helps in deciding what to focus on, where to go deeper, when to do it, and what impact any findings may have.
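As a rough illustration (the field names and values below are hypothetical, not any particular tool’s schema), this is the kind of context worth capturing alongside every result so those decisions can lean on data instead of memory:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TestResult:
    """One executed test, recorded with enough context to be useful later."""
    test_id: str                # which test ran
    outcome: str                # "pass", "fail", "abort", ...
    build: str                  # exact build or commit under test
    executed_at: datetime       # when it ran
    duration_s: float           # how long it took
    environment: dict = field(default_factory=dict)    # OS, dependency versions, hardware
    linked_issues: list = field(default_factory=list)  # defects logged against this run

# A failing run recorded with the context needed to correlate it later
# against dependency upgrades, customer environments, or support tickets.
result = TestResult(
    test_id="storage.failover.controller_reset",
    outcome="fail",
    build="7.2.0-1234",
    executed_at=datetime.now(),
    duration_s=412.7,
    environment={"os": "RHEL 7.2", "python": "2.7.12", "firmware": "3.1.9"},
    linked_issues=["DEF-4821"],
)
```

With records like these accumulating, questions such as “has this test ever passed on that firmware level?” or “did the failure rate change after the last Python bump?” become queries instead of guesses.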
Next time you have a pretty big project that needs some testing, get everyone together ahead of time and decide on the methodology, the scheduling and the measurement tools. Get that out of the way first, make sure you’re comfortable with it, and then sit back and let your test engineers do what they do best: figure out how to make all of that work along with their primary objective.
While I was at PyCon a few months ago, I had the opportunity to meet with folks from the Python podcast community, and in discussing this very same topic, I was invited to record an episode of @Podcastinit, which has now gone live and is available here: Episode 68 — Test Engineering with Cris Medina
Over time, I hope to explore some of the hectic, crazy, and — at times — awe-inspiring effort and creativity that is a test engineer’s shop. We’ll focus mostly on tooling, setup, planning, and any other items required to keep things going. So stay tuned.