
Uniquely Managing Test Execution Resources using WebSockets

Executing tests for even simple applications is complicated. You have to think about the users, how they interact with the application, how those interactions propagate through different components, and how to handle error situations gracefully. But things get even more complicated when you start looking at larger systems, like those with multiple external dependencies.

Dependencies come in various forms, including third-party modules, cloud services, compute resources, networks, and others.

This level of complexity is standard in almost any project within a large organization, whether it delivers internal tools or external products.

It means you must put emphasis on developing test systems and mechanisms good enough to validate not just your code, but those third-party dependencies as well. After all, they’re part of the final product, and failing to interact with them means the product fails.
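The full article goes into the design, but to make the idea concrete, here’s a minimal sketch of one way a WebSocket-based resource coordinator could work. This is an illustration, not the article’s actual implementation: the coordinator address, message format, and resource names are all assumptions. Each test run opens a WebSocket to a central coordinator, requests an exclusive lease on a shared resource, and holds the connection for the duration of the test so the coordinator can reclaim the lease if the client disappears.

```python
# Minimal sketch of leasing a shared test resource over a WebSocket.
# Hypothetical protocol: the coordinator replies {"granted": true} once
# the resource is free, and releases the lease when asked (or when the
# socket closes).

import json
from contextlib import asynccontextmanager

import websockets  # pip install websockets


@asynccontextmanager
async def lease(resource, coordinator="ws://localhost:8765"):
    """Hold an exclusive lease on `resource` for the life of the block."""
    async with websockets.connect(coordinator) as ws:
        await ws.send(json.dumps({"action": "acquire", "resource": resource}))
        reply = json.loads(await ws.recv())
        if not reply.get("granted"):
            raise RuntimeError(f"could not lease {resource}: {reply}")
        try:
            yield resource
        finally:
            await ws.send(json.dumps({"action": "release", "resource": resource}))


async def test_against_shared_device():
    # Only one test run at a time gets the device; others wait in acquire.
    async with lease("lab-router-1"):
        ...  # exercise the system under test here
```

Because the lease lives on an open connection, the coordinator can detect a crashed test run by the dropped socket and hand the resource to the next waiter.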

Continue reading

Integrating Pytest Results with GitHub

When joining a new engineering team, one of the first things I do is familiarize myself with the dev and test processes, especially the tools used to enforce them. In the past 5 years or so, I’ve noticed that a lot of organizations still use older tools that haven’t yet evolved to support modern practices. Even teams that purely develop software can find themselves working around cumbersome systems that hinder instead of enable.

What do I mean by that? Very few of these tools include useful interfaces, like REST APIs, for integrating with other systems. Most have no concept of modern dev practices like continuous integration or containerization. Almost all of them want to record pass / fail on a step-by-step basis, as if you were executing manually. The vast majority are built around a separation between test and dev (some even emphasize it). And a lot of them require the organization to hire “specialists” to “customize” the tool for the team. In my opinion, these types of systems coerce the organization into emphasizing blame over quality and team boundaries over productivity.

I’ve been very successful at building long-lived alternatives to these systems in several organizations. I’ve done it enough to know which features are worth including, and which to leave to the test / dev engineers, especially after the advent of continuous integration and delivery.
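The article covers the details, but as a quick illustration of the kind of integration these older tools lack, here’s a sketch of reporting a pytest run to GitHub. It posts an overall pass / fail commit status using GitHub’s documented “create a commit status” REST endpoint; this isn’t necessarily the article’s exact approach, and the environment variable names, repo slug, and status context are assumptions.

```python
# conftest.py -- post an overall pass/fail status for the commit under test.
# Assumed environment: CI exports GIT_COMMIT (the sha being tested) and
# GITHUB_TOKEN (a token with permission to write commit statuses).

import os

import requests


def pytest_sessionfinish(session, exitstatus):
    """pytest hook that runs once after the whole test session finishes."""
    sha = os.environ.get("GIT_COMMIT")
    token = os.environ.get("GITHUB_TOKEN")
    if not sha or not token:
        return  # running locally, nothing to report

    state = "success" if exitstatus == 0 else "failure"
    response = requests.post(
        # Replace OWNER/REPO with your repository slug.
        f"https://api.github.com/repos/OWNER/REPO/statuses/{sha}",
        headers={"Authorization": f"token {token}"},
        json={
            "state": state,
            "context": "pytest",
            "description": f"pytest exited with status {exitstatus}",
        },
        timeout=10,
    )
    response.raise_for_status()
```

With something like this in place, every pull request shows the test outcome next to the commit, with no step-by-step manual recording.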

Continue reading

Engineering the Tests that Matter

As I sit in my standard-issue cubicle, going through a typical test results report from the typical external test shop, I begin to shake my head in disgust.

I’m looking at a spreadsheet with endless rows and columns that associate a thumbs up / thumbs down with each one-liner that’s supposed to describe the test that executed.

What does all of this mean? Does a thumbs up mean the test completed? Aborted? What does it mean if a high-priority issue was logged against the test? Wait! If I read through this issue, the comment trail indicates that the test itself, not the item being tested, was modified to get a passing result! What is all this? Does any of it actually imply anything about the quality of the product I’m testing?

Continue reading