Command Execution Tricks with Subprocess - Designing CI/CD Systems
The most crucial step in any continuous integration process is the one that executes build instructions and tests their output. There are countless ways to implement this step, ranging from a simple shell script to a complex task system.
Keeping with the principles of simplicity and practicality, today we’ll continue the series on Designing CI/CD Systems by implementing the execution script.
Previous chapters in the series already established the build directives to implement. They covered the format and location of the build specification file, as well as the Docker environment in which it runs and its limitations.
Execution using subprocess
Most directives supplied in the YAML spec file are lists of shell commands. So let’s look at how Python’s subprocess module helps us in this situation.
We need to execute a command, wait for it to complete, check the exit code, and print any output that goes to stdout or stderr. We have a choice between call(), check_call(), check_output(), and run(), all of which are wrappers around the lower-level Popen() interface that provides more granular process control.
The run() function is a more recent addition, available since Python 3.5. It provides the execute, block, and check behavior we’re looking for, raising a CalledProcessError exception whenever it finds a failure.
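To make that concrete, here’s a minimal sketch of the pattern (the command itself is an arbitrary example):

import subprocess

try:
    # Run the command, wait for it to finish, and raise if the exit code is non-zero
    subprocess.run(['git', '--version'], check=True)
except subprocess.CalledProcessError as error:
    print(f"Command failed with exit code {error.returncode}")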
Also of note, the shlex module is a complementary library that provides utilities to aid in making subprocess calls. Its split() function is smart enough to properly turn a command-line string into the argument list that subprocess expects, while quote() helps escape shell commands and avoid shell injection vulnerabilities.
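As a quick illustration of both helpers (the strings below are arbitrary examples):

import shlex

# split() understands quoting rules when turning a command string into a list
print(shlex.split('python -m pytest -k "not slow" tests/'))
# ['python', '-m', 'pytest', '-k', 'not slow', 'tests/']

# quote() escapes untrusted input so it can't inject extra shell commands
print(shlex.quote('release notes; rm -rf /'))
# 'release notes; rm -rf /'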
Security considerations
Think about this for a minute and realize that you’re writing an execution system that runs command-line instructions written by a third party. This has significant security implications, and it’s the primary reason why most online build services don’t let you get down to this level of detail.
So what can we do to mitigate the risks? First, consider that the script is running inside a container. The environment restricts the process, so even if it does something malicious - like rm -Rf * - it’s only going to destroy the filesystem inside the container. It can break the build, but it won’t affect the CI/CD service itself (which runs in a separate container anyway).
However, this doesn’t save you from other things it can do with your network or compute. For example, someone could install a crypto mining system and use up your CPU.
Along the same lines, they could go exploring inside your network, finding open ports, and taking advantage of other vulnerabilities.
On top of this, running inside Docker by itself doesn’t provide a tremendous amount of security, just another layer of isolation. It’s possible to break out of the container if you’re not careful.
You can always apply more limits to the Docker container itself to increase security. There are a number of options available when creating the container with the docker Python module we covered in the orchestration chapter.
Both the run() and create() functions take the following parameters (a short example follows the list):
- cap_add and cap_drop to constrain kernel capabilities.
- cpu_count, cpu_quota, cpu_period, and the other cpu_* arguments to limit CPU usage.
- The device_* fields to constrain read / write rates on block devices.
- mem_limit, mem_reservation, and others to restrict memory, while the oom_* parameters establish the container’s behavior when it runs out of memory.
- A few network fields that are useful to isolate each container so that it cannot talk to others.
- pids_limit to tune how many processes the container is allowed to run.
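As a rough sketch of what that could look like when creating a build container with the docker module (the image name and limit values here are placeholders, not recommendations):

import docker

client = docker.from_env()

# Illustrative limits only; tune them for your own build workloads
container = client.containers.run(
    'python:3.8-slim',       # placeholder build image
    command='sleep infinity',
    detach=True,
    cap_drop=['ALL'],        # drop every kernel capability
    cpu_period=100000,
    cpu_quota=50000,         # roughly half a CPU
    mem_limit='512m',        # hard memory ceiling
    pids_limit=128,          # cap the number of processes
    network_mode='none',     # no network access at all
)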
Another thing to consider is the riskiness of allowing shell execution from the subprocess functions. It’s tempting to enable it because it allows command piping, environment variable expansion, filename wildcards, and other shell features. It makes it easier for users to write build steps just as if they were running them from a command prompt.
In doing so, it opens the system up to shell injection exploits, a concept similar to SQL injection, where someone executes malicious code by formatting their input in a specific way.
However, our use case is somewhat unique. By definition, we allow a third party to specify a shell command to execute. So there’s only so much that we can mitigate, and container restriction is the primary defense against it.
The subprocess functions provide some extra help in the form of the timeout parameter. It tells the system to kill any process that runs longer than the value you specify, helping reduce the chance of a long-running, compute-intensive task tying up the build system.
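For example, something along these lines caps how long a single build step can run (the command and limit are arbitrary):

import shlex
import subprocess

try:
    # Kill the step if it runs longer than 10 minutes
    subprocess.run(shlex.split('python -m pytest'), check=True, timeout=600)
except subprocess.TimeoutExpired:
    print('Build step exceeded the time limit')
except subprocess.CalledProcessError as error:
    print(f"Build step failed with exit code {error.returncode}")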
I highly recommend performing a detailed security analysis of your implementation before doing anything else. Even if you’re using an off-the-shelf build system, it’s imperative to understand the risks.
You may find that security is less critical because you’re running in an internal network that’s segmented from both the public internet and the rest of the corporate network. On the other hand, you may decide that it’s best to not only limit the shell execution itself, but also the commands you can execute.
One last point to consider is the use of secrets. Remember that anything in the container environment or file system is up for grabs. Avoid using passwords, security keys, tokens, or other secrets in your builds.
At the very least, you should encrypt them and manage them through a secrets store like Hashicorp’s Vault. Never put any of that stuff in a repository because removing it is complicated. You’ll need to drop it from the commit history as well, not just your current head.
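If a build step absolutely needs a credential, one option is to inject it through the container environment at creation time and read it at runtime, as in this sketch (the variable name is made up for illustration):

import os

# The token is injected when the container is created, never stored in the repository
deploy_token = os.environ.get('EXAMPLE_DEPLOY_TOKEN')
if deploy_token is None:
    raise RuntimeError('Deploy token is not configured for this build')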
Implementing build directives
Now that we know the primary considerations, let’s get into the implementation of each build directive.
Use environment variables defined during container creation to determine where in the build pipeline you are (dev, staging, production). They also provide the pull request and commit you’re building, as well as other relevant parameters.
You’ll find the code for each section of the build spec below.
pypi
Before running any installation steps that may pull Python packages into the container, we should first check the build spec for any special PyPI settings to configure. The format allows for the following attributes:
pypi:
  index-servers:
    some-server:
      repository: https://some.index.server.com/pypi
      username: some_user
      password: some_encrypted_password
  extra-index-url: https://some.extra.index.url.com/pypi
  trusted-host: some.extra.index.url.com
  find-links:
    - https://some.find.link1.com/link1
    - https://some.find.link2.com/link2
The index-servers section goes into a .pypirc file in the home directory, which setuptools (or twine) reads when pushing packages to a repository. Details on how to encrypt the password are out of scope for this post.
The extra-index-url, trusted-host, and find-links sections are also used by pip when trying to find packages to install. Set them in the .pip/pip.conf file inside a user’s home directory. For details on what they do, please visit the pip documentation.
We need to create those files inside the container when our execution script runs. Here’s a way to do that:
def generate_pypirc_config(name, config):
    """Create a .pypirc server section"""
    out = [f"[{name}]\nrepository: {config['repository']}"]
    if 'username' in config:
        out.append(f"username: {config['username']}")
    if 'password' in config:
        out.append(f"password: {decrypt(config['password'])}")
    out.append('\n')
    return out
if __name__ == '__main__':
    if config.get('pypi') is not None:
        # PyPI settings are present in this build config
        if config['pypi'].get('index-servers') is not None:
            # Index servers are present, generate the .pypirc file with
            # human-readable indentation
            servers = '\n    '.join(config['pypi']['index-servers'].keys())
            # Start with the list of servers
            # [distutils]
            # index-servers =
            #     server1
            #     server2
            pypirc = [f"[distutils]\nindex-servers =\n    {servers}\n"]
            # For every server, add a new section with the config settings
            # [server1]
            # username = some_user1
            # password = some_password1
            pypirc.extend(['\n'.join(generate_pypirc_config(name, server_conf))
                           for name, server_conf in config['pypi']['index-servers'].items()])
            # Write the .pypirc file
            pypirc = '\n'.join(pypirc)
            logging.info(f"Writing to /root/.pypirc:\n{pypirc}")
            with open('/root/.pypirc', 'w') as f:
                f.write(pypirc)

        pipconf = ['[global]']
        # Check if we need to include settings in the .pip/pip.conf file and add them to the list
        if config['pypi'].get('extra-index-url') is not None:
            pipconf.append(f"extra-index-url = {config['pypi']['extra-index-url']}")
        if config['pypi'].get('trusted-host') is not None:
            pipconf.append(f"trusted-host = {config['pypi']['trusted-host']}")
        if config['pypi'].get('find-links') is not None:
            links = '\n    '.join(config['pypi']['find-links'])
            pipconf.append(f"find-links =\n    {links}")

        if len(pipconf) > 1:
            # Write the pip.conf file
            pipconf = '\n'.join(pipconf)
            logging.info(f"Writing to /root/.pip/pip.conf:\n{pipconf}")
            os.makedirs('/root/.pip', exist_ok=True)
            with open('/root/.pip/pip.conf', 'w') as f:
                f.write(pipconf)
Notice the use of lists when putting together long multi-line strings to concatenate and write to a file. You’ll find that using join() to put together the final string is more efficient than straight-up concatenation. You can test it in the interpreter.
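For example, here’s a quick comparison you can run yourself (timings will vary by machine):

import timeit

parts = [f"line {i}" for i in range(1000)]

def with_join():
    return '\n'.join(parts)

def with_concat():
    out = ''
    for part in parts:
        out += part + '\n'
    return out

print(timeit.timeit(with_join, number=10000))    # single pass over the list
print(timeit.timeit(with_concat, number=10000))  # repeated string reallocation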
install
The next part of our build workflow is to prep the container environment for build and testing. These are mostly setup and configuration instructions that install third-party dependencies, like Linux packages or Python modules.
def run_command(command):
    """Execute a blocking shell command and check its return code"""
    subprocess.run(shlex.split(command), cwd=REPO_DIR, check=True)

...

if config.get('install') is not None:
    logging.info("Running install steps...")
    try:
        for command in config['install']:
            logging.info(f"Installing: {command}")
            run_command(command)
        logging.info("Installation completed successfully")
    except subprocess.CalledProcessError:
        logging.error('Installation failed')
        sys.exit(1)
Very straightforward: the idea is to call subprocess.run() on each item in the install section of the YAML file. When a command returns an error code, execution halts and the script exits with its own error code so that the Docker container status also reports the failure.
The subprocess.run() invocation uses the cwd argument to specify the repository directory as the location in which to start execution.
The process inherits environment variables from the parent, so you’ll have access to all the variables configured when creating the container, which include the repository, owner, and commit information.
Setting check=True is what allows us to catch the subprocess.CalledProcessError whenever command execution fails.
I separated run_command() into its own function because you’ll reuse it quite a few times in later sections. It makes for uniform and easy-to-maintain code when there’s a need to change it.
Note that I did not add the shell=True parameter to the run() call, which means shell features aren’t available and execution capabilities are limited. I’m also using the shlex module to split the string supplied by our users. As mentioned earlier, it knows how to turn a shell command string into the argument list that subprocess expects, a requirement when doing anything complex where you can’t control the string.
linting
This section functions the same way as the install part of the YAML spec. It exists only to report status separately into the GitHub Pull Request. Since we’re not touching on status reporting yet, there’s no need to go into more details here.
execute
The behavior of the execute section is almost identical to the previous two, except that it supports a new keyword that allows us to run processes in the background.
With it, you can test applications that run multiple pieces of themselves as background processes. Take a look at how to use it to test a REST API:
...
execute:
  - background: flask run
  - python -m pytest basic_server_test.py
You’re telling the build system to start the server in a background process - while still printing stdout and stderr to the same place - after which you start a test session.
Implementing this is also very simple, but in this case, you use the lower-level subprocess.Popen() class to start the background task. It gives us a reference to the process object so we can check its status later.
Following is an implementation of the functions that help us start and monitor background tasks:
...
# Track the tasks in a list
background_tasks = []

def run_background_tasks(tasks):
    """Execute one or more shell commands in the background and add them to the list of background tasks"""
    if not isinstance(tasks, list):
        tasks = [tasks]
    for task in tasks:
        logging.info(f"Executing in background: {task}")
        background_tasks.append(subprocess.Popen(shlex.split(task), cwd=REPO_DIR))
    # Validate that the tasks started correctly
    check_background_tasks()

def check_background_tasks(kill=False):
    """Iterate through the background tasks being monitored and verify if they failed, killing them if requested"""
    for task in background_tasks:
        # poll() updates and returns the exit code, or None if the task is still running
        if task.poll() is None:
            # Task is still running
            if kill:
                task.kill()
        elif task.returncode != 0:
            # Task failed, communicate the failure and exit
            logging.error(f"Background task {task.args} exited with return code {task.returncode}")
            sys.exit(1)
...
Adding the code that executes the commands defined in this section:
try:
    for command in config['execute']:
        # Run in background if requested
        if isinstance(command, dict) and 'background' in command:
            run_background_tasks(command['background'])
        else:
            logging.info(f"Executing: {command}")
            run_command(command)

    # Execution completed, check background tasks for errors and kill them if they're still running
    check_background_tasks(kill=True)
    logging.info("Execution completed successfully")
    sys.exit(0)
except subprocess.CalledProcessError:
    logging.error('Execution failed')
    sys.exit(1)
staging and production
Both of these sections execute commands just like the install directive. However, they exist to differentiate where in the build pipeline the code is executing.
At these later stages, you are beyond functional or unit testing and are more interested in higher-level deployments and integration tests.
A good example is a repository with a Python package deliverable. You use the execute section to validate the package, staging to produce a release candidate, push it to PyPI, and test its installation, and production to push the final release. It looks like this:
install:
  - DO_SOME_SETUP
execute:
  - python -m pytest run_some_tests.py
staging:
  deploy:
    - python setup.py sdist upload egg_info --tag-build=rc{forge-commit-count}
  execute:
    - pip install your_package --pre
    - your_package --version
    - python -m pytest your_package_integration_tests.py
production:
  deploy:
    - python setup.py sdist upload
  execute:
    - pip install your_package
    - your_package --version
Setuptools has the egg_info --tag-build functionality to add a suffix to your version string. If your package version is 1.0.1, adding --tag-build=rc2 pushes a new package to PyPI versioned as 1.0.1rc2. It’s advantageous in various aspects of code releases, especially when you want to make a version available to customers to validate a fix before the final release.
The pip installer is smart enough to resolve the correct version. 1.0.1 is later than 1.0.1rc2, which in turn is later than 1.0.0. It behaves as follows (the short snippet after the list illustrates the ordering):
- Running pip install before 1.0.1 is available, but while 1.0.1rc2 is out, will not install 1.0.1rc2.
- pip install --pre specifies that you’re willing to try pre-release code, so it does install 1.0.1rc2 in the same scenario.
- Once 1.0.1 releases, pip install downloads the 1.0.1 release with or without the --pre option because it knows it’s the latest one.
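Pip’s ordering follows PEP 440, which you can sanity-check with the packaging library (the versions below match the example above):

from packaging.version import Version

# Pre-releases sort before the final release under PEP 440
assert Version('1.0.0') < Version('1.0.1rc2') < Version('1.0.1')

# Pip skips pre-releases by default; --pre opts in to candidates like this one
print(Version('1.0.1rc2').is_prerelease)  # True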
The {forge-commit-count} placeholder is a utility parameter that sets the release candidate suffix to the total number of commits in the pull request you’re testing. It helps with uniqueness and produces numbers that continuously increase.
Because we already pass the commit count as an environment variable during container creation, implementing it is a simple change to the run_command() function shown earlier:
COMMIT_COUNT_DIRECTIVE = '{forge-commit-count}'

def run_command(command):
    """Execute a blocking shell command"""
    if COMMIT_COUNT_DIRECTIVE in command:
        # Expand the commit count placeholder from the environment variable
        # (str.replace returns a new string, so reassign it)
        command = command.replace(COMMIT_COUNT_DIRECTIVE, os.environ.get('FORGE_COMMIT_COUNT'))
    subprocess.run(shlex.split(command), cwd=REPO_DIR, check=True)
Packaging the execution script
As a reminder, the code written here today serves as the execution script that runs inside a container. However, the user chooses the container image, and it can be any OS, with or without Python installed. This means we have to package the script and copy it into the container after it starts.
Every time there’s a change in the execution script, we need to build it. We’re using PyInstaller to do that through the following command:
pyinstaller --clean --onefile --workpath forgexec.build --distpath forgexec.dist forgexec.py
The command takes the forgexec.py script - which contains the code discussed in this article - bundles it into one file, creates build artifacts in the forgexec.build directory, and writes the final executable to forgexec.dist.
Whenever the webhook REST API server runs, it reads the latest file in forgexec.dist and produces a .tar.gz file for pushing into each build container.
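That packaging step isn’t shown in this chapter, but a minimal sketch of the idea - assuming the docker SDK and Python’s tarfile module, with illustrative paths and names - could look like this:

import io
import tarfile

def make_executable_archive(path='forgexec.dist/forgexec'):
    """Bundle the PyInstaller executable into an in-memory .tar.gz stream"""
    buffer = io.BytesIO()
    with tarfile.open(fileobj=buffer, mode='w:gz') as tar:
        tar.add(path, arcname='forgexec')
    buffer.seek(0)
    return buffer

# Pushing it into a running build container with the docker SDK could then be:
# container.put_archive('/usr/local/bin', make_executable_archive().read())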
Details on PyInstaller are available in this packaging article.
What’s next?
At this point, you have a full CI/CD system triggered by GitHub and running inside a Docker Swarm. It can build, test, and deploy code using any container image, even one without a Python interpreter. We’re mostly done with base functionality.
The next chapter focuses on integrating more tightly with GitHub to view build status, results, and logs from inside the pull request that triggers it.