Containerizing and version controlling dependencies and utilities


I’ve been using Python for years, as many SREs have; it makes quick work of automating repetitive tasks and is essentially the de facto scripting language for many teams out there. One of the biggest pain points is maintaining both Python 2 and Python 3 packages with pip while making sure you don’t corrupt your local machine with conflicting versions of those packages. It can be quite the headache.

One way I’ve found to sidestep installing local dependencies, managing virtualenvs, and the dreaded “it works on my machine” conversation altogether is to run utilities as containers.

I’ve been a part of teams that built internal tools or relied heavily on third-party ones, and everyone usually had their own way of setting them up on their machine. Maintaining those tools as containers and calling them from a shell alias simplifies the setup, and it has benefits beyond that: you are also promoting consistent usage of the tools. Teammates now resolve usage issues collectively, and only once. Gaps in communication and documentation that would previously have been left out or assumed are closed organically.

You can now version control every dependency responsible for successfully leveraging the utility. If a user needs a new feature, you can easily add it to the repository, and if you have pipelines configured for your repositories, the change is automatically built and pushed to your container registry. Others can then pull in the new version of the container without editing their aliases, building anything locally, or updating any packages on their machine.

docker pull aws_account_id.dkr.ecr.us-west-2.amazonaws.com/sre-team:0.9.1
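Behind the scenes, the pipeline’s publish step often boils down to something like the following (the image name, account ID, and tag mirror the pull command above and are purely illustrative):

# Build, tag, and push the image so teammates can pull the new version
docker build -t sre-team:0.9.1 .
docker tag sre-team:0.9.1 aws_account_id.dkr.ecr.us-west-2.amazonaws.com/sre-team:0.9.1
docker push aws_account_id.dkr.ecr.us-west-2.amazonaws.com/sre-team:0.9.1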



Fostering consistency

On one of these teams we had a common required tool: Ansible. There was a hard requirement to maintain multiple versions of Ansible on everyone’s machine, since we managed a lot of devices on-prem and in the cloud, and some of them required specific versions of Python and different dependencies. To simplify things we packaged Ansible inside a container, each image with its own list of dependencies, a requirements.txt file, and a specific version of Ansible.
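As a rough sketch, and assuming one image per Ansible version (the base image, Ansible version, and paths here are illustrative, not the team’s actual file), one of those images might have looked like this:

FROM python:3.8-slim

# Pin Ansible and its dependencies for this specific image
ADD ./requirements.txt /opt/requirements.txt
RUN pip install -r /opt/requirements.txt
RUN pip install ansible==2.9.27

# Playbooks get bind mounted here by the shell alias
WORKDIR /opt/ansible
ENTRYPOINT ["ansible-playbook"]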

Each version was then aliased in your shell, which eliminated the hassle of maintaining pip2 and pip3 dependencies and virtual environments entirely. You can pass in all sorts of prerequisites through the alias as well, like SSH keys, Vault secrets, or AWS credentials; all you need to do is update the alias. Note that the image name at the end of the alias (ansible:2.9 below) should match whatever tag your team actually builds or pulls:

alias ansible-2.9='docker run --rm -it -v ~/.aws:/root/.aws -v ~/.ssh/keys:/root/.ssh/keys -v "$(pwd)":/opt/ansible ansible:2.9'

You could then run your Ansible commands as:

ansible-2.9 web-server.yml -i inventory/hosts -l webservers-east -u webadmin



Internal tool example

Say that your team maintains a tool used to perform certain tasks. You could containerize that app and then use it as if it were installed on your system, using this same method. You could even use it in bash one-liners on the command line as if it were an installed binary. Let’s take a look at what that would look like.

I have a small repository that has a few boilerplates for Python command-line tools.



The Python application itself is all inside of the src directory, and we’ll be focusing on two files there. The python_cli_framework.py file provides the interface for parsing the arguments passed in on the command line.


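The actual file lives in the repo; purely as a hedged sketch (with argument names inferred from the usage examples later in this post, not taken from the real source), the parsing layer could look like this:

import argparse

def parse_args():
    # Hypothetical sketch; the real file defines the repository's actual arguments
    parser = argparse.ArgumentParser(description="Example Python CLI utility")
    parser.add_argument("-u", "--user", required=True, help="GitHub username to query")
    parser.add_argument("action", choices=["get"], help="action to perform")
    return parser.parse_args()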


The python_app_framework.py file provides a class with a method that performs an action based on the parsed arguments.

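Again, the real implementation is in the repo. Assuming the tool fetches a user’s public repositories from the GitHub API (an assumption that fits the jq pipelines and the rate-limit warning later in this post), a minimal sketch might be:

import json
import urllib.request

class PythonAppFramework:
    """Hypothetical sketch of the application class described above."""

    def __init__(self, user):
        self.user = user

    def get(self):
        # Assumption: query the public GitHub API for the user's repositories
        url = "https://api.github.com/users/{}/repos".format(self.user)
        with urllib.request.urlopen(url) as response:
            print(json.dumps(json.loads(response.read())))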


We’ll call these two modules from bin/python-app-exec.py.

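Here is a sketch of that entry point, assuming the two modules above are importable from the python_app package that the Dockerfile installs into site-packages:

#!/usr/bin/env python3
# Hypothetical sketch wiring the CLI parser to the application class
from python_app.python_cli_framework import parse_args
from python_app.python_app_framework import PythonAppFramework

def main():
    args = parse_args()
    app = PythonAppFramework(args.user)
    if args.action == "get":
        app.get()

if __name__ == "__main__":
    main()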



Cloning the repository - python-cli-utility

Clone the repo with the following command:

git clone https://github.com/nullconfig/python-cli-utility.git $HOME/python-cli-utility

With the repo cloned, navigate to the root of the repository, and we can begin building the container. I’m going to assume that you have docker installed for this; if not, you can easily resolve that by following the installation instructions in the Docker documentation for your operating system.



Building the container

From the root of the repository you will run this command:

./build-container.sh

It will execute the code found inside this file:

#!/bin/bash

# Execute this script from the root directory of the repository to build the container
/usr/bin/docker build -t python-cli-utility -f $(pwd)/containerd/python-app.containerd $(pwd)

This is a basic docker build command: it tags the image as python-cli-utility, builds it based on the contents of the file python-app.containerd found in the containerd folder, and sets the build context to the root of the repository. If docker is not installed at /usr/bin/docker the script will fail, so update this file if your docker binary lives somewhere else (which docker will tell you where it is).

FROM python:3.7.9-alpine3.12

# Install python application requirements
ADD ./containerd/requirements.txt /opt/requirements.txt
RUN pip install -r /opt/requirements.txt

# Configure the interface and make it executable
ADD ./bin/python-app-exec.py /usr/bin/python-app-exec.py
RUN chmod +x /usr/bin/python-app-exec.py
RUN ln -s /usr/bin/python-app-exec.py /usr/bin/python-app-exec

# Setup module installation
WORKDIR /opt
ENV INSTALL_DIR=/usr/local/lib/python3.7/site-packages

ADD ./src/python_app /opt/python_app
ADD ./setup.py /opt/setup.py

ADD README.md /opt/README.md
RUN python setup.py python_package
RUN python setup.py install

# Move python package and clean up
ADD ./src/python_app /usr/local/lib/python3.7/site-packages/python_app
RUN rm -rf /opt/*

ENTRYPOINT ["python-app-exec"]



Configure a shell alias and run the application

Now that the container has been built, we can configure a shell alias to start interacting with the new app. If you followed the previous post you will be adding the alias in the $HOME/.aliases directory. If you don’t have an alias file, you can add this to your shell’s rc file and source it when finished.

alias python-cli-utility='/usr/bin/docker run --rm -it python-cli-utility'
# Debug variant that drops you into a shell inside the container
alias python-cli-utility-shell='/usr/bin/docker run --rm -it --entrypoint /bin/sh python-cli-utility'

With the file updated, source it to pick up the new aliases. Variations of the source command:

. $HOME/<your_alias_file>
source $HOME/<your_alias_file>

You can now run the application with this command (your user needs permission to run docker directly, since prefixing the alias with sudo would bypass it):

python-cli-utility -u nullconfig get | jq .[] -c | grep microservice | jq .

With a list of users, you can also iterate over it with a command like this:

for user in {nullconfig,ansible,prometheus}; do python-cli-utility -u $user get | jq .[] -c; done

GitHub rate limits unauthenticated API calls, so be careful with this one.

So there we have it. We took an example of a Python utility that could theoretically be maintained in house, containerized it, and then called it from the command line through a shell alias. We then wrapped it with bash to loop over the same command, passing in a new argument on each iteration.
