The term Continuous Integration (CI) is no longer a fancy new topic in the industry. With the recent rise of services such as Travis CI or CircleCI, everyone can use a free CI server for their open-source projects or buy paid support for private ones.
Catching bugs before they reach customers has never been easier, provided you invest in some form of automated tests that run each time the code changes. Even though most of us have access either to the above-mentioned services or to a company CI server, sometimes a personal CI system may also be helpful.
What’s Wrong with External CI Servers?
To answer the question in the heading right away: there is nothing wrong with them per se. But the fact is, we could improve our workflow a bit by adding a personal CI as a local gatekeeper. The way CI servers are designed, they react when you push your changes to the repository, not sooner. Sometimes this means you push a broken commit into a branch someone else is working on. Even though this should not happen with gating systems such as Gerrit in place, not everyone is lucky enough to be using one.
The second thing that may make you consider a local CI is the build queue on a server. Even though CI aims for instant feedback, when too many people push their changes, build queues get saturated and you end up waiting in line, not knowing whether your changes actually pass the tests.
With a personal CI running on your own machine, you can do both quick checks and full-fledged testing as soon as you commit your code. Since you are the only person who submits tasks, you get an immediate response to your actions. This way, when you actually want to push your changes, you can be reasonably sure the tests will pass.
Let’s assume you are using Git as your version control system, since it is both really popular and quite easy to customize. One of the benefits Git offers in this case is its distributed architecture. To commit code, you do not need to connect to a remote server, which means you can run automated tests first and only then present your changes to the world.
Git offers the concept of hooks, which are basically programs run when a predefined event occurs. Some of these events are pre-commit, pre-push, post-commit, and update. They are described in the official documentation, and most of them should be self-explanatory. Programs running as hooks can be compiled executables or interpreted scripts; typically, shell scripts are preferred, as they can be quite easily extended. A return value of 0 means everything went fine and the hooked Git action can proceed. Any other return code means the action is aborted.
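This exit-code contract can be illustrated with a tiny self-contained sketch, where check_style is a hypothetical stand-in for a real linter:

```shell
#!/bin/sh
# Hypothetical stand-in for a real linter: fail when the input
# contains a tab character, succeed otherwise.
check_style() {
    case "$1" in
        *"$(printf '\t')"*) return 1 ;;
        *) return 0 ;;
    esac
}

# In a real pre-commit hook, the script's exit code decides the outcome:
# 0 lets the commit proceed, anything else aborts it.
if check_style "clean line"; then
    echo "commit would proceed"
else
    echo "commit would be aborted"
fi
```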
This way we can, for example, make eclint run each time we are about to make a commit, to check whether our files comply with EditorConfig. Just put the following into .git/hooks/pre-commit:

```shell
#!/bin/sh
eclint check $(git diff --cached HEAD --name-only)
```
Remember: on UNIX systems, hooks need to have the executable permission set. You can run:

```shell
chmod 755 .git/hooks/pre-commit
```
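To see exactly which files such a hook would inspect, you can experiment in a throwaway repository; `git diff --cached HEAD --name-only` lists only the staged files (the file names below are just for illustration):

```shell
#!/bin/sh
set -e
# Create a throwaway repository to experiment in
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
git commit -q -m "initial" --allow-empty

# One staged file, one unstaged file
echo a > staged.txt
echo b > unstaged.txt
git add staged.txt

# Only the staged file is listed, so only it would be linted
git diff --cached HEAD --name-only
```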
The previous scenario required us to have eclint already installed on our machine. Even though this use case can be pretty helpful, it still doesn’t save us from the dreaded “but it works on my machine” problem. To deal with that, we can tackle the problem from another perspective: prepare a separate Docker container with our application and run the tests there.
The approach will differ a little depending on whether you already use Docker in your project. If you do, building an image the usual way and running the tests in it should be enough. If your repository already contains some form of build automation, such as make, CMake, rake, or build.sh, it’s best to use it, thus staying aligned with other users of the same repository. If not, the following snippet may work (as long as you have a Dockerfile in the root of your repository and a Django application):
```shell
#!/bin/sh
TOPLEVEL="$(git rev-parse --show-toplevel)"
# Docker image names must be lowercase
APPNAME="$(basename "$TOPLEVEL" | tr '[:upper:]' '[:lower:]'):$(git rev-parse HEAD)"
# Build an image using the repository root as the context
docker build -t "$APPNAME" -f "$TOPLEVEL/Dockerfile" "$TOPLEVEL"
# Run the tests, saving the return code
docker run --rm "$APPNAME" python manage.py test
RC=$?
# Remove the testing image
docker rmi "$APPNAME"
# Exit with the tests' return code
exit $RC
```
If you don’t have an existing Dockerfile, there is a bit more work to do. We need to prepare the image manually and then run tests inside. Assuming a Django application with no external system dependencies, we could use a script like this:
```shell
#!/bin/sh
TOPLEVEL="$(git rev-parse --show-toplevel)"
# Container names cannot contain colons, so join the parts with a dash
APPNAME="$(basename "$TOPLEVEL" | tr '[:upper:]' '[:lower:]')-$(git rev-parse HEAD)"
# Start a long-running container to work in
docker run -d --name "$APPNAME" python:3.6.3-alpine3.6 /usr/bin/yes
docker exec "$APPNAME" mkdir -p /usr/src/webapp
docker cp requirements.txt "$APPNAME":/usr/src/webapp/
docker exec "$APPNAME" pip install -r /usr/src/webapp/requirements.txt
docker cp . "$APPNAME":/usr/src/webapp/
# Run the tests, saving the return code
docker exec "$APPNAME" python /usr/src/webapp/manage.py test
RC=$?
# Clean up the container
docker stop "$APPNAME"
docker rm "$APPNAME"
exit $RC
```
Saving the aforementioned script as .git/hooks/pre-commit will run the build-and-test procedure each time you are about to commit new changes. This may not be ideal for everyone, as some people like to commit work in progress locally and rewrite the history later on, just before publishing a branch for review. If this is you, you can save the hook as .git/hooks/pre-push instead. Alternatively, you can run git commit --no-verify to skip hook execution for a particular commit.
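The effect of --no-verify is easy to demonstrate in a throwaway repository with a hook that rejects every commit:

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo

# Install a pre-commit hook that always aborts the commit
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
exit 1
EOF
chmod 755 .git/hooks/pre-commit

echo hello > file.txt
git add file.txt

# The hook aborts a normal commit...
git commit -q -m "blocked" || echo "commit blocked by the hook"
# ...but --no-verify bypasses it
git commit -q --no-verify -m "allowed" && echo "commit went through"
```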
Always On Service
Now that you know how a simple CI can be created locally, let’s move forward with the implementation. If the idea of setting up hooks does not appeal to you, you can set up a different kind of personal CI by running a proper CI server. Systems such as Buildbot or Jenkins are available as Docker images. Setting them up is generally easy, but it gets more complicated if you want to build a proper multi-node configuration.
For example, a simple setup of buildbot can be achieved simply by running:
```shell
git clone https://github.com/buildbot/buildbot-docker-example-config
cd buildbot-docker-example-config/simple
docker-compose up -d
```
After that you can point your browser to http://localhost:8080 to check the status of the master. Keep in mind that, unlike in Jenkins, Buildbot’s configuration is not editable through the web UI. In order to use Buildbot with your projects, you need to supply it with a proper master.cfg, which can be located either in your filesystem (BUILDBOT_CONFIG_DIR) or in a tarball hosted on an HTTP server (BUILDBOT_CONFIG_URL). Depending on your viewpoint, this can be seen as a problem or as an advantageous feature. On one hand, quick changes to build pipelines cannot be made through the web; they need to be prepared separately as code. On the other hand, the entire pipeline can be version-controlled this way, and can itself be verified prior to its introduction.
One interesting feature of Buildbot is its Travis compatibility shim. By using it, you get the benefits of storing CI configuration along the source code. Contrary to the original Travis CI service, users of non-GitHub hosting solutions can take advantage of this approach as well. Any Git hosting supported by Buildbot is welcome. This means you could run your own local instance of a simple Git server and still benefit from Travis-like CI without sharing your source code with third-parties.
Buildbot is lighter on resources than Jenkins so it makes for a good unobtrusive first choice. However, should you prefer Jenkins, your own copy is only one command away:
```shell
docker run --name personaljenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home jenkins:alpine
```
This article showed that setting up a local CI can be quick and brings some benefits over a centralized CI. You can spot errors almost instantly, and you can customize which tests you want to run or which linters to employ prior to submitting your changes to the server. With the right approach, the overhead is minimal. Git and Docker can be especially helpful in providing a lightweight solution without wasting resources. I encourage you to try it with your own projects.
= = =
Piotr is an automation enthusiast who aims to replace all repeatable tasks with code. He exercises his urges working as a DevOps Enforcement Agent. Never without headphones around.