Continuous integration for our native applications with Pygmy


Cloud & automation developer

11 May  ·  11 min read

Building native apps comes with a slew of configuration options and complexities. This is why we built Pygmy, another one of our automation monkeys!

At November Five, we build a lot of native applications. When doing so, there are quite a few configuration options to take into account: build environment, API environment, provisioning profiles, developer accounts, signing certificates, app names and icons, …

These options might differ depending on the phase the project is in: you want beta builds signed for distribution to the product team during development, but you need production builds that can be uploaded to the app stores.

In general however, each of our builds needs to go through the same basic steps:

  • Sync translations from PhraseApp
  • Install dependencies using the platform’s dependency manager (e.g. CocoaPods)
  • Create build(s) depending on environment and settings
  • Rename artifacts following naming conventions
  • Run tests and checks (linters, unit tests, UI tests, monkey tests)
  • Publish builds
  • Notify product team

Now, to eliminate mistakes, we can’t rely on a developer’s laptop configuration to perform such tasks. A single difference in one config file or a change in the order of one step can cause production issues that we didn’t foresee. We want every build going out to one of our clients to meet the same high quality standard, so we implemented a continuous integration pipeline to automate those tasks.

The birth of the monkey

One of the first problems we bumped into, back in 2014 when this project started, was the burden of managing job configurations. Within November Five, we currently maintain over 1,250 git repositories, each containing several branches. This results in thousands of jobs to be configured.

Managing this by hand would be an error-prone (and probably full-time) job.

Also, back then, building iOS applications on the command line wasn’t all too easy. iOS development has always been facilitated by a great IDE called Xcode, but once you move to the terminal to make a build, a lot of poorly documented parameters and build steps that change every year start to pop up. Not to mention the hassle of running multiple Xcode versions concurrently, or installing and updating provisioning profiles and signing certificates.

This is when Pygmy came to life.

Pygmy is not only the smallest (and cutest!) monkey in the world, it’s also our tool to facilitate everything native when it comes to building, testing and publishing our builds.

Branching strategy

Before we dive deeper into what Pygmy does, we’ll first highlight the branching strategy our developers are following for every project.

Branching allows developers to easily collaborate inside of one central code base. When a developer creates a branch, the version control system creates a copy of the code base at that point in time. Changes to the branch don’t affect other developers on the team.

At November Five we use the Gitflow Workflow, a branching model designed around the project release. Features are created in isolated branches while upcoming releases are stabilised by using release branches. Using a dedicated branch to prepare these releases makes it possible for one team to polish the current release while another team continues working on features for the next release.
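The core Gitflow moves look like this in plain git commands; the branch names and version number below are illustrative, and the snippet creates a throwaway repository so it can be run as-is:

```shell
# A minimal Gitflow walkthrough in a throwaway repo.
cd "$(mktemp -d)" && git init -q
git config user.email ci@example.com && git config user.name "CI"
git commit -q --allow-empty -m "initial commit"
main="$(git symbolic-ref --short HEAD)"      # master or main, depending on git version
git checkout -q -b develop                   # long-lived integration branch
git checkout -q -b feature/login develop     # isolated feature work
git commit -q --allow-empty -m "add login screen"
git checkout -q develop
git merge -q --no-ff -m "merge feature/login" feature/login   # feature lands on develop
git checkout -q -b release/1.2.0 develop     # stabilise the upcoming release
git commit -q --allow-empty -m "release polish"
git checkout -q "$main"
git merge -q --no-ff -m "release 1.2.0" release/1.2.0         # ship the release
git tag 1.2.0
```

The `--no-ff` merges keep an explicit merge commit for every feature and release, which makes the history of parallel work visible.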


Using this workflow, we were able to increase our productivity with parallel development and maintain quality by stabilising our releases.

When defining what our CI system should do, we hook into this workflow to define which steps need to be executed and which artifacts need to be produced. For development happening on development or feature branches, for example, we don’t execute all (sometimes slow) lint checks and only produce builds for testing purposes. When a release branch is created, however, production builds are created and all tests and production checklists are executed.

CI Setup

Knowing all that, we can jump back to Continuous Integration: the process of integrating all code changes several times a day.

Each commit is verified by an automated process running on a build server, providing the product team with almost immediate feedback and allowing them to detect and fix issues early.

Our continuous integration setup consists of a Jenkins master node running on Ubuntu, two Jenkins slaves running on Mac OS X, and a single slave running on Windows 10. We’re using Mac OS X for running all build processes in order to stay as close as possible to our developers’ environments. For Apple’s closed ecosystem it’s also a must: you can only run Xcode on OS X.

Unfortunately, keeping software up to date in OS X isn’t always easily automatable with a config management tool such as Ansible. This means we regularly need to take down one node to perform maintenance and updates, install new software, etc. The multiple-slave setup allows us to keep on building during that downtime.

From repository to Jenkins job

Let’s talk monkey now. First and foremost, Pygmy can be considered a management tool for Jenkins job configurations. Among other things, Pygmy is responsible for discovering new project repositories on Bitbucket. If you read the previous blog post on Baboon, you know that these repositories are created automatically when a product is created.

We wanted to have the flexibility of being able to change the job configuration of each repository if needed, but at the same time provide developers with a framework in which they didn’t have to think about CLI commands or build steps. On top of that, we wanted to treat the job configuration like code, and wanted to be able to see its change history just as with any other source file.

To do that, we defined a file format that describes which build steps are needed to deliver the build to where it belongs: the jenkins.yml.

Pygmy reads the job pipeline definition from our jenkins.yml file in the project’s root directory. This file is written by our developers, and can be templatized using Jinja2, which allows for all kinds of cool stuff to happen. Jobs can be created conditionally, based on e.g. the current branch or environment (development / production), or can be easily rebranded based on variables in a separate jenkins.config.yml file. An example jenkins.yml file is given below.

So, for every project that contains a valid jenkins.yml file, a pipeline of Jenkins jobs is created.

This is conceptually similar to Jenkins’ new “Pipeline as Code” feature. Version 1.0 of Jenkins’ declarative pipeline syntax was released earlier this year.

Although Jenkins’ declarative pipeline syntax has a similar structure to Pygmy’s jenkins.yml templates, it does have some disadvantages, in our opinion. Because custom build steps are invoked through shell commands (or through Jenkins plugins, which we won’t go into), the validity of commands and argument definitions cannot be derived directly from the Jenkins file. This makes it more likely for developers to make mistakes that aren’t caught until the build actually fails.

Pygmy tries to solve this problem by integrating the jenkins.yml syntax and build command implementation more tightly. The arguments to build commands can be validated up front by parsing the jenkins.yml file. Build commands are written in Python (we love Python!), which allows for faster development than writing custom Jenkins plugins in Java.
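As a sketch of that up-front validation: each command type can declare which arguments it requires and accepts, and the parsed jenkins.yml is checked against that before anything runs. The registry below is hypothetical; Pygmy’s real command set and argument schemas are richer.

```python
# Hypothetical registry of build command schemas (illustrative names).
REGISTRY = {
    "phraseapp": {"required": set(), "optional": {"locales"}},
    "gradle": {"required": {"tasks"}, "optional": {"switches"}},
    "uploadtodropbox": {"required": {"files"}, "optional": set()},
}

def validate_command(command):
    """Return human-readable errors for one build command dictionary."""
    ctype = command.get("type")
    schema = REGISTRY.get(ctype)
    if schema is None:
        return ["unknown build command type: %r" % ctype]
    errors = []
    args = set(command) - {"type"}
    for missing in sorted(schema["required"] - args):
        errors.append("%s: missing required argument %r" % (ctype, missing))
    for unknown in sorted(args - schema["required"] - schema["optional"]):
        errors.append("%s: unknown argument %r" % (ctype, unknown))
    return errors

# A forgotten 'tasks' argument is reported before the build ever starts:
print(validate_command({"type": "gradle", "switches": "--stacktrace"}))
# → ["gradle: missing required argument 'tasks'"]
```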

While Jenkins’ Pipeline as Code feature is a good addition, these are the reasons we believe Pygmy allows us to streamline our processes more easily.

Each job has a name, a target platform, a list of build commands, and some optional configuration: a Slack channel, artifact paths to be archived, a job trigger, etc. Jobs can be triggered by git commits, using a fixed schedule, or by other jobs that have completed (this is how jobs form a pipeline).

The first job in the pipeline pulls/pushes the latest translations from/to PhraseApp, and commits any changes to git:


- platform: android
  name: Translations
  build_commands:
  - type: phraseapp
  trigger: fast

After translations are updated, the main job is triggered. This job builds the project using Gradle, adds a Git tag containing the build number, and uploads build artifacts to Dropbox:

- platform: android
  blocking_jobs: spen-0000-spencer-android-framework-.*
  trigger:
    type: job
    job: Translations
  slack_channel: {{ slack_channel }}
  build_commands:

  {% for project in builds %}
    {% for build in project.releaseBuilds %}
  - type: gradle
    switches: {{ project.releaseFlags }} {{ build.flags }}
    tasks: ":{{ project.project }}:lint{{ build.variant }}Release"
    {% endfor %}
  {% endfor %}

  - type: addgittag

  {% for project in builds %}
  - type: uploadtodropbox
    files: {{ project.project }}/build/outputs/apk/*.apk
  {% endfor %}

  {% for project in builds %}
  archive_artifacts: "{{ project.project }}/build/outputs/apk/*.apk"
  {% endfor %}

Finally, when the main build is finished, a job is triggered that runs an explore test on AWS Device Farm:

{% if 'exploring' in categories and exploring %}
- platform: android
  name: Explore
  build_commands:
  - type: gradle
    tasks: clean assembleStagDebug
  - type: devicefarm  # command type name assumed
    project_name: {{ devicefarm_code }}
    device_pool_name: "android-phone-tablet-4-6"
    app_artifact: "app/build/outputs/apk/app-stag-debug.apk"
  slack_channel: {{ slack_channel }}
{% endif %}

Inside the build commands

In addition to configuring the build pipeline on Jenkins, Pygmy acts as a build tool inside of those Jenkins builds. The same jenkins.yml file that was used to generate the pipeline is now parsed during the build. If the current job configuration on Jenkins is out-of-sync with the definition in jenkins.yml, Pygmy updates the job configuration and restarts the build. When up to date, the build commands defined in jenkins.yml are executed one by one. If a command terminates with a non-zero exit status, the build will fail, and the error will typically be posted to Slack.
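The build-phase behaviour described above can be sketched as a simple loop over shell commands; `notify_slack` below is a stand-in for the real notification step:

```python
import subprocess

def notify_slack(message):
    """Stand-in for posting a failure message to the project's Slack channel."""
    print("[slack] %s" % message)

def run_build(commands):
    """Run shell commands in order; stop and fail on the first non-zero exit."""
    for cmd in commands:
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            notify_slack("build failed at step: %s" % cmd)
            return False
    return True

# The second step fails, so the third is never executed:
print(run_build(["echo syncing translations", "false", "echo never runs"]))
```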


In continuous integration, we want to build often, and we want our builds to give us quick feedback. This is why it’s a good idea to arrange build commands in such a way that those most likely to fail are executed first. Unit tests are a common example of this.

Another way to speed up builds is by taking advantage of parallelism. Tasks like generating documentation can be executed in parallel from the main build, and multiple flavours of the same application can also typically be built in parallel.
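A minimal sketch of that parallelism, with `build_flavour` as a placeholder for the real (subprocess-heavy) per-flavour build step:

```python
from concurrent.futures import ThreadPoolExecutor

def build_flavour(flavour):
    """Placeholder for the real per-flavour build step."""
    return "app-%s.apk" % flavour

flavours = ["production", "staging", "development"]

# Independent flavours can be built concurrently; results keep input order.
with ThreadPoolExecutor(max_workers=len(flavours)) as pool:
    artifacts = list(pool.map(build_flavour, flavours))

print(artifacts)
# → ['app-production.apk', 'app-staging.apk', 'app-development.apk']
```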

While the jenkins.yml template allows our developers to design pipelines in the way they see fit, we also try to standardise their overall structure. For each platform, we have a default repository containing the jenkins.yml template for that platform, which Baboon forks at the start of every new project.

We also use Pygmy to enforce naming conventions in a number of places. Artifact file names, for instance, follow a common naming pattern that contains the repository name, branch, build number and commit hash. Before uploading build artifacts to Dropbox, Pygmy makes an API call to Baboon, who is in charge of the Dropbox folder structure, to figure out the correct path.
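As a purely hypothetical illustration of such a naming pattern (the exact format Pygmy uses isn’t shown in this post):

```python
def artifact_name(repo, branch, build_number, commit_hash, extension):
    """Compose an artifact file name from repo, branch, build number and hash.

    The pattern here is made up for illustration; it is not Pygmy's exact format.
    """
    safe_branch = branch.replace("/", "-")   # slashes don't belong in file names
    return "%s_%s_b%d_%s.%s" % (repo, safe_branch, build_number,
                                commit_hash[:8], extension)

print(artifact_name("spencer-android", "release/1.2.0", 57,
                    "9f2c1e4ab0d35c77", "apk"))
# → spencer-android_release-1.2.0_b57_9f2c1e4a.apk
```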


As you can see in the flowchart, the flow through the build pipeline usually goes something like this:

First, we inject environment variables into the build (API tokens, etc.). We then update translations and optimize images; if anything changed, we commit the changes to git and restart the build. Next, we run automated tests and generate unit test and coverage reports, and run linters and generate their reports.

With this done, we can build the actual project.

  • For iOS, build scripts are part of Pygmy and they invoke Xcode. This is similar to Fastlane, which we do use for certain subtasks. We’ll happily admit that Fastlane can work magic, but we always like to understand what’s under the hood – it’s how we learn. We also like to incorporate our own processes and naming conventions, which is easier with our in-house tool.
  • For Android, a gradle config is part of the project.
  • For Windows, we use a combination of NuGet, MSBuild, and Cake.
  • For Python, we create the package and upload it to S3 using S3PyPI.
  • And for static websites, we use a combination of npm, bower, gulp, and grunt.

When the build is done, we can run our automated release checklist using Capuchin (you’ll learn all about this next monkey in a future blog post).

In short, Capuchin checks for November Five-specific rules (such as ‘all debugging output should be disabled in production builds’ or ‘November Five license file should be present in every repository’). It will generate an HTML report on every build, and may fail the build when unacceptable rule violations are discovered.
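To give a flavour of checklist-style rule checking, here’s a toy sketch; the rule functions and their logic are invented for illustration and are not Capuchin’s actual implementation:

```python
# Each rule returns a violation message, or None when the rule passes.
def check_no_debug_output(build_config):
    if build_config.get("debuggable"):
        return "all debugging output should be disabled in production builds"

def check_license_file(repo_files):
    if "LICENSE" not in repo_files:
        return "November Five license file should be present"

def run_checklist(build_config, repo_files):
    """Collect violations from every rule; a non-empty list fails the build."""
    results = [check_no_debug_output(build_config),
               check_license_file(repo_files)]
    return [message for message in results if message]

print(run_checklist({"debuggable": True}, ["README.md"]))
```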

Pygmy then handles any other custom build steps, like adding git tags, as well as the post-build commands:

  • Upload build artifacts to S3, Dropbox, Fabric, App Store or Google Play
  • Publish test / lint / Capuchin reports to Jenkins job page
  • Run tests on AWS Device Farm
  • Post build status to Bitbucket API
  • Post build status or custom message to project’s Slack channel.
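For the Bitbucket step, build statuses go to Bitbucket Cloud’s commit build-status endpoint in the 2.0 REST API. The sketch below only assembles the request rather than sending it; the workspace, repository, and Jenkins URL are made up:

```python
import json

def build_status_request(workspace, repo, commit, state, build_url):
    """Assemble a Bitbucket Cloud commit build-status request (not sent here)."""
    url = ("https://api.bitbucket.org/2.0/repositories/%s/%s/commit/%s/statuses/build"
           % (workspace, repo, commit))
    payload = {"state": state,   # INPROGRESS, SUCCESSFUL or FAILED
               "key": "jenkins",
               "url": build_url}
    return url, json.dumps(payload)

url, body = build_status_request("novemberfive", "spencer-android",
                                 "9f2c1e4", "SUCCESSFUL",
                                 "https://jenkins.example.com/job/spencer/57/")
print(url)
```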


Monkeys! Monkeys everywhere!

In addition to building apps 24/7 on Jenkins, Pygmy is also used as a CLI tool by our developers.

Some of the current use cases include:

  • Validating jenkins.yml before committing (this can also be configured as a pre-commit hook)
  • Syncing assets from Dropbox, as delivered by the design team, to the project folder

The Pygmy CLI codebase is structured in a way that allows every team to write their own commands and distribute them as separate Python packages. This modular structure is achieved using so-called namespace packages in Python. All optional modules are placed into a pygmy.extras namespace package, which is shared among different source repositories. Developers can install pygmy-extras-common, containing the Dropbox sync command, on their laptops without having to install all Jenkins build functionality, which resides in pygmy-extras-jenkins. As a result, when a developer introduces an error in some optional module (before their first coffee), it won’t cause any Jenkins builds to fail.
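A self-contained sketch of how pkgutil-style namespace packages make this work: two separately installed distributions both contribute modules to pygmy.extras. The module names here are simplified stand-ins for the real pygmy-extras-* packages.

```python
import os
import sys
import tempfile

# Namespace declaration each distribution puts in its __init__.py files:
NS_INIT = "__path__ = __import__('pkgutil').extend_path(__path__, __name__)\n"

def make_dist(root, module_name, body):
    """Create <root>/pygmy/extras/<module_name>.py with namespace __init__ files."""
    pkg = os.path.join(root, "pygmy", "extras")
    os.makedirs(pkg)
    for directory in (os.path.join(root, "pygmy"), pkg):
        with open(os.path.join(directory, "__init__.py"), "w") as handle:
            handle.write(NS_INIT)
    with open(os.path.join(pkg, module_name + ".py"), "w") as handle:
        handle.write(body)

common = tempfile.mkdtemp()   # plays the role of pygmy-extras-common
jenkins = tempfile.mkdtemp()  # plays the role of pygmy-extras-jenkins
make_dist(common, "dropbox_sync", "def run():\n    return 'synced'\n")
make_dist(jenkins, "jenkins_build", "def run():\n    return 'built'\n")
sys.path[:0] = [common, jenkins]

# Both modules resolve, even though they live in different distributions:
from pygmy.extras import dropbox_sync, jenkins_build
print(dropbox_sync.run(), jenkins_build.run())
# → synced built
```

Because each distribution only extends the shared namespace, uninstalling (or breaking) one of them leaves the others importable.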

And that’s it – time for a banana!


Pygmy has seriously improved our native applications workflow – we’re quite proud of him, as we are of all our monkeys!

Does this monkey business sound good to you? We’re hiring! Check out all of our open positions here.
