Jenkins Pipeline on your Local Box to Reduce Cycle Time

I recently got the chance to present at Jenkins World 2017 in San Francisco with Luca Milanesio from GerritForge. The talk was about how we used Jenkins to shorten the software development cycle in our team (video of the talk can be found here).

The idea

Delegating logic to clients is not a new pattern in software development. Having powerful machines in the hands of end users, for example, allowed complex and resource-consuming logic to be shifted to browsers towards the end of the 90s.

Why not apply the same logic to our CI/CD? The tagline of the latest iMac Pro was “18 cores in an iMac. No, that’s not a typo”. Why not leverage that power, instead of flooding the central server with multiple potentially broken builds? We gave it a try, and we ended up with a fascinating discovery: it solved multiple problems we were facing in our team.

Local and central pipeline

We created what we called a local pipeline and a central pipeline. The former is a Jenkins pipeline running on Docker and hosted on each developer’s machine, building the changes committed to the local git repository. The latter is the main Jenkins pipeline hosted on the central CI/CD server.
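As a rough sketch, the local Jenkins can be started as a throwaway Docker container with the developer’s repository mounted into it (the container name, port and paths below are illustrative assumptions, not our exact setup):

```
# Hypothetical sketch: run a disposable local Jenkins in Docker,
# mounting the developer's repository so the local pipeline can build it.
docker run -d --name local-jenkins \
  -p 8080:8080 \
  -v "$HOME/myproject":/var/jenkins_home/workspace/myproject \
  jenkins/jenkins:lts
```

Because the container is disposable, wiping and recreating the local pipeline is a single `docker rm`/`docker run` away.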

The main idea is that the local pipeline pushes changes to the central git repository, triggering the central pipeline, only once a build passes locally. Think of it as a pre-wash cycle on your washing machine. The local pipeline creates back pressure on the central server, which gets involved in the CI/CD process only once the local build is successful.
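A minimal sketch of what such a local Jenkinsfile could look like (the stage names, build command and `central` remote are assumptions for illustration, not our exact script):

```
// Hypothetical local pipeline: build and test the local clone, then
// push to the central repository only if everything passed. In a
// declarative pipeline, a failing stage stops the run, so the push
// stage is reached only on success.
pipeline {
    agent any
    stages {
        stage('Build & Test') {
            steps {
                sh './gradlew clean test'   // assumed build command
            }
        }
        stage('Push to central') {
            steps {
                sh 'git push central HEAD:refs/heads/master'
            }
        }
    }
}
```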

It is important to note a couple of details:

  • The build on the local pipeline starts as soon as a commit is made locally, without the need for a git push. This leads to a further reduction in cycle time, as there is no need to wait for a push to trigger the build
  • The git push to the central repository happens automatically from the local pipeline itself when a build is successful. The developer doesn’t have to care about it, avoiding classic human mistakes like: “I forgot to run the tests locally” or “I cannot find your branch… oops… I forgot to push it!”
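The commit-triggered build in the first point can be wired up with a plain git hook; the Jenkins URL, job name and token below are placeholders (alternatively, the local Jenkins job can simply poll the local repository):

```
#!/bin/sh
# .git/hooks/post-commit -- hypothetical sketch
# Notify the local Jenkins that a new commit exists, so the local
# pipeline starts immediately without waiting for a push.
curl -s -X POST "http://localhost:8080/job/myproject/build?token=LOCAL_TOKEN"
```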

The problems we solved

Scaling the number of builds served by a single Jenkins master is a common issue, and this approach is one way to resolve it. It is not the only problem we tackled; here is what else we managed to fix:

  • Non-repeatable builds: local builds on developers’ laptops are typically influenced by cached dependencies and leftover files generated by the IDE. Similarly, the central pipeline may be biased by different levels of compiler and dependency resolution or, even worse, influenced by statically allocated slaves.
    Both problems lead to non-repeatable builds, which are a trusted CI/CD pipeline’s worst enemy. By building both local and central pipelines on top of standard, shared, disposable Docker containers, we can make sure that every build is always executed with the same compiler and dependency resolution. No more “works on my local box” excuses from developers.
  • Merging of companies: the merger of the YOOX and Net-A-Porter groups has naturally affected technical aspects of how we work. Part of this was the deduplication of tools and systems used by the two companies, including the CI/CD pipeline and its team. During the transition period we needed to create a unique central CI/CD pipeline so we could continue working on our projects. The creation of the local pipeline allowed us to keep working on our project as a team; once the central pipeline was ready, it was just a question of chaining the two together.
  • One master cannot suit all: this is a common problem for a team managing a central CI/CD pipeline; a single pipeline cannot suit the needs of all teams. Different teams might need different plugins, and there are often incompatibility issues between them. Having a local pipeline gives each team the flexibility to manage their own Jenkins setup without stepping on each other’s toes. It also allows teams to experiment with plugins or a particular Jenkins setup in isolation, before promoting them to the central pipeline. Power to the people!
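Because each local Jenkins is just a Docker image, a team can pin its own plugin set in the image definition. A hedged sketch, assuming a recent jenkins/jenkins base image (older images used install-plugins.sh instead of jenkins-plugin-cli; the file paths are the image defaults):

```
# Hypothetical per-team Jenkins image carrying its own plugin set
FROM jenkins/jenkins:lts

# plugins.txt lists one plugin id per line, e.g. "workflow-aggregator"
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt
```

Each team versions its own plugins.txt, so plugin experiments never affect another team’s setup.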
  • Need for fast feedback: user-acceptance testing in an integrated environment is a common problem when working on a large-scale application. The classic scenario is a dedicated server where people need to “queue up” to deploy the artifact of the feature they are working on and show it to the relevant stakeholders. To speed up this process, we tweaked our local pipeline to spin up a Docker container with the feature code connected to the upstream services. The application is easily accessible from a hyperlink exposed in the Jenkins build description (check the code here).

    This allows us to speed up the overall software development cycle by giving stakeholders the ability to test multiple features in isolation. Developers get feedback earlier in the process, *even before* their code hits the target Git repository. This is what we call Just In Time UAT, and it has proved useful when the team has been under pressure to deliver multiple features in parallel with challenging timelines.
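A sketch of the stage behind this idea (the image name, port and URL are made up for illustration; the real code is in the repository linked above, and rendering HTML in the build description depends on the configured markup formatter):

```
// Hypothetical "Just In Time UAT" stage: run the freshly built feature
// in a container and surface a link to it in the build description.
stage('JIT UAT') {
    steps {
        sh 'docker run -d --name feature-app -p 9090:8080 myteam/feature-app:latest'
        script {
            currentBuild.description =
                '<a href="http://localhost:9090">Try this feature</a>'
        }
    }
}
```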

Last but not least, as a by-product of this approach, we can keep the entire Jenkins server Dockerfile definition in the same repository and branch as the code, together with the local and central Jenkins pipeline scripts. This links code and target deployment environment, speeding up the on-boarding of new developers and truly embracing the aims of DevOps.
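Concretely, the repository can carry the application and its pipeline definitions side by side; a hypothetical layout (file names are illustrative, not our exact ones):

```
myproject/
├── src/                  # application code
├── Jenkinsfile.local     # local pipeline script
├── Jenkinsfile.central   # central pipeline script
└── jenkins/
    └── Dockerfile        # local Jenkins server definition
```

A new developer clones one repository and gets the code, the pipelines and the build environment in a single checkout.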


This is not the end of our DevOps adoption journey; it is just the beginning of a continuous learning and improvement exercise. The ideas explained and experimented with in our project are the result of meticulous daily data collection and analysis to understand where the bottlenecks of our pipeline were located.

The next steps are to extend the scope to other parts of our pipeline that need acceleration. One of them is the adoption of a proper code review tool (e.g. Phabricator or Gerrit Code Review), which would allow us to log the time developers spend interacting with the Git repository. We are also missing an external static and dynamic code quality tool, which is typically linked only to the central pipeline but could be highly beneficial in the local one as well.

This entry was posted in Continuous Delivery, Events, Software Engineering by Fabio Ponciroli. Bookmark the permalink.

About Fabio Ponciroli

Fabio is a Senior Software Engineer at Yoox Net-A-Porter Group, where he works in one of the backend teams mainly responsible for the catalog API used by the company’s various e-commerce sites. He has extensive experience working with Perl, NodeJS, Scala and their ecosystems. He is originally from Milan, Italy, where he got his master’s degree in Telecommunications Engineering at the Polytechnic of Milan. He spent a number of years working in Milan as a consultant in the telecommunications industry, for various companies including Vodafone, H3G and FastWeb. In 2007 Fabio moved to London and has since worked for different companies, from start-ups to corporates. Github: barbasa
