Unifying Organization-Wide CI/CD Workflows: Establishing a Centralized Repository for Azure Pipelines

Organizations today place a strong emphasis on streamlining their deployment processes and expect to ship their applications within minutes. To achieve this, it has become pivotal for DevOps engineers to adopt a top-of-the-line Continuous Integration and Continuous Delivery (CI/CD) approach.

Azure Pipelines (part of Azure DevOps) is one tool that delivers on this goal. In this blog post, we will look at the following topics:

  • Understanding Azure Pipelines
  • Why use Azure Pipelines?
  • Key concepts
  • Classic vs YAML
  • Templates
  • Centralized repository setup

The Motivation behind Centralized Repository Implementation

The idea for a centralized repository setup for Azure Pipelines sparked while I was working on multiple projects within the same organization. Before this setup, whenever I had to create a new pipeline configuration, I found myself copying and pasting files and templates from existing pipelines, which led to a lack of uniformity and reusability. Subsequent updates complicated matters further, introducing discrepancies between pipeline template files across pipelines and making the setup inconsistent and difficult to maintain. It also lacked a unified, standardized approach to pipeline configuration across the organization’s applications.

After understanding these problems and researching possible solutions, I came up with this centralized Azure pipeline repository implementation. As you read through the blog below, you'll discover how to seamlessly incorporate this system and elevate the efficiency of your CI/CD workflows.

Before we dive in, I would like to highlight that this blog's scope extends beyond Azure Pipelines users and is relevant to a broader CI/CD audience. It is designed to provide insights into the concept of Pipeline as Code (PAC) in a broader sense, helping anyone interested in streamlining their CI/CD processes and enhancing their workflow. Whether you're using Jenkins, Travis CI, GitLab CI/CD, CircleCI, or any other CI/CD tool, the principles discussed here apply universally. This blog is not limited to Azure cloud users either: Azure Pipelines is cloud agnostic, offering the versatility to deploy applications to multiple cloud platforms and to on-premises environments.

Understanding Azure Pipelines

Azure Pipelines is a cloud-based CI/CD platform provided by Microsoft as part of their Azure DevOps services. It allows development teams to automate their software delivery processes by offering a flexible and scalable environment to build, test, and deploy applications across a variety of platforms and environments. With Azure Pipelines, developers can integrate seamlessly with their preferred source control system, whether it be Azure Repos, GitHub, or Bitbucket to trigger builds and deployments based on code changes. This enables teams to achieve faster time-to-market, reduce manual intervention, and ensure consistent quality throughout the software development lifecycle.

Why use Azure Pipelines?

Azure Pipelines is one of the top CI/CD choices out there, offering quick and easy solutions with a variety of features. And when it comes to deploying applications to the Azure cloud, it is arguably the best tool due to its seamless Azure integration.

Azure Pipelines features

Microsoft itself uses Azure Pipelines internally in its CI/CD workflows. The Microsoft .NET Engineering Services team, which supports a large and complex set of open-source projects including .NET Core, uses Azure Pipelines for its workflow. Considering the sheer scale and complexity of these projects, and the continuous work required to build, test, and ship them so they are available to developers, this demonstrates the power and flexibility of Azure Pipelines. (Source)

Key concepts overview


  • A trigger is an action that starts the pipeline execution. Triggers can be configured for manual runs, repository pushes, scheduled intervals, or the completion of prior builds.
  • A pipeline outlines the progression of the continuous integration and deployment process for your application. It comprises one or more stages that orchestrate the execution of testing, building, and deployment tasks.
  • A stage serves as a logical partition within the pipeline structure, often representing distinct roles like build, QA, and production. Each stage holds one or more jobs, and when multiple stages are configured, they proceed in sequence by default.
  • A job is a unit for a group of steps that run sequentially. A job runs on an agent, whether Microsoft-hosted or self-hosted. Every pipeline has at least one job.
  • An agent is a server or compute resource that executes the steps within a job. Microsoft provides its own Windows and Ubuntu hosted agents, but we can also set up our own agent by running the agent software on our server. Which agent to choose depends on how we want to configure our pipeline.
  • A step is the smallest fundamental unit of a pipeline. For example, a pipeline might consist of build and test steps. A step can be a task or a script.
  • A task is a pre-packaged script that performs an action, such as invoking a REST API, deploying to an Azure App Service, or publishing a build artifact.
  • An artifact is a collection of files or packages published by a run. Artifacts are made available to subsequent tasks, such as distribution or deployment.
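
To make the hierarchy of these concepts concrete, here is a minimal YAML skeleton showing how they nest; the stage, job, and artifact names are illustrative only:

```yaml
trigger:                    # starts the pipeline on pushes to main
  - main

stages:                     # a pipeline contains one or more stages
  - stage: Build
    jobs:                   # a stage contains one or more jobs
      - job: BuildJob
        pool:
          vmImage: 'ubuntu-latest'        # Microsoft-hosted agent
        steps:              # a job is a sequence of steps
          - script: echo "A step can be a script..."
          - task: PublishBuildArtifacts@1 # ...or a pre-packaged task
            inputs:
              pathToPublish: '$(Build.ArtifactStagingDirectory)'
              artifactName: 'drop'        # the published artifact
```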


Classic vs YAML

There are two ways to create an Azure Pipeline:

  1. Classic editor (GUI)
  2. YAML

1. Classic Editor

The Classic editor is a GUI provided by Azure DevOps for setting up a pipeline: you add a list of tasks and change settings and configurations directly in the Azure DevOps portal. It offers a sense of familiarity and is less daunting at first glance. It doesn’t require you to write any code, which reduces the learning curve, particularly for individuals without a development background, and therefore makes it easier to use and set up. In the long run, however, it can become difficult to maintain if you have many pipelines, and it is now the old way of doing things and is being deprecated.

Representation of a classic setup for a simple .NET core application


Pros of Classic Pipeline:

  • User friendly – The Classic pipeline offers a conventional visual interface, which can be less intimidating on first encounter, and provides pre-defined CI/CD task sets for plenty of languages, making for a friendly experience.
  • Easy to set up and configure – Setting up Classic pipelines requires no code, so there is less of a learning curve and setup is quick. It also provides guided configuration with pre-built tasks, which makes configuration easy too.

Cons of Classic Pipeline:

  • Deprecated – Classic pipelines are the old approach and are now deprecated, which means they will receive fewer updates, no new features, and limited support over time, impacting their long-term viability.
  • Limited reusability – Reusing them across projects can still require significant manual adjustments due to the lack of code-based reconfigurability.
  • Lacks proper version control – Classic pipelines are not stored as code in version control systems like Git, which makes it harder to track changes and collaborate effectively.
  • Accidental deletion risk – If your pipeline is accidentally deleted from the Azure DevOps portal, you lose it forever.


2. YAML (Recommended)

A YAML ("YAML Ain't Markup Language") based pipeline is the newer, industry-standard way of setting up a pipeline. Here the pipeline code is written in YAML syntax and stored in the repository, giving you the advantages of version control and development-like practices for your pipelines. It can be a bit difficult to set up at first, but Azure DevOps provides a task assistant that helps you find the tasks you need, converts the task UI into YAML code, and adds it to the file. There is some learning curve, as working with YAML is not easy at first, but once you are up to speed it is more efficient and more reusable than the Classic approach.

Representation of a YAML pipeline for a simple .NET core application

Pros of YAML Pipeline:

  • Reusability – A YAML pipeline can be broken down into templates that can be reused across different pipelines and projects, promoting quicker setup and reducing duplicated effort.
  • Version control – Because YAML files are stored in a version control repository, users can revert their code, view change history, and track commits.
  • Easier collaboration – Collaboration becomes easy because pipelines are stored as code, which enables development-like branching and merging strategies and allows multiple developers to work on a pipeline at the same time. It also enables code reviews, pull requests, and in-code comments.
  • Easy migration – If you decide to migrate from one repository to another, or between Azure DevOps organizations, the process is often simpler with YAML.
  • Standardization – YAML pipelines encourage standardization by providing a consistent way to define pipelines across your organization. This reduces the likelihood of configuration errors and misunderstandings.

Cons of YAML Pipeline:

  • Learning curve – It can be challenging for teams that are new to YAML and Azure Pipelines concepts, so it may take some time before they become proficient at creating YAML pipelines.
  • Does not support some features – The YAML pipeline does not support some features that the Classic pipeline supports, such as task groups and deployment gates.

Who Wins?

While Classic pipelines offer a friendly interface for newcomers, YAML pipelines emerge as the clear victor. With a version-controlled code structure, streamlined collaboration, and a reusable approach, they are the straightforward winner.

From my experience, when comparing previous projects set up in Classic with the current YAML setup, I can certainly say that it proved to be a better approach. Not only did it solve the lack of version control and code reusability, but it became a clear and standard approach for setting up the pipeline. If you are already using Classic pipelines in your project setup, then I would highly recommend that you switch to YAML. Going forward in this blog, we will only consider YAML pipelines as our CI/CD approach.

YAML Templates

Let's take an example of the following YAML pipeline, where we build and deploy a simple .NET Core application to an Azure App Service:

azure-pipelines.yml
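
Below is a minimal sketch of such a pipeline; the service connection name, app service name, and agent pool are placeholder assumptions, not values from the original setup:

```yaml
# Single-file pipeline: build a .NET Core app and deploy it to an Azure App Service.
trigger:
  branches:
    include:
      - main

variables:
  buildConfiguration: 'Release'

stages:
  - stage: Build
    jobs:
      - job: BuildJob
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: DotNetCoreCLI@2
            displayName: 'Restore'
            inputs:
              command: 'restore'
          - task: DotNetCoreCLI@2
            displayName: 'Build'
            inputs:
              command: 'build'
              arguments: '--configuration $(buildConfiguration)'
          - task: DotNetCoreCLI@2
            displayName: 'Test'
            inputs:
              command: 'test'
          - task: DotNetCoreCLI@2
            displayName: 'Publish'
            inputs:
              command: 'publish'
              publishWebProjects: true
              arguments: '--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)'
          - publish: $(Build.ArtifactStagingDirectory)  # publish the build artifact for the release stage
            artifact: drop

  - stage: Release_Dev
    dependsOn: Build
    jobs:
      - deployment: DeployDev
        environment: 'Dev'                 # assumed Azure DevOps Environment name
        pool:
          vmImage: 'ubuntu-latest'
        strategy:
          runOnce:
            deploy:
              steps:                       # the 'drop' artifact is downloaded automatically
                - task: AzureWebApp@1
                  inputs:
                    azureSubscription: 'my-azure-connection'  # assumed service connection
                    appName: 'my-app-dev'                     # assumed app service name
                    package: '$(Pipeline.Workspace)/drop/**/*.zip'
```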


Here we have divided the pipeline into two stages: build and release. In the build stage, we run the dotnet restore, build, test, and publish tasks, and at the end publish the build artifact for the release stage. In the release stage, we download the artifact and deploy it onto the Azure App Service. We have also added a pipeline trigger that fires when a commit or a PR merge happens on the main branch, so the pipeline deploys the latest changes to the Dev app service.

Now, what if we want to add release deployments for other environments in the same pipeline? New environment release stages would need to be added, like the Dev release, each with a ‘DependsOn’ property pointing at the build stage. This results in duplicated code with only a few value changes. Similarly, suppose you have an identical application in the repository and must set up a new pipeline for it. Here too, you would copy the same pipeline code and paste it with different values.

The ideal coding practice is to make code clean, readable, and reusable, and the same practice should be followed for pipelines. Instead of writing long and repetitive YAML code, the best practice is to break it down into multiple standard YAML templates, with values passed to these templates as parameters. This way the pipeline code can be reused and is easier to configure and set up. If we switch to templates in the above example, the pipeline will look like this:

azure-pipelines.yml
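
A sketch of the root file once it only composes templates; the parameter names and app service variables are assumptions:

```yaml
# Root pipeline: composes reusable templates stored in the same repository.
trigger:
  branches:
    include:
      - main

variables:
  - template: variables.yml           # shared variable template

stages:
  - template: stage_build.yml
    parameters:
      buildConfiguration: '$(buildConfiguration)'

  - template: stage_release.yml
    parameters:
      environmentName: 'Dev'
      appServiceName: '$(devAppServiceName)'
      dependsOn: 'Build'

  - template: stage_release.yml       # Stage and Prod reuse the same template
    parameters:
      environmentName: 'Stage'
      appServiceName: '$(stageAppServiceName)'
      dependsOn: 'Release_Dev'
```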

stage_build.yml
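
A sketch of the reusable build stage template (the parameter name is an assumption):

```yaml
# stage_build.yml – reusable build stage for .NET applications.
parameters:
  - name: buildConfiguration
    type: string
    default: 'Release'

stages:
  - stage: Build
    jobs:
      - job: BuildJob
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: DotNetCoreCLI@2
            displayName: 'Restore'
            inputs:
              command: 'restore'
          - task: DotNetCoreCLI@2
            displayName: 'Build and test'     # dotnet test builds, then runs tests
            inputs:
              command: 'test'
              arguments: '--configuration ${{ parameters.buildConfiguration }}'
          - task: DotNetCoreCLI@2
            displayName: 'Publish'
            inputs:
              command: 'publish'
              publishWebProjects: true
              arguments: '--configuration ${{ parameters.buildConfiguration }} --output $(Build.ArtifactStagingDirectory)'
          - publish: $(Build.ArtifactStagingDirectory)
            artifact: drop
```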

stage_release.yml
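
A sketch of the reusable release stage template; approvals come from the Azure DevOps Environment rather than the YAML itself, and the service connection name is an assumption:

```yaml
# stage_release.yml – reusable release stage, parameterized per environment.
parameters:
  - name: environmentName
    type: string
  - name: appServiceName
    type: string
  - name: dependsOn
    type: string
    default: 'Build'

stages:
  - stage: Release_${{ parameters.environmentName }}
    dependsOn: ${{ parameters.dependsOn }}
    jobs:
      - deployment: Deploy
        environment: ${{ parameters.environmentName }}  # approvals/checks are configured on this Environment
        pool:
          vmImage: 'ubuntu-latest'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  inputs:
                    azureSubscription: 'my-azure-connection'  # assumed service connection
                    appName: ${{ parameters.appServiceName }}
                    package: '$(Pipeline.Workspace)/drop/**/*.zip'
```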


variables.yml
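
A sketch of the shared variable template (all values are placeholders):

```yaml
# variables.yml – single place for values shared across the pipeline.
variables:
  buildConfiguration: 'Release'
  devAppServiceName: 'my-app-dev'
  stageAppServiceName: 'my-app-stage'
  prodAppServiceName: 'my-app-prod'
```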

Now we have made a couple of changes and broken the previous pipeline setup down into reusable templates. First, we moved all the variables into a variables.yml template and referenced it in the root pipeline file. This keeps all the variables in one place that is easy to access and update. Next, we created separate build and release template files and passed values to them via parameters. A separate build template lets us use the same template for other application pipelines, and the same template can also be used to build Azure Functions applications. The release template can be reused for multiple environments by passing different parameter values. In our case, we have three environments (Dev, Stage, and Prod), where Dev is deployed automatically once the build succeeds. For Stage and Prod, an approval step is added so that a selected group of people must approve before the actual release step triggers. This is done by creating an Environment in Azure DevOps and assigning the people who can approve or reject the release.

As a result of these improvements, our pipeline setup has become significantly cleaner and far more reusable than before. The use of reusable templates optimizes efficiency and maintenance, making it an invaluable asset for managing complex pipelines. You could break the stage templates down further into separate job and even task templates, making the pipeline fully templatized and parameterized from top to bottom, but for the sake of keeping this blog simple and informative, I stopped at stage templates.

Centralized Repository Setup

Switching to templates certainly improved the quality of the pipeline, but it still has one drawback: the templates and changes above are limited to that repository only. If we need to set up a new pipeline in a different repository, or establish a standard CI/CD process throughout the organization, we are back to copy-pasting the same pipeline code from this repository, which is exactly what we want to avoid. A better solution is to keep all these templates in one centralized repository and reference them from the root pipeline file (azure-pipelines.yml) in each repository. This repository should not be limited to .NET applications; it can serve all the other applications used within the organization by breaking them down into stage, job, and task templates.

Let's look at how this will change for the above example:

  1. Move all the templates into a dedicated pipeline repository, so that the repository looks like this:

Pipeline repository template structure
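
A possible layout for the central repository; the folder structure and repository name are assumptions:

```
pipeline-templates/
├── stages/
│   ├── stage_build.yml
│   └── stage_release.yml
└── variables/
    └── variables.yml
```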

  2. In the azure-pipelines.yml, add the ‘resources.repositories.repository’ block for this repository:

azure-pipelines.yml repository
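
A sketch of that block; the project and repository names, and the ‘templates’ alias, are assumptions:

```yaml
resources:
  repositories:
    - repository: templates                # alias used when referencing template paths
      type: git                            # 'git' means an Azure Repos repository
      name: MyProject/pipeline-templates   # assumed project/repository name
      ref: refs/heads/main                 # pin to a branch (or a tag for versioning)
```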

  3. Finally, to use these templates as a reference, use ‘@’ with the repository alias at the end of the template path:

azure-pipelines.yml stages
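
The stages section of the root file would then reference templates from the central repository like this (paths follow the assumed layout above):

```yaml
variables:
  - template: variables/variables.yml@templates  # variable template from the central repo

stages:
  - template: stages/stage_build.yml@templates
    parameters:
      buildConfiguration: '$(buildConfiguration)'

  - template: stages/stage_release.yml@templates
    parameters:
      environmentName: 'Dev'
      appServiceName: '$(devAppServiceName)'
      dependsOn: 'Build'
```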


And that’s it; those are all the changes needed to migrate to a central repository. During the pipeline definition parsing phase, Azure DevOps identifies and resolves the template references (those with "@" symbols) and performs the necessary checks and validation. The platform then locates the referenced templates and expands their content into the main pipeline configuration. When the pipeline executes, it performs the steps sequentially based on this resolved configuration.


Conclusion

After switching to this setup in my current organization, I can certainly say that it significantly improved pipeline setup and configuration: everything has become cleaner, more efficient, and easier to work with. I understand that the whole setup, from YAML file creation to centralized templates, can feel overwhelming at first, but that is the point of Pipeline as Code. Like any other application or codebase, it starts small and grows gradually. So will this setup: you treat templates like code, working on feature branches, dividing work into sprints, merging through PRs, and finally, once ready, consuming the templates from the main branch in the root pipeline file. The same applies when a change is needed in a template that is referenced by all the pipelines: you start the change in a feature branch, test it against each pipeline, and once everything works, merge it into the main branch. That is how a Pipeline as Code setup should be built. To outline the principal advantages of this setup, here are a few:

  • Standardized and reusable pipeline setup
  • Templates that are efficient and easy to create and maintain
  • Easy collaboration with teammates
  • Version control
  • Cross-project/application template setup
  • Gradual growth and scalability
  • Quick adoption of best practices for CI/CD workflows

The idea of templates and a unified setup is not limited to Azure Pipelines. If you are working with any other CI/CD tool and believe a similar setup could improve your current DevOps structure, I highly recommend you start thinking about converting to it. To summarize, a standardized, one-stop pipeline-as-code setup isn't just about convenience: it ensures your DevOps processes remain agile, secure, and adaptable, and become a unified process throughout your organization.

Acknowledgement

This piece was written by Ameya Pawar, Senior DevOps Engineer and Team Lead at Encora.


About Encora

Fast-growing tech companies partner with Encora to outsource product development and drive growth. Contact us to learn more about our software engineering capabilities.
