Automating your Image Pipeline: from Development to Staging to Production


by Nitheesh Poojary, Six Nines IT, Cloud Architect

In this solution guide, we will discuss Mutable Infrastructure vs. Immutable Infrastructure. We will also show you how to implement an image pipeline for an Immutable Infrastructure and provide an example of the tools needed. This is a detailed, step-by-step guide on how to build an Immutable Infrastructure pipeline on environments ranging from a DevOps workstation to a production deployment.

Managing and deploying infrastructure involves the following tasks:

  • Releasing new features and updates for your application  
  • Updating application dependencies
  • Applying security updates
  • Making configuration changes

Previously, we had to manually provision all of our servers. We would log into each server, run commands, install software, and copy code. Once one server was successfully provisioned, we would repeat the same manual process on every other server. As the number of servers grew, sysadmins started writing scripts to automate these tasks. As technology evolved, we started using configuration management tools to manage all of our infrastructure changes.

In the majority of DevOps deployments, we follow one of two distinct processes for automated infrastructure deployment and management:

1: Mutable Infrastructure: With a Mutable Infrastructure, software is installed and updated after the instance is launched. First, we launch the base image. Then we install and configure the required software packages and deploy our code. Once the new instance is added to the auto scaling group, any further changes are applied in place to the running group of servers. To maintain our configuration and push new changes to the servers, we typically use configuration management tools like Chef, Ansible, or Puppet; a rough sketch of this push model follows.
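To make the mutable model concrete, here is a minimal Ansible playbook sketch that patches and reconfigures servers that are already running instead of replacing them. The host group, template name, and package choices are hypothetical and not taken from this guide.

    # mutable-update.yml -- hypothetical sketch: apply changes in place to running servers
    - hosts: webservers             # assumed inventory group of already-running instances
      become: true
      tasks:
        - name: Apply pending package updates
          yum:
            name: "*"
            state: latest

        - name: Push the latest Apache configuration
          template:
            src: httpd.conf.j2      # assumed template shipped alongside the playbook
            dest: /etc/httpd/conf/httpd.conf
          notify: restart apache

      handlers:
        - name: restart apache
          service:
            name: httpd
            state: restarted

Every run of a playbook like this mutates the live servers, which is exactly what the immutable approach below avoids.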

Mutable Infrastructure Deployment


2: Immutable Infrastructure: Immutable Infrastructure is a more modern DevOps technique. It follows a process where every new deployment or configuration change requires a new image. The image is baked with the new changes before the new servers are launched, and is called a golden image or pre-baked image. Once a new instance created from the updated image is added to the auto scaling group, it is a fully functional server, ready to serve user requests.

Both processes are fully automated, and we can use the same configuration, build, management, and deployment tools for either. The main differences between the two are the deployment pipeline that is used and how we push updates to the production environment.

Immutable Infrastructure was adopted from Docker containers. When we deploy Docker containers, we create immutable Docker images. The idea of using a Docker container is to speed up the deployment process: containers can boot with lightning speed, but if configuration and code deployment take time after boot, that speed advantage is lost. With Immutable Infrastructure, we build the virtual machine or container image once, use it on one or more servers, and never log in to those servers to make changes. Only tested and functional images are deployed, and if any infrastructure changes are necessary, they are applied to the base image rather than to running systems. Therefore, you do not need to enable SSH or manually modify the servers, which increases security and lowers the chance of human error.

Immutable Infrastructure Deployment


Tools we use to build Image Pipelines

  • Vagrant
  • Packer
  • Chef-Solo/Ansible
  • Terraform
  • AWS Services
    • EC2
    • Auto Scaling
    • Systems Manager
    • CodeCommit
    • CodeBuild
    • CloudFormation

Requirements

  1. Install Vagrant on your local workstation. You can learn more about installing Vagrant in this guide: https://cloudacademy.com/blog/vagrant-chef-solo-ec2/
  2. Provision your infrastructure using Terraform and CloudFormation
  3. Create a repository named “ops-cookbooks” in CodeCommit
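Requirement 2 already calls for CloudFormation, so the repository in requirement 3 can be provisioned the same way. The following is a minimal CloudFormation sketch (the logical resource name and description are placeholders), not the exact template used in this guide:

    # codecommit-repo.yaml -- hypothetical sketch: create the cookbook repository
    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      OpsCookbooksRepo:
        Type: AWS::CodeCommit::Repository
        Properties:
          RepositoryName: ops-cookbooks
          RepositoryDescription: Chef cookbooks for the immutable image pipeline
    Outputs:
      CloneUrlHttp:
        # HTTPS clone URL, later used as the CodeBuild source location
        Value: !GetAtt OpsCookbooksRepo.CloneUrlHttp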

Technologies

Vagrant: Vagrant is a tool for building complete development environments. It can be used to create and configure lightweight, reproducible, and portable development environments. It is used here to test your cookbooks locally before pushing them to CodeCommit.


Packer: Packer is an open-source tool for automating the creation of golden images of your servers. Packer can use configuration management tools like Chef and Ansible to install and configure the required software. It is one of the most important components of an Immutable Infrastructure pipeline.

Chef or Ansible: You can use either Chef or Ansible as your configuration management tool for installing and configuring software on your server. In our case, we will be using Chef cookbooks.

EC2: The AWS service where your application code will be deployed.

Auto Scaling: The AWS service in which a new launch configuration is created for every build. Later, the new instances are added to your Auto Scaling group.

EC2 Systems Manager: EC2 Systems Manager helps automate tasks such as creating images and creating launch configurations. We use Systems Manager's built-in Automation workflows to accomplish complex tasks.

CodeCommit: This is Amazon’s version of GitHub: a managed source-control service.

CodeBuild: This is Amazon’s alternative to Jenkins. It compiles our cookbooks and runs tests against them.
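Because we provision infrastructure with CloudFormation, the CodeBuild project itself can also be defined as code rather than through the console. The sketch below is a hypothetical project definition; the service role, the region in the clone URL, and the build image are assumptions:

    # codebuild-project.yaml -- hypothetical sketch: CodeBuild project for the cookbook repo
    Resources:
      CookbookBuildProject:
        Type: AWS::CodeBuild::Project
        Properties:
          Name: ops-cookbooks-build
          ServiceRole: !GetAtt CodeBuildServiceRole.Arn   # assumed IAM role defined elsewhere
          Source:
            Type: CODECOMMIT
            Location: https://git-codecommit.us-east-1.amazonaws.com/v1/repos/ops-cookbooks
          Artifacts:
            Type: NO_ARTIFACTS
          Environment:
            Type: LINUX_CONTAINER
            ComputeType: BUILD_GENERAL1_SMALL
            Image: aws/codebuild/standard:5.0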


Getting Started

For this guide, we will deploy a sample PHP application using two different cookbooks. The first is a LAMP cookbook, which installs and configures Apache, MySQL, and PHP on Linux. The second cookbook is called “aws-cloudwatch logs” and it installs and configures the CloudWatch Logs agent. For local testing, we will use AWS as the provider for Vagrant. Let’s start by setting up the DevOps workstation.

Setting up DevOps Workstation:

  1. Create two folders on your system: one for Vagrant and another for the cookbooks
  2. You can get Vagrant up and running by following these steps:
    1. Download the Vagrantfile from the gist below and replace the original contents
      https://gist.github.com/nitheesh86/6670ec91a7b6e6dd61eb44aeefbaee95
    2. Develop your cookbooks locally; in our case, we already have two cookbooks
    3. We are using the LAMP cookbook to install and configure Apache and PHP, and the other cookbook to install and configure the CloudWatch Logs and CodeDeploy agents
    4. Now run vagrant up --provider=aws. This will provision an AWS EC2 instance and configure the LAMP stack for you
  3. If the results look good, you can push your cookbooks to CodeCommit
  4. You can then deploy your application code onto this instance and verify the application's functionality
Please note: Instead of AWS EC2 instances, you can also use VirtualBox as the provider on your workstation.

Once we are done testing our cookbooks locally, it’s time to push them to CodeCommit.

Creating Build Pipeline:

  1. Navigate to the CodeBuild service under Developer Tools and click on “Configure Project”
  2. Fill out the details using the screenshot below as a reference

[Screenshot: CodeBuild project configuration]

  3. This will launch a Docker container to run our build. Whenever a user commits new changes, a CloudWatch event will notify CodeBuild to trigger a new build.
  4. In this pipeline, the most important file is buildspec.yml. You should place this file in the root of your cookbook repo. The buildspec performs the following steps (a hedged sketch of the file follows this list):
    1. Downloads the cookbooks from the cookbook repo
    2. Installs Packer in the Docker container
    3. Runs tests against the cookbooks; we can add tests such as Foodcritic and ChefSpec
      1. Foodcritic checks for style, correctness, syntax, best practices, common mistakes, and deprecations
      2. ChefSpec runs unit tests on your cookbooks
    4. If all of these tests pass, the build calls the Packer command
    5. Packer creates an AMI with all of the cookbooks applied
    6. Once Packer creates the AMI, you will receive the AMI ID through an SNS notification
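The buildspec below is a hedged sketch of those steps; the Packer version, cookbook paths, and Packer template file name are assumptions rather than the exact file from this pipeline:

    # buildspec.yml -- hypothetical sketch of the build steps listed above
    version: 0.2

    phases:
      install:
        commands:
          # Install Packer in the CodeBuild container (version pinned only as an example)
          - curl -sLo packer.zip https://releases.hashicorp.com/packer/1.9.4/packer_1.9.4_linux_amd64.zip
          - unzip -o packer.zip -d /usr/local/bin
      pre_build:
        commands:
          # Lint and unit-test the cookbooks before baking an image
          # (assumes Foodcritic and RSpec/ChefSpec are available in the build image)
          - foodcritic cookbooks/lamp
          - cd cookbooks/lamp && rspec && cd ../..
      build:
        commands:
          # Bake the golden AMI only if the tests above passed
          - packer build packer-template.json
      post_build:
        commands:
          - echo "Packer build finished on $(date)"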

Creating a Launch Configuration through AWS Systems Manager

Once we have the AMI ID, we can call the AWS Systems Manager Automation document automatically from buildspec.yml or run it manually from the console. I would suggest the manual process: whenever a developer wants to test their code, they pick the AMI ID and update the Automation document’s parameters. The Automation document then creates a new launch configuration and adds it to the specified auto scaling group. This model is very similar to the Continuous Delivery process: not every change is deployed immediately, but every change is deployable at any time.
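As one possible shape for that Automation document, here is a hedged YAML sketch. The document name, parameter names, instance type, and the choice to use aws:executeAwsApi steps are assumptions for illustration, not the exact document used here:

    # roll-new-ami.yaml -- hypothetical SSM Automation document:
    # turn a freshly baked AMI into a launch configuration and attach it to the ASG
    schemaVersion: '0.3'
    description: Roll a new golden AMI into an Auto Scaling group
    parameters:
      AmiId:
        type: String
        description: AMI ID produced by the Packer build
      AsgName:
        type: String
        description: Target Auto Scaling group
    mainSteps:
      - name: createLaunchConfiguration
        action: aws:executeAwsApi
        inputs:
          Service: autoscaling
          Api: CreateLaunchConfiguration
          LaunchConfigurationName: 'lamp-{{ AmiId }}'
          ImageId: '{{ AmiId }}'
          InstanceType: t3.medium        # assumed instance size
      - name: updateAutoScalingGroup
        action: aws:executeAwsApi
        inputs:
          Service: autoscaling
          Api: UpdateAutoScalingGroup
          AutoScalingGroupName: '{{ AsgName }}'
          LaunchConfigurationName: 'lamp-{{ AmiId }}'

A developer (or buildspec.yml) would start this document with the new AMI ID and the target Auto Scaling group name as parameters.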


