Jenkins build and multi-environment deploys

In our team, we can't run continuous deployment into our testing environments. Like most enterprises with large legacy back-end systems, we only have a few upstream instances running with populated data. That leaves us with a finite set of test environments: qa-1, qa-2, sit-1, sit-2, etc. The QA environments are usually stubbed out, so having one per feature branch wouldn't be a problem, but SIT needs a consistent, known environment for integration testing, which makes this difficult.

Previously, I used Bamboo, which has the concepts of Releases, Environments and dedicated Deploy Jobs. These allow builds to be promoted to releases (manually or automatically) and those releases to be deployed to specific environments using predefined deploy jobs, with only one release recorded as deployed to an environment at a time. The advantage was that our testers could easily see what was currently deployed and where, without having to dig through every job.

Jenkins 2 was released a while ago, and the proposed structure using Pipelines is to model deployment environments as stages. This is great for continuous delivery. However, in a lot of companies you wouldn't want to push to production (or to other environments) until you were ready. To allow this, Pipeline has an input step which suspends the build until a user manually intervenes. This could be used to select which SIT environment you want to deploy to, but it has two downsides:
  1. The deploy step cannot be re-run once it has run. So you can't deploy build #2 to both sit-1 and sit-2, and you can't redeploy build #1 to sit-1.
  2. Builds that haven't been acknowledged aren't marked as done; they remain in a building state. They can have timeouts, but that causes more problems.
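For reference, the input-step approach looks roughly like this (a sketch only; deploy.sh and the environment names are placeholders):

```groovy
stage('Deploy to SIT') {
  steps {
    script {
      // Suspends the build until someone picks an environment.
      // Once this step completes, it cannot be run again for this build.
      def target = input(message: 'Deploy this build?',
                         parameters: [choice(name: 'TARGET_ENV',
                                             choices: ['sit-1', 'sit-2'],
                                             description: 'SIT environment to deploy to')])
      sh "./deploy.sh ${target}"
    }
  }
}
```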
The middle ground is to have a build job plus a deploy job for each environment set (qa, sit, staging, prod, etc.), with the deploy jobs parameterised. Depending on your configuration you could have just one generic deploy job, but there are trade-offs with visibility and with the safety of accidental production deploys.
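As a sketch, a parameterised deploy pipeline for the SIT set might look like the following (deploy.sh, the parameter names and the environment names are all placeholders; in this post the deploy jobs are actually Freestyle projects):

```groovy
pipeline {
  agent any
  parameters {
    choice(name: 'TARGET_ENV', choices: ['sit-1', 'sit-2'],
           description: 'Which SIT environment to deploy to')
    string(name: 'SOURCE_BUILD', defaultValue: 'lastStableBuild',
           description: 'Which build of the build job to deploy')
  }
  stages {
    stage('Deploy') {
      steps {
        // each run records exactly which build went to which environment
        sh "./deploy.sh ${params.TARGET_ENV} ${params.SOURCE_BUILD}"
      }
    }
  }
}
```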

In this example project we have a Node.js based AWS Lambda function that we deploy using CloudFormation. To test each feature branch before merging, I wanted Jenkins to automatically run the tests, create and store a deployable zip, and update the pull request with the build status. Having a stored binary for each build means the same binaries that go through regression testing are the ones that get deployed to production, versus doing an npm install, bundling and deploying all in the same job, which is what we were doing previously.

The multi-branch pipeline build

The built zip is published using the S3 Publisher plugin, as we deploy from S3. The regular Jenkins artifact archiver/unarchiver can be used to the same effect if S3 isn't needed. This also uses the NodeJS plugin to save the hassle of managing nvm. The AnsiColor and Badge plugins also need to be installed.
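If you manage plugins from the command line, the set used in this post can be installed in one go. A sketch, assuming a modern Jenkins image with jenkins-plugin-cli available; the plugin IDs are my best guesses at the update-centre short names, so check them against your update centre:

```shell
# Install the plugins referenced in this post (IDs assumed, verify before use)
jenkins-plugin-cli --plugins s3 nodejs ansicolor badge \
  extensible-choice-parameter managed-scripts envinject
```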
Here you can see the result of the S3Upload

Build job's Jenkinsfile

This uses the Declarative Pipeline syntax:
pipeline {
  agent {
    node {
      label 'my-node'
    }
  }
  tools { nodejs 'nodejs-6' }
  stages {
    stage('Install') {
      steps {
        // this just makes it a lot easier to view the workspace
        createSummary(icon: 'green.gif', text: "<h1><a href='${JOB_URL}/${BUILD_NUMBER}/execution/node/3/ws/'>Workspace</a></h1>")
        sh 'npm install'
      }
    }
    stage('Build') {
      steps {
        sh 'npm run clean'
        sh "npm run build -- --FILENAME ${FILENAME}"
        s3Upload(entries: [[bucket: "${S3_BUCKET}", sourceFile: "${FILENAME}", selectedRegion: "ap-southeast-2", managedArtifacts: true, flatten: true, uploadFromSlave: true]], profileName: 'S3-Profile', dontWaitForConcurrentBuildCompletion: true, consoleLogLevel: 'INFO', pluginFailureResultConstraint: 'FAILURE', userMetadata: [])
      }
    }
  }
  environment {
    FILENAME = "builds/build_${GIT_BRANCH}_${BUILD_ID}.zip"
    S3_BUCKET = 'my-s3-bucket/releases'
  }
  options {
    skipStagesAfterUnstable()
    ansiColor('xterm')
  }
}

The deploy job

For the deploy jobs, I've kept them as Freestyle projects to make it easy for our testers to go in and modify the defaults of the different parameters. This deploy job creates/updates a CloudFormation stack, with the Lambda using the file uploaded to S3 earlier. Unfortunately, the S3 plugin doesn't make it easy to get the URL of the artifact programmatically.
The fingerprints create a link between build and deploy jobs
So for this example, create a Freestyle job.

Selecting the branch

  • Make sure the Extensible Choice Plugin is installed.
  • Tick "This project is parameterized".
    • Add an Extensible Choice parameter with
      Choice Provider: System Groovy Choice Parameter.
    • Set the name to "BRANCH_NAME".
    • Set the following Groovy script:
      import jenkins.model.*

      // name of the multi-branch parent job
      def parentName = "test-project"

      def childJobs(jobName) {
        def parentJob = Jenkins.instance.itemMap.get(jobName)
        return parentJob.getAllJobs()
      }
      // one choice per branch job under the parent
      return childJobs(parentName).collect { it.name }
      

A shared script to resolve the S3 url

  • As this script is generic and reusable, I stored it in "Managed Files".
  • Install the Managed Scripts plugin
  • Go to Manage Jenkins -> Managed Files
  • Click Add a new config
  • Select Groovy file and give it an id of "GetS3ArtifactUrl.groovy"
  • Give it a name of GetS3ArtifactUrl
  • Set the content to:
import hudson.plugins.s3.*
import hudson.model.*
import jenkins.model.*

def lastStableBuild(jobName, branchName) {
    // multi-branch projects store each branch as a child job of the parent
    def parent = Jenkins.instance.itemMap.get(jobName)
    def job = parent?.getItem(branchName)
    if (job == null) {
      throw new Exception("error no job found for ${jobName}/${branchName}")
    }
    return job.getLastStableBuild()
}

// parentName and branchName are supplied via the script's binding (see below)
def run = lastStableBuild(parentName, branchName)
if (run == null) {
  throw new Exception("error no run found for ${parentName}/${branchName}")
}

def action = run.getAction(S3ArtifactsAction.class)
if (action == null) {
  throw new Exception("error no S3 artifacts recorded for ${parentName}/${branchName}")
}
def artifact = action.getArtifacts()[0].getArtifact()
if (artifact == null) {
  throw new Exception("error no artifact found for ${parentName}/${branchName}")
}

def s3_filename="s3://${artifact.bucket}/jobs/${parentName}/${branchName}/${run.number}/${artifact.name}"

return [S3_FILENAME: s3_filename]

Resolving the S3 URL

  • In the deploy job, under Build Environment, check "Provide Configuration Files".
  • Add file, select the file stored in the shared files. In this case, it's "GetS3ArtifactUrl".
  • Fill the target as "GetS3ArtifactUrl.groovy"
  • Make sure the EnvInject plugin is installed
  • Check the "Inject environment variables to the build process" option
    • Set the Groovy Script to:
      binding.parentName = "test-project"
      binding.branchName = "${BRANCH_NAME}"
      return evaluate(new File("${WORKSPACE}/GetS3ArtifactUrl.groovy"))
      
  • To get the deploy job to pick up the fingerprint of the artifact, the simplest way is to download it using the S3 plugin as it automatically runs the fingerprint against it.
  • Add a build step, "S3 Copy Artifact"
    • Project name: test-project/$BRANCH_NAME
    • Last successful build
    • Stable build only
    • Directory (wherever you like) i.e. tmp/
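With S3_FILENAME injected into the environment, a subsequent "Execute shell" build step can pass it to CloudFormation. A sketch only: the stack name, template file and parameter names below are placeholders, and CloudFormation Lambda resources usually want the bucket and key split out of the s3:// URL:

```shell
# split s3://bucket/key into the pieces CloudFormation expects
S3_BUCKET="$(echo "${S3_FILENAME}" | sed -E 's|^s3://([^/]+)/.*|\1|')"
S3_KEY="$(echo "${S3_FILENAME}" | sed -E 's|^s3://[^/]+/||')"

# stack and parameter names are illustrative, not from the real template
aws cloudformation deploy \
  --stack-name "my-lambda-${BRANCH_NAME}" \
  --template-file template.yaml \
  --parameter-overrides "LambdaS3Bucket=${S3_BUCKET}" "LambdaS3Key=${S3_KEY}" \
  --capabilities CAPABILITY_IAM
```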

More limitations

Whilst this does provide a link between the jobs via fingerprints, linking a multi-branch job to other jobs is blocked by JENKINS-29913 / JENKINS-49197. Currently, Jenkins won't store and show links between the two projects, which means you miss out on the nice dependency graphs, special dependency views, etc.
