DevOps is about accelerating delivery of new products and services at scale, reliably and affordably. Doing this requires comprehensive IT ...
Get Features to Customers Faster with Continuous Integration and Delivery
Continuous Integration and Delivery (CI/CD) is what happens when a DevOps team adopts all the best practices recommended so far:
- Treating infrastructure and code together
- Shifting left and testing early
- Automating deployment and testing on self-similar QA, staging, and production environments with the goal of radically speeding up delivery of working code to customers
While implementation details can be complex, the basic idea isn’t. CI/CD is essentially a two-step process. At the head end, developers write code and check it into a version-control system (like Git). With each unit of code they produce or change, developers should provide or update automated unit tests. They should also create or update integration tests that determine whether individual components work together, along with automation to build components, deploy them together with testing tools, monitoring agents, and other utilities, execute tests, and report results. When commits are approved, affected components are built, deployed, and tested automatically. Tests should, of course, evaluate observability along with other characteristics: does the component produce logs correctly, and connect with and report to monitoring infrastructure and/or APM services? Are its performance (at this point, in relative isolation), resource consumption, and other variables in line with expectations?
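The commit-triggered flow above can be sketched as a minimal gate that runs build, unit, and integration stages in order and halts at the first failure. All names here (`run_ci_pipeline`, the stage callables) are illustrative, not part of any specific CI tool:

```python
# Minimal sketch of a continuous-integration gate: each approved commit
# triggers build, unit tests, and integration tests in order; the first
# failure stops the pipeline and reports back to the developer.

def run_ci_pipeline(commit, stages):
    """Run named stage callables in order; stop at the first failure."""
    results = []
    for name, stage in stages:
        ok = stage(commit)
        results.append((name, "pass" if ok else "fail"))
        if not ok:
            break  # shift left: report early, skip later stages
    return results

# Illustrative stages; a real pipeline would invoke build tools and test
# runners (e.g., via a Jenkins job) rather than Python callables.
stages = [
    ("build",       lambda c: True),
    ("unit-tests",  lambda c: c["tests_updated"]),  # commits must update tests
    ("integration", lambda c: True),
]

good = run_ci_pipeline({"tests_updated": True}, stages)
bad = run_ci_pipeline({"tests_updated": False}, stages)
print(good)  # all three stages pass
print(bad)   # stops at unit-tests; integration never runs
```

The key property is that a failing stage prevents everything downstream, so broken code never reaches later (more expensive) environments.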
Many tools can be used to coordinate and manage the process. Examples include Jenkins and Spinnaker, automation servers that trigger routines for building, deploying, and testing; and Gerrit, a code-review coordination system for Git that tracks approvals, test results, and other signals that advance code along the assembly line.
What we’ve described so far is continuous integration: the shift-left part of the process, which seeks to identify issues as early as possible and pushes them back to developers for resolution. Since, as you’ll recall, 56% of bugs originate early in development, it makes sense to expend effort here to eliminate them. This step is best performed every time a change is committed, since the fewer changes are involved, the easier it is to determine root causes.
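The point about small changesets can be made concrete: if a batch of N commits is tested together and fails, finding the culprit requires a search (roughly log2 N extra test runs, as in `git bisect`); testing every commit pins the failure to a single change immediately. The function below is an illustrative sketch of that search, not part of any real tool:

```python
# Sketch of why per-commit testing eases root-cause analysis: when many
# untested commits pile up before a failure, you must binary-search the
# history for the first bad one.

def find_breaking_commit(commits, still_works):
    """Binary search for the first commit at which tests start failing.
    `still_works(i)` reports whether the build at commit index i passes."""
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if still_works(mid):
            lo = mid + 1  # failure is later in history
        else:
            hi = mid      # failure is here or earlier
    return commits[lo]

commits = ["a1", "b2", "c3", "d4", "e5", "f6", "g7", "h8"]
# Suppose the commit at index 4 ("e5") introduced the bug:
culprit = find_breaking_commit(commits, lambda i: i < 4)
print(culprit)  # → e5
```

Running the pipeline on every commit makes this search unnecessary: the failing run names the offending change directly.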
Between continuous integration and the delivery step that follows, a packaging step may be interposed, where binary artifacts (such as built binaries of a particular version of a database required by your product) are stored in a specialized repository (e.g., JFrog Artifactory) and VM or container images are built and pushed to appropriate repos. (In the case of containers, Artifactory can do this too, or you could use Docker Registry.) Packaging binaries, images, and/or containers at this point simplifies and speeds up subsequent QA/Test and production deployment, eliminating steps that would otherwise need to be performed by automation on target systems.
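The packaging step can be modeled as a content-addressed artifact store: artifacts are pushed once, keyed by name, version, and digest, and later deployments pull the exact same bytes instead of rebuilding on target systems. This is a sketch of what a repository like Artifactory or a container registry provides; the `ArtifactStore` class and its contents are illustrative:

```python
# Sketch of the packaging step: built artifacts are stored once with a
# content digest, so QA/staging/production deployments retrieve the same
# verified bytes rather than rebuilding on each target system.

import hashlib

class ArtifactStore:
    def __init__(self):
        self._store = {}

    def push(self, name, version, content: bytes):
        digest = hashlib.sha256(content).hexdigest()
        self._store[(name, version)] = (digest, content)
        return digest

    def pull(self, name, version):
        digest, content = self._store[(name, version)]
        # Verify integrity on the way out, as registries do with digests.
        assert hashlib.sha256(content).hexdigest() == digest
        return content

store = ArtifactStore()
binary = b"\x7fELF...built-binary-bytes..."
digest = store.push("orders-service", "1.4.2", binary)
assert store.pull("orders-service", "1.4.2") == binary
```

Because the digest is computed from the content, any corruption or substitution between build and deployment is detectable at pull time.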
Thereafter, the continuous delivery part of the process can begin. Again under control of the automation server, automation deploys and configures your app from built objects taken from the package, container, and/or image stores. Deployment is typically performed first at small scale, onto a QA platform. Integration and functional testing are performed and the process is halted if issues are found. Again, the point is to discover and fix bugs early. If all tests pass, the application is deployed to staging and retested – sometimes adding performance and load testing to the mix. Finally, the application is deployed to production. Some organizations (like Netflix) perform this entire process every time a change is made to code. That’s called continuous deployment.
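The continuous-delivery half described above can be sketched as staged promotion: the same built artifact moves through QA, staging, and production, halting as soon as any environment's tests fail. Environment names and test hooks here are illustrative:

```python
# Sketch of continuous delivery: promote one artifact through successive
# environments, halting at the first environment whose tests fail.

def promote(artifact, environments):
    """Deploy to each environment in order; halt on the first failure."""
    deployed = []
    for env_name, tests_pass in environments:
        deployed.append(env_name)          # deploy step (simulated)
        if not tests_pass(artifact):
            return deployed, f"halted: tests failed in {env_name}"
    return deployed, "released to production"

pipeline = [
    ("qa",         lambda a: True),   # integration + functional tests
    ("staging",    lambda a: False),  # performance/load tests added here
    ("production", lambda a: True),
]
result = promote("orders-service:1.4.2", pipeline)
print(result)  # halts at staging; production is never reached
```

In a continuous-deployment shop (the Netflix model), this promotion runs automatically on every code change, with no manual release decision between staging and production.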
If desired, the process can be enhanced by packaging objects earlier: for example, containerizing a changed component with all its dependencies, and deploying it on a unit and integration-testing platform based around a container orchestrator – one that’s identical in all but scale to the QA, staging, and production environments, because it’s produced from the same infrastructure codebase. Container build and push processes are typically very quick, so this adds little time overall. In this model, components are made immutable after commits are approved, but before any testing is done, then simply ported from (functionally identical) platform to platform, integrated and scaled as the current deployment stage requires. It’s important, however, to note that coders are only permitted to add code, not binary assets or container images, to the process, so that builds and packaging can be supervised. Otherwise, there’s a risk of introducing opaque dependencies and security flaws (a 2017 study found that officially-validated images on Docker Hub contained a mean of 76 vulnerabilities each).
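The code-only commit policy above is often enforced mechanically with a pre-commit or pre-receive check that rejects binary assets. The suffix list and check below are an illustrative sketch, not a complete binary-detection heuristic:

```python
# Sketch of a commit-gate policy: developers may commit source code only,
# never prebuilt binaries or container images, so all builds and packaging
# happen under pipeline supervision.

FORBIDDEN_SUFFIXES = (".exe", ".so", ".dll", ".jar", ".tar", ".img")

def check_commit(paths):
    """Return the list of policy violations in a proposed commit."""
    return [p for p in paths if p.endswith(FORBIDDEN_SUFFIXES)]

ok_commit = ["src/app.py", "Dockerfile", "tests/test_app.py"]
bad_commit = ["src/app.py", "vendor/libfoo.so", "base.img"]

print(check_commit(ok_commit))   # no violations
print(check_commit(bad_commit))  # two forbidden binary artifacts
```

Rejecting these commits at the gate keeps opaque dependencies out of the supply chain, since every binary in the pipeline is then traceable to a supervised build.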
Final deployment out of a CI/CD process can be customized to serve specific needs. Client-side components of a mobile app, for example, may be automatically submitted to online stores. Tested binaries of a commercial app can be pushed to trusted delivery repositories. Cloud-based applications can be deployed using Blue/Green methodology: creating a new production cluster, deploying the app, rerouting traffic, then decommissioning the prior cluster and reclaiming its resources. Another option is Canary Testing, where a new build is deployed to a production platform and traffic from select customers is forwarded to it, while the prior deployment stays online to serve the majority of users. Customer feedback, along with other forms of monitoring and in-service testing, is used to determine whether the build is worthy of full release or needs to be rolled back.
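Canary routing as described above can be sketched with hash-based bucketing: a small, stable fraction of customers is sent to the new build while everyone else stays on the current release, and each user's assignment is consistent across requests. The percentage and names are illustrative:

```python
# Sketch of canary routing: deterministically assign each user to the
# "canary" (new build) or "stable" (current release) pool by hashing the
# user ID into 100 buckets.

import hashlib

def route(user_id: str, canary_percent: int) -> str:
    """Assign a user to 'canary' or 'stable'; stable across requests."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

assignments = [route(f"user-{i}", 10) for i in range(1000)]
canary_share = assignments.count("canary") / len(assignments)
# Roughly 10% of users land on the canary, and the same user always
# gets the same answer:
assert route("user-42", 10) == route("user-42", 10)
print(f"canary share: {canary_share:.0%}")
```

Because assignment is a pure function of the user ID, rolling back is just dropping `canary_percent` to zero; no per-user state needs to be unwound.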