Increasing Efficiency with Containerized CI/CD

Understanding Continuous Integration and Continuous Delivery

Continuous Integration is a software development practice where members of a team integrate their work frequently, leading to multiple integrations per day. Each integration is verified by an automated build which includes testing to detect integration errors as quickly as possible.

Continuous Delivery is a software development discipline where software is built in such a way that it can be released to production at any time.

Continuous Delivery is often confused with Continuous Deployment. Continuous Deployment means that every change goes through the pipeline and automatically gets put into production.

Continuous Delivery is achieved by continuously integrating the software developed by the team, building executables, and running automated tests on those executables to detect problems. Continuous Delivery builds on the notion of Continuous Integration, dealing with the final stages required for production deployment.
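
As a concrete illustration, a minimal Python sketch of such a pipeline follows. The make targets are placeholders; substitute whatever builds and tests your application:

    # ci_pipeline.py - a minimal sketch of a CI pipeline: build, then test,
    # failing fast on the first broken stage. The commands are placeholders.
    import subprocess
    import sys

    STAGES = [
        ("build", ["make", "build"]),      # produce the executable
        ("unit-tests", ["make", "test"]),  # verify the integration
    ]

    def run_stage(name, command):
        print(f"--- running stage: {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"stage '{name}' failed; stopping the pipeline")
            sys.exit(result.returncode)

    if __name__ == "__main__":
        for name, command in STAGES:
            run_stage(name, command)
        print("all stages passed; build is releasable")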

Understanding What a Container Is

A container is a standard, lightweight unit of software that bundles and packages an application, its dependencies, and its configuration into a single image, running in isolated user environments on a shared operating system.
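
To make the definition concrete, here is a minimal Python sketch that bundles a hypothetical application, its dependencies, and its configuration into a single image and runs it in an isolated container. Docker is assumed to be installed, and the file names (app.py, requirements.txt, config.yaml) and image tag are illustrative:

    # build_image.py - a sketch that packages an application, its dependencies,
    # and its configuration into one image, then runs it in an isolated,
    # throwaway container. The files referenced below are assumed to exist.
    import pathlib
    import subprocess
    import textwrap

    dockerfile = textwrap.dedent("""\
        FROM python:3.12-slim
        WORKDIR /app
        COPY requirements.txt .
        RUN pip install --no-cache-dir -r requirements.txt
        COPY app.py config.yaml ./
        CMD ["python", "app.py"]
        """)

    pathlib.Path("Dockerfile").write_text(dockerfile)

    # Build the image; the tag "my-app:1.0" is illustrative.
    subprocess.run(["docker", "build", "-t", "my-app:1.0", "."], check=True)

    # Run it in an isolated container (--rm removes the container on exit).
    subprocess.run(["docker", "run", "--rm", "my-app:1.0"], check=True)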

Good practices in Continuous Integration and Continuous Delivery

There are several good practices that are known, adopted, and implemented in the CI/CD culture. A subset of those practices is considered here as a way to enhance the overall process and improve efficiency.

  • Automate the build – Use scripts, tools, services, and providers to automate the process; the yield is consistency and fewer human errors.
  • Test the build – Test the application using various quality gates. These could include unit tests, linting, static code analysis, dependency vulnerability scanning, compliance testing, and performance testing.
  • Use a clean environment – Ideally, every execution runs in a clean environment, so that any flaws or gaps in the product or its packaging are revealed. The idea is to avoid contamination from tools, operating systems, or residue left by previous builds of the same or other applications.
  • Use a clone of the production environment – The final aim is to run an application in production, and thus it would be best to clone that environment as identically as possible when building and testing the application.
  • Keep it fast and lightweight – A given execution should not take long to reveal flaws. Parallel execution, multi-threading, and breaking larger, slower quality gates into smaller, faster ones all help (a minimal sketch of parallel, containerized gates follows this list). Borrowing from the foundations of microservice architecture, these smaller, faster quality gates should be lightweight, so they can easily be put in place and replaced with a better implementation if needed. Being lightweight also means faster execution, since there is less to run.
  • Adoptable – The tooling, execution environments, and technologies should be easily interchangeable and customizable, allowing easier adoption by various teams while upholding best practices, organizational and governance criteria, and quality standards.
  • Scaling – As teams, number of applications and microservices, number of quality gates and organizational requirements grow, scaling becomes an important consideration when designing or choosing a CI/CD platform.
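
As promised above, here is a minimal sketch, assuming Docker is installed and a Python working copy in the current directory, of quality gates run in parallel, each in its own clean, throwaway container. The images, commands, and file names are illustrative stand-ins:

    # quality_gates.py - runs independent quality gates in parallel, each in
    # its own disposable container. A byte-compile check stands in for a
    # real linter; app.py is a hypothetical source file.
    import os
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    SRC = os.getcwd()  # mount the working copy into each container

    GATES = {
        "lint": ["docker", "run", "--rm", "-v", f"{SRC}:/src", "-w", "/src",
                 "python:3.12-slim", "python", "-m", "py_compile", "app.py"],
        "unit-tests": ["docker", "run", "--rm", "-v", f"{SRC}:/src", "-w", "/src",
                       "python:3.12-slim", "python", "-m", "unittest", "discover"],
    }

    def run_gate(item):
        name, command = item
        # --rm destroys the container after each run, so no residue leaks
        # into this build or any future one.
        result = subprocess.run(command, capture_output=True, text=True)
        return name, result.returncode

    with ThreadPoolExecutor() as pool:
        for name, code in pool.map(run_gate, GATES.items()):
            print(f"gate {name}: {'passed' if code == 0 else 'failed'}")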

With this subset of good practices in mind, containerization helps address the shortcomings of the traditional execution environment and provides a better platform on which to implement them:

  • Automate the build with containers:
    • Containers are designed to bundle the correct tools, versions, and other execution assets into a single package.
    • This makes it easy to build a given application with a given set of tools and scripts, because the execution package is well prepared and ready to run.
    • The container itself can be executed under several orchestrators and providers, such as AWS ECS and Kubernetes Jobs.
  • Test the build with containers:
    • As with build automation, the testing tools and scripts can be packaged in separate containers.
    • Each form of quality gate can be containerized and run separately.
  • Clean Environment with containers:
    • Containers are inherently clean. A well-packaged image contains no impurities or cached data that could affect the execution of a given build.
    • A new container is created for every build and destroyed immediately afterwards. This ensures that no old content affects the current execution, and that the current execution will not affect future builds.
    • Isolation: Parallel executions of similar products will not run into race conditions or resource sharing if all resources are containerized. Each execution obtains its own set of containers to run all its tests and is then wiped out.
    • Security: Because each execution runs in a separate container, security is inherently enhanced: rogue scripts cannot access data or resources from other executions. This protects the current application, and others, from data breaches during CI/CD.
  • Clone of production environment with containers:
    • Production-grade containers with the same tools, versions, operating systems, and capacity specifications can be used for testing and for executing any scripts.
    • This ensures that any packaging or build behavior that depends on environment parameters, available development kits, or compilation is on par with production.
    • Continuous Delivery principles are upheld, since the application is production-ready: it was built and tested in a production-grade environment.
  • Fast and lightweight with containers:
    • Container best practices recommend packaging only what is required. This keeps the image lightweight and single purpose.
    • The lightweight nature of a good image allows for easy scaling: the image can be downloaded, and containers created, run, and destroyed, quickly, consuming as few hardware resources as possible.
    • Containers allow a quality gate to be replaced by simply replacing the image. For example, if a given DevSecOps tool does not deliver results up to the organization's standards, its image can be swapped for another without modifying, or even re-creating, several traditional machines (see the sketch after this list).
  • Adoption with containers:
    • Containers provide a great platform for teams to adopt different tools and technologies with various customizations quite easily.
    • This includes the choice of operating systems, tool versions, vendors, languages, and development kits in different versions.
    • Teams can shift left significantly by adopting the right base images and DevSecOps images; developers can re-create and run near-identical quality gates locally, even before committing code.
    • Teams can also switch technologies easily, as the entire SDLC is containerized, giving them faster adoption of newer versions and new technologies.
    • Containers provide a common solution for use both on-premises and in the cloud. From an adoption standpoint, this is a boon: organizations and teams are not locked into a technology by the capabilities of the hosting platform.
    • Container-based CI/CD solutions can easily be migrated to the cloud or on-premises, and a hybrid model can even exist.
    • A number of out-of-the-box CI/CD providers, such as Bitbucket Pipelines, support containers. A team can switch between pipeline providers with minimal effort if its processes are containerized.
  • Scaling with containers:
    • Containers can scale within seconds as compared to traditional infrastructures that take minutes or even hours in some cases.
    • Containers can be scaled both vertically and horizontally to meet CI/CD requirements.
    • Scaling containers requires only a container orchestrator, which most platforms provide (for example, AWS ECS and Kubernetes). This also makes managing the containers easy.
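
The sketch referenced above: a minimal, Python-based gate registry in which each quality gate is nothing more than an image reference plus a command. Swapping a tool means editing one entry; the image names are illustrative placeholders and Docker is assumed to be available:

    # gates_config.py - each quality gate is declared as (name, image,
    # command). Replacing a DevSecOps tool means changing an image reference,
    # not rebuilding machines. All images below are hypothetical.
    import os
    import subprocess

    QUALITY_GATES = [
        ("sast", "example.com/security/sast-scanner:2.1", ["scan", "/src"]),
        ("unit-tests", "python:3.12-slim",
         ["python", "-m", "unittest", "discover", "-s", "/src"]),
    ]

    def run_gates(source_dir=os.getcwd()):
        for name, image, command in QUALITY_GATES:
            # Each gate gets a fresh, isolated container that is removed
            # (--rm) as soon as the gate finishes.
            subprocess.run(
                ["docker", "run", "--rm", "-v", f"{source_dir}:/src", image, *command],
                check=True,
            )
            print(f"gate {name} passed")

    if __name__ == "__main__":
        run_gates()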

With the availability of the cloud, containers enable better utilization and optimization of cloud resources. Consider AWS ECS Fargate with Jenkins as the CI/CD orchestrator of pipelines:

  • On-demand scalable Jenkins agents
  • Containerized agents provide all the benefits of containerization
  • AWS ECS Fargate provides a serverless container engine.
  • No provisioning or maintenance of traditional EC2 machines and AMIs
  • No patching or operating-system updates, and no SSH/console access to control
  • No upfront payment; pay only for the vCPU and memory resources consumed
  • Instant scaling, as containers are created and destroyed in seconds rather than minutes
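
As a minimal sketch of this setup, the boto3 call below launches a one-off Fargate task of the kind a containerized Jenkins agent would run as. The cluster name, task definition, and subnet ID are hypothetical placeholders, and a matching task definition must already be registered with ECS:

    # fargate_agent.py - launch a one-off containerized CI task on AWS ECS
    # Fargate with boto3. AWS credentials and region are assumed to be
    # configured in the environment.
    import boto3

    ecs = boto3.client("ecs")

    response = ecs.run_task(
        cluster="ci-cluster",              # hypothetical cluster name
        launchType="FARGATE",              # serverless: no EC2 to manage
        taskDefinition="jenkins-agent:1",  # hypothetical task definition
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
                "assignPublicIp": "ENABLED",
            }
        },
    )
    print("started task:", response["tasks"][0]["taskArn"])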
