Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Windows and Linux.


Figure 1: Docker Logo

Kernel namespaces isolate an application's view of the operating environment, including process trees, network interfaces, user IDs, and mounted file systems, while the kernel's cgroups provide resource limiting for CPU, memory, block I/O, and network.
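As a rough illustration of the cgroup side, a containerized process can read the limits applied to it straight from the cgroup filesystem. This is a minimal sketch, assuming the cgroup v2 layout with a common cgroup v1 fallback path:

```python
from pathlib import Path

def memory_limit_bytes():
    """Read this process's memory limit from the cgroup filesystem.

    Assumes cgroup v2 (memory.max), falling back to the usual
    cgroup v1 path; returns None if no limit is set or visible.
    """
    candidates = [
        Path("/sys/fs/cgroup/memory.max"),                    # cgroup v2
        Path("/sys/fs/cgroup/memory/memory.limit_in_bytes"),  # cgroup v1
    ]
    for path in candidates:
        if path.exists():
            value = path.read_text().strip()
            if value == "max":  # cgroup v2 reports "max" when unlimited
                return None
            return int(value)
    return None

if __name__ == "__main__":
    print("memory limit:", memory_limit_bytes())
```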

Motivation

Shipping code to the server is too hard

Deployment and Delivery Challenges

  • Lots of small changes
  • Heterogeneous platforms and languages
  • Environments (Development, Production)
  • Zero downtime
  • Rollback
  • Monitoring
  • Infrastructure as Code
  • Continuous Delivery
  • Infrastructure and Application Metrics

Microservices as a Remedy

One of the biggest use cases for containers, and one of the strongest drivers behind their uptake, is microservices.

Microservices are a way of developing and composing software systems such that they are built out of small, independent components that interact with one another over the network. This is in contrast to the traditional monolithic way of developing software, where there is a single large program, typically written in C++ or Java. When it comes to scaling a monolith, commonly the only choice is to scale up: extra demand is handled by using a larger machine with more RAM and CPU power. Conversely, microservices are designed to scale out: extra demand is handled by provisioning multiple machines over which the load can be spread. In a microservice architecture, it's possible to scale only the resources required for a particular service, focusing on the bottlenecks in the system. In a monolith, it's scale everything or nothing, resulting in wasted resources.

In terms of complexity, microservices are a double-edged sword. Each individual microservice should be easy to understand and modify. However, in a system composed of dozens or hundreds of such services, the overall complexity increases because of the interactions between components. The lightweight nature and speed of containers make them particularly well suited to running a microservice architecture. Compared to VMs, containers are vastly smaller and quicker to deploy, allowing microservice architectures to use minimal resources and react quickly to changes in demand.

The Twelve-Factor App

is a methodology for building software-as-a-service apps that:

  • Use declarative formats for setup automation, to minimize time and cost for new developers joining the project
  • Have a clean contract with the underlying operating system, offering maximum portability between execution environments
  • Are suitable for deployment on modern cloud platforms, obviating the need for servers and system administration
  • Minimize divergence between development and production, enabling continuous deployment for maximum agility
  • And can scale up without significant changes to tooling, architecture, or development practices

Codebase

One codebase tracked in revision control, many deploys

Dependencies

Explicitly declare and isolate dependencies

Config

Store config in the environment
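A minimal sketch of this factor in Python: configuration such as connection strings and feature flags comes from environment variables rather than constants or files checked into the codebase. The variable names here are illustrative, not prescribed:

```python
import os

# Config lives in the environment, not in the code or a checked-in file.
DATABASE_URL = os.environ["DATABASE_URL"]                    # fail fast if missing
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"   # optional flag with default
PORT = int(os.environ.get("PORT", "8000"))                   # optional port with default
```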

Build, release, run

Strictly separate build and run stages

Processes

Execute the app as one or more stateless processes
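A sketch of the stateless-process idea, assuming a Redis backing service reachable via a REDIS_URL environment variable and the `redis` client package: anything that must survive a request, such as a counter or a session, goes to the backing service, never into process memory.

```python
import os
import redis  # assumed third-party client; any backing store would do

store = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379/0"))

def handle_request(user_id: str) -> int:
    # No module-level or in-process state: the visit count lives in the backing
    # service, so any process (or a freshly restarted one) gives the same answer.
    return store.incr(f"visits:{user_id}")
```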

Port binding

Export services via port binding
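Port binding, sketched with only the Python standard library: the app is self-contained and exports HTTP by binding to a port taken from the environment (the PORT variable mirrors a common platform convention and is an assumption here).

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello\n")

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8000"))
    # The app itself binds the port; no web server is injected at runtime.
    HTTPServer(("0.0.0.0", port), Hello).serve_forever()
```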

Concurrency

Scale out via the process model
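Scaling out via the process model, as a sketch: the number of worker processes is driven from the environment (WEB_CONCURRENCY is an illustrative name), and more capacity means more processes, possibly on more machines, rather than more threads inside one large process.

```python
import os
from multiprocessing import Process

def worker(n: int) -> None:
    # Placeholder for the real work loop (serving requests, draining a queue, ...).
    print(f"worker {n} started with pid {os.getpid()}")

if __name__ == "__main__":
    count = int(os.environ.get("WEB_CONCURRENCY", "2"))
    procs = [Process(target=worker, args=(i,)) for i in range(count)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```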

Disposability

Maximize robustness with fast start-up and graceful shutdown
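Graceful shutdown sketched in Python: the process traps SIGTERM (which Docker sends on `docker stop`), finishes or hands back in-flight work, and exits quickly, so the scheduler can dispose of it and replace it cheaply. The cleanup body here is illustrative.

```python
import signal
import sys
import time

def handle_sigterm(signum, frame):
    # Finish or release in-flight work here, then exit promptly.
    print("SIGTERM received, shutting down gracefully")
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

while True:
    # Placeholder for the real work loop.
    time.sleep(1)
```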

Dev/prod parity

Keep development, staging and production as similar as possible

Logs

Treat logs as event streams
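Treating logs as an event stream, in a minimal Python sketch: the app writes events to stdout and leaves routing, aggregation, and storage to the execution environment (`docker logs`, a log shipper, and so on).

```python
import logging
import sys

# Log to stdout only; the environment decides where the stream ends up.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
log = logging.getLogger("app")
log.info("service started")
log.info("handled request in %d ms", 42)
```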

Admin processes

Run admin/management tasks as one-off processes
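One-off admin tasks run as short-lived processes against the same codebase and the same config as the long-running app. A hypothetical sketch, where `migrate()` stands in for whatever the task actually does:

```python
"""Run as a one-off process, e.g. `python migrate.py`, using the same release and config."""
import os

def migrate() -> None:
    # Hypothetical admin task; it reads the same environment config as the app.
    database_url = os.environ["DATABASE_URL"]
    print(f"running migration against {database_url}")

if __name__ == "__main__":
    migrate()
```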

The Ultimate Goal

A single deployable file that is kept under version control, verified by a checksum, and simply runs without any additional dependencies.
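Verification by checksum, sketched with the standard library: compute a SHA-256 digest of the deployable artifact and compare it against the recorded value. The file name and expected digest below are placeholders.

```python
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    artifact = "app-release.tar.gz"   # placeholder artifact name
    expected = "<recorded checksum>"  # placeholder expected digest
    actual = sha256_of(artifact)
    print("OK" if actual == expected else f"MISMATCH: {actual}")
```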



