Everybody in IT is talking about containers these days.
To be honest, it also took me some time to understand why.
After all, we already have a solution: Virtualisation.
With Virtualisation we can abstract applications from the hardware. That is fine.
So why the heck do we need containers now?
I will try to explain that in this blog post.
Key facts of Virtualisation
With Virtualisation (no matter which product you use: VMware, Xen, KVM) we are able to install a separate OS that is encapsulated and transportable, and inside that OS we can install our desired application. But that already requires some effort:
- Create the VM
- Install the OS in the VM
- Update the OS
- Resolve the dependencies for the application (library RPM XY is required, and so on)
- Install the application
The lifecycle of such a VM also needs some steps:
- Update the OS
- Update the dependencies
- Update the application
What is a container, what is a container image?
First we must distinguish between a container image and a container.
We start a container on a computer with the command “docker run name-of-the-image”.
This will create the container in the kernel (separate namespace, separate network namespace, and other separations from the host system) and use the content of the container image to start processes inside of the container.
A container image is, simply put, an image that includes the application and its complete dependencies, such as libraries.
The only thing the OS must provide is a compatible container runtime engine.
There are several of them in the Linux world: Docker, containerd and CRI-O, for example.
They are all capable of starting a standardised container image, like the ones you can download from Docker Hub.
These container runtime engines just create the encapsulated environment in the kernel to start the container in, using standard Linux kernel techniques.
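You can see those kernel techniques without any container engine at all. The following snippet (Linux only) just inspects the kernel's own interfaces; the docker command at the end is a commented example that assumes Docker is installed:

```shell
# Every Linux process already runs inside a set of kernel namespaces;
# a container engine simply creates fresh ones for the container.
# The kernel lists a process's namespaces under /proc:
ls /proc/self/ns
# On a modern kernel this shows entries like: cgroup ipc mnt net pid user uts
# A container gets its own set of these entries, e.g.:
#   docker run --rm alpine ls /proc/self/ns
```

So "creating a container" is not magic: it is asking the kernel for fresh namespace entries like the ones listed above.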
Comparison: container versus a virtual machine
A virtual machine always starts a complete, separate OS with its own kernel, while a container uses the host system's OS and kernel. That means the virtual machine has a big performance overhead.
On a given piece of hardware you might be able to run 10 VMs with applications inside, while on the same hardware you can start 50 containers with the same applications. That's a big difference!
A virtual machine always contains a full OS plus the application. That means the image of a virtual machine has a size of 10 GB or much more.
Compared to that, a container brings only the application itself and the shared libraries the application depends on. That's a few MB, maybe up to a few hundred MB.
So a container has a much smaller footprint.
A container will run anywhere, as long as there is a compatible container runtime environment installed.
For example, a developer creates his application in his home office on an Ubuntu system and puts it in a container.
This container will start in a Kubernetes cluster in the company's enterprise environment the same way it started on the developer's notebook, no matter what OS or libraries are installed there.
If the company decides to move from a Docker Swarm cluster to a Red Hat OpenShift cluster or a Rancher cluster, the container will still run!
If the developer creates a new version of his application, he just puts it into an updated container image, and that image can be rolled out in the company's production environment.
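As a sketch of that workflow (the application name, registry address and Dockerfile contents here are made-up examples, not a prescription):

```shell
# Minimal Dockerfile for a hypothetical application binary:
cat > Dockerfile <<'EOF'
FROM alpine:3.19
COPY myapplication /usr/local/bin/myapplication
ENTRYPOINT ["/usr/local/bin/myapplication"]
EOF
# Build the updated image and push it to the company registry
# (requires Docker and registry credentials; names are examples):
# docker build -t registry.example.com/myapplication:2.0 .
# docker push registry.example.com/myapplication:2.0
```

Once pushed, the production cluster pulls exactly this image, so what runs there is bit-for-bit what the developer built.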
Or imagine you want to install some software on your Linux computer.
Normally you do that via RPM or similar package management systems.
But this brings dependencies: you need to have the necessary libraries installed in a specific version for this application.
And that creates conflicts: maybe later there is an update for such a library, but it cannot be installed because your software relies on the specific old version.
Or there is a new version of your application; you try to install it, but it won't run because your OS does not yet provide the updated library which the new version of your application requires.
With containers, you just enter “docker run myapplication”, and the container image with myapplication will be downloaded from Docker Hub. The container image includes all libraries the application needs to run, so the container will start without problems, no matter which library versions are installed on your system.
The trend these days is to split applications up into microservices and to create a separate container for every microservice.
These containers then communicate with each other over the network and together provide the same service as the big application did before.
At first view this may look like overhead: why not pack everything necessary into a single application?
But if you look deeper into it, you will see there is no real overhead: an application that is split up into 5 microservices in containers doesn't load more components into RAM than one big application.
In a big application (aka a monolith) the processes talk to each other via inter-process communication, while in 5 containers they talk over the network.
But the 5 containers can be distributed over several hosts, while the big application can only run on one host. That is a big advantage when it comes to scaling out!
And there is another big advantage: you can treat the containers with the microservices as separate parts of the application and develop and update them separately! That makes development and updates much easier and less error-prone. It also speeds up development and testing enormously and opens up totally new possibilities for developers, like DevOps, Continuous Integration and Continuous Deployment.
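To make this concrete, here is a hedged sketch of how a hypothetical two-service application could be described for Docker Compose, which puts both containers on one network where they reach each other by service name (the image names are invented):

```shell
# compose.yaml for a made-up application split into two microservices:
cat > compose.yaml <<'EOF'
services:
  frontend:
    image: myapp-frontend:1.0   # illustrative image names
    ports:
      - "8080:80"
  backend:
    image: myapp-backend:1.0
EOF
# docker compose up -d    # requires Docker with the compose plugin
```

The frontend container can now talk to the backend simply via the hostname "backend", and each service can be rebuilt and redeployed on its own.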
Cloud-ready applications
You can see so-called “cloud-ready applications” as the next evolution of microservices.
We are talking about microservices that are designed from the start to scale out. Then we call them “cloud-ready”.
What does that mean?
Again, imagine the big application. It runs on one host or in one VM.
If the load on the application starts to increase, the application will do multithreading: it starts more threads or processes to handle the increasing load.
But it can start those only locally, on the OS of the local hardware.
A cloud-ready application does not work this way. It is designed to be load-balanced from the outside.
There is a load balancer in front of a cloud-ready application that is intended to scale out. With little load there might be only 3 container instances of the cloud-ready microservice running on different hosts, and the load balancer distributes the incoming user requests to these 3 containers.
But when the load starts to increase because more users are accessing the application, the management system recognises that, starts more containers on the same host or on other hosts in the cluster, and automatically includes them in the load balancing, so that the increasing load can be handled.
If the load goes down again, the management system recognises that too, stops some containers and takes them out of the load balancing.
So we get an automatic scale-up-and-scale-down system!
This can go even further: new nodes can automatically be started in the cluster to handle more containers and more load, and vice versa, containers are stopped and nodes shut down when the load is low.
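In Kubernetes, for example, this behaviour can simply be declared. The following HorizontalPodAutoscaler is a hedged sketch; the deployment name, replica counts and CPU threshold are invented for illustration:

```shell
# Keep between 3 and 10 replicas of a hypothetical microservice,
# scaling automatically on average CPU utilisation:
cat > hpa.yaml <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myservice
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myservice
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
EOF
# kubectl apply -f hpa.yaml   # requires a running Kubernetes cluster
```

After applying this, the cluster itself starts and stops container instances as the load rises and falls; nobody has to do it by hand.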
Containers are able to close the gap between standard software installation, for example with RPM, and virtual machines.
You get the isolation and one-step installation you know from virtual machines, but without the overhead and size of virtual machines.
They can be used on single hosts as well as in clusters. They can be used locally or in the cloud. They are universal.
I hope you can now see the advantages that containers are able to bring us.
Maybe in the future containers will replace, at least partially, the commonly known package management systems.
Imagine a micro-OS based on Linux that brings nothing more than a kernel, a shell and a container runtime environment.
All dependency worries gone forever 🙂
See you here again soon 🙂