Containerization enables developers to package software code and its required components to run (e.g., frameworks, libraries, and other dependencies) in a single isolated container. Hence, any software or application within a container can be easily moved and used in distinct infrastructures, regardless of the infrastructure's operating system or environment.
Containers make software code more portable (it can easily be moved across platforms and infrastructures) and help keep deployments consistent, since developers can build and run the same application image across different operating systems. However, keep in mind that not all software fits a containerized, microservices-style architecture; UI-heavy applications, for example, are usually virtualized at the hypervisor layer instead, using virtual machines managed with tools such as Vagrant.
Before containers, developers created code in one particular computing environment. When they moved it somewhere new (for instance, from a Linux machine to a Windows one), the code was highly prone to bugs and other errors. Containers solve this major issue by abstracting the software away from the host operating system, making it self-contained and able to run anywhere without environment-specific problems.
In short, thanks to containers, applications can be encapsulated in independent environments, and the primary benefits are scalability, quicker deployment, and closer congruity between environments. Not everyone uses containers, but adoption has shown notable growth throughout the years.
"The report found that 60% of backend developers around the world are now using containers. Compared to Q2 2019, there has been, on average, an increase of 10 percentage points (pp) in the use of containers." (State of Cloud Native Development, 2020)
The concept of containerization itself is not recent, but it took off with the initial release of Docker in 2013 as an open-source technology. Docker still holds a clear lead in the containerization market, with an 82.39% share. In fact, before 2013, Linux already provided container technologies (Linux Containers, or LXC). Still, Docker quickly claimed the throne, becoming the default container format.
Docker is a widely known containerization platform used to develop, ship, and run any application as a portable and self-sufficient container. It can run virtually anywhere from desktops to cloud environments and data centers.
Further, throughout the years, Docker has developed numerous tools to provide a "fully equipped" platform for everything related to containerization. That does not mean all of its tools are the go-to solutions, though, given the heavy competition and the quality of other technologies developed in the field.
One of the primary tools is the Docker Engine, a runtime environment that enables developers to create and run containers on any development machine. To run a Docker container, one can choose to start with a Dockerfile, a file that explicitly defines everything required to build the Docker image (e.g., the base OS, network specifications such as exposed ports, and file locations).
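As a rough illustration, a minimal Dockerfile for a hypothetical Node.js service might look like the sketch below; the file names, port, and base image are assumptions for the example, not part of any real project:

```
# Hypothetical example: containerizing a small Node.js app
FROM node:18-alpine         # base image (OS + runtime)
WORKDIR /app                # file location inside the image
COPY package*.json ./
RUN npm install             # bake dependencies into the image
COPY . .
EXPOSE 3000                 # network specification: port the app listens on
CMD ["node", "server.js"]   # command executed when the container starts
```

Running `docker build -t my-app .` in the same directory would then turn this recipe into an image.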
In turn, the Docker image is a portable, static artifact that can be run on the Docker Engine. Once images are built and ready to run anywhere, developers can store or share them through container registries.
However, just to clarify, a Dockerfile is not strictly required to run a container. Instead, developers can first look up container registries (such as Docker Hub and Azure Container Registry) and pull existing images from them, which saves a lot of time and work. Typically, these registries include a massive number of publicly available images. Therefore - and this is the key takeaway - to run a Docker container, one can simply pull an image from a public container registry, or customize images using a Dockerfile.
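For instance, pulling and running a public image from Docker Hub takes only two commands (the `nginx` image and the port mapping here are just illustrative choices):

```
# Pull an existing image from Docker Hub (the default public registry)
docker pull nginx:latest

# Run it as a container, mapping host port 8080 to port 80 inside
docker run -d --name web -p 8080:80 nginx:latest
```

No Dockerfile is involved at any point; the prebuilt image is downloaded and started as-is.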
As we can perceive, Docker provides an open-source solution to package and distribute containerized applications. However, as the number of containers grows, so does the complexity of managing them: keeping them all deployed, scaled, networked, and healthy falls into a set of orchestration tasks.
To handle these and other complexities, Docker has developed Docker Swarm, a container orchestration technology. To be more accurate, it is Docker Swarm, not the entire Docker platform itself, that is usually compared with Kubernetes.
In fact, Docker is the default and underlying technology for Kubernetes, which natively uses Docker as its runtime and pulls images the same way developers would do manually, that is, using a Docker command. Nonetheless, Kubernetes also supports other alternatives to Docker.
As you may have guessed from the previous paragraph, Kubernetes is a container orchestration technology (like OpenShift or Amazon ECS). Google introduced it in 2014, a year after Docker's release, and nowadays it is maintained by the CNCF (Cloud Native Computing Foundation).
Kubernetes was developed to help users deploy, schedule, scale, and manage containerized applications. It handles containerized workloads by tackling the complexities of efficiently dealing with an extensive number of containers across distinct servers.
In that sense, Kubernetes provides an open-source API that regulates how and where containers run. In this technology, containers are grouped into pods, the basic operational unit of Kubernetes. Once grouped, pods can easily be scaled up or down, and developers can control their lifecycle.
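Pods are declared through that API, typically in YAML. A minimal sketch of a pod wrapping a single container might look like this (the names, labels, and image are assumptions for the example):

```
apiVersion: v1
kind: Pod
metadata:
  name: web-pod        # assumed name for this example
  labels:
    app: web           # label used later to select this pod
spec:
  containers:
    - name: web
      image: nginx:latest    # container image the pod runs
      ports:
        - containerPort: 80  # port the container exposes
```

Applying this manifest with `kubectl apply -f pod.yaml` asks the cluster to schedule the pod onto a suitable node.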
Hence, Kubernetes orchestrates clusters of machines, often virtual machines (VMs), and allows developers to schedule containers to run on them according to the available compute resources and the requirements of each container. Very simply put, when developers need to release code, they simply indicate which cluster to update, and the platform handles the rollout and connection management.
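In practice, that workflow often comes down to a few `kubectl` commands against the target cluster; the context, deployment, and image names below are purely illustrative:

```
# Point kubectl at the cluster to update (context name is an assumption)
kubectl config use-context production

# Roll out a new image version; Kubernetes performs a rolling update
kubectl set image deployment/web web=nginx:1.25

# Scale out according to demand and available compute resources
kubectl scale deployment/web --replicas=5
```

The platform takes care of draining old pods, starting new ones, and rerouting traffic during the update.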
Moreover, Kubernetes supports a vast number of containerization tools, including Docker, which leads us to the next topic.
As mentioned, if we wished to compare Docker vs. Kubernetes, then a fairer comparison would be between Docker Swarm and Kubernetes, which are both container orchestration technologies. Truth be told, Docker may lead the market share in containerization, but it is not as successful in orchestration. Kubernetes is more extensive and the leader in that race, having 80.9k stars on GitHub compared with Docker Swarm's 5.8k stars.
Docker itself and Kubernetes are indeed complementary technologies. Even though both touch on similar concerns, they are actually very different and can work perfectly together. As should now be clear, Docker is one of the main underlying technologies of Kubernetes, which in turn is able to fully manage the containers instantiated by Docker through its control plane.
Moreover, Kubernetes includes numerous features that are advantageous when handling container orchestration, such as load-balancing, security, networking, a built-in isolation mechanism, self-healing, and the ability to scale across all the nodes that run the deployed containers.
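The built-in load-balancing, for example, is exposed through a Service object, which distributes traffic across all pods matching a label selector. A minimal sketch (names and labels are assumptions carried over from a hypothetical `app: web` deployment):

```
apiVersion: v1
kind: Service
metadata:
  name: web-service     # assumed name for this example
spec:
  type: LoadBalancer    # ask the platform for an external load balancer
  selector:
    app: web            # route traffic to pods carrying this label
  ports:
    - port: 80          # port the service exposes
      targetPort: 80    # port on the pods to forward to
```

If a pod dies, the self-healing machinery replaces it and the Service automatically picks up the replacement.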
We need to understand how useful each tool is to make the most out of the combination of Docker and Kubernetes. According to the 2020 report on the "State of Cloud Native Development", despite the popularity of containers, not everyone actually uses orchestration technologies to manage them. The reason is simply that not everyone needs them, especially when working with small applications and a low, controllable number of containers.
To be more precise, as software needs grow, containerized applications also have to scale. To truly benefit from a microservices architecture, all the requirements must be in place; otherwise, instead of an advantage, containerization becomes another liability in the tech stack.
Therefore, using Kubernetes or a similar tool is not mandatory; it is, however, highly recommended for infrastructures that wish to scale and must handle a very high number of containers across distributed systems. According to the 2020 report "Containers in the enterprise - Rapid enterprise adoption continues" conducted by IBM Market Development & Insights, only 15% of respondents answered "Never" when asked whether they use containers without orchestration solutions.
Further, respondents also highlighted a number of top benefits of using orchestration solutions.
As announced upon the Kubernetes 1.20 release, "Docker support in the kubelet is now deprecated and will be removed in a future release." Initially, this caused a bit of panic, as many developers thought it would be the end of Docker and, consequently, of the great combination of Docker plus Kubernetes. Fortunately, on the 2nd of December 2020, Kubernetes published a blog post clarifying that "It's not as dramatic as it sounds".
As they explain, inside a Kubernetes cluster there is a component called the "container runtime" that is used to pull and run the container images. Docker is the most popular option for that runtime, but the issue is that it was not designed to be embedded inside Kubernetes.
Docker was built to be human-friendly - which makes it even greater, for humans - but not very suitable for a piece of software like Kubernetes. Consequently, Kubernetes has to go through a compatibility layer called Dockershim to reach what it really needs, which is containerd. This is far from ideal for Kubernetes, because it is an extra technology the project has to maintain and handle.
In conclusion, what the announcement truly states is that Dockershim is being removed from the kubelet, which removes support for Docker as a container runtime. Images built with Docker will continue to work in a developer's Kubernetes cluster, as usual.
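Developers curious about which runtime their own cluster uses can check it directly; the command below lists each node along with its container runtime (the exact version string in the output will vary per cluster):

```
# The CONTAINER-RUNTIME column shows the runtime in use,
# e.g. containerd://1.6.x or docker://20.x
kubectl get nodes -o wide
```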
Even though it is not necessarily the best approach for every workload or application, the benefits of containerization have captured the attention of many developers and businesses. Some of the top advantages include improved application quality, improved productivity, reduced application downtime, and faster response to changes.
But how can Docker and Kubernetes work together to make the most out of containerization?
On the one hand, Docker enables developers to package their applications into isolated containers through the command line. Afterward, those applications are able to run across the developers' IT environments.
On the other hand, Kubernetes offers an orchestration solution that schedules and automates containerization tasks, such as management, scaling, deployment, and networking throughout the application's lifecycle.
Therefore, Docker and Kubernetes can - and often should - complement each other. Moreover, combining these technologies with DevOps practices can provide a baseline of microservices architecture that allows for fast delivery as well as scalability of cloud-native applications.
Marketing intern with a particular interest in technology and research. In my free time, I play volleyball and spoil my dog as much as possible.
Security and Cloud Operations expert. Background in Public Transport, Finance, and Government. Usually trading coins on decentralized exchanges :)