Getting Familiar with Docker


Introduction to Docker

In everyday terms, a container is an object for holding or transporting something. In software, the same concept has become a central part of application operations, but it requires dedicated tooling to create, deploy, and manage these "containers".

Enter Docker, an open-source container platform that performs these tasks by reproducing consistent environments in which applications run, reducing compatibility and dependency issues. Docker has three important parts:

1) Dockerfile

A Dockerfile is the blueprint used to build a Docker image (explained next). It is a text document containing all the commands a user could run on the command line to assemble an image automatically with Docker. In it we declare things such as the base image, dependency-installation steps (e.g. npm install), environment variables, and the ports the application listens on or exposes; a companion .dockerignore file lists paths to exclude from the build context.
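As a sketch, here is what a minimal Dockerfile for a Node.js application might look like. The file names (package.json, server.js) and the port 8080 are assumptions for illustration, not part of any particular project:

```dockerfile
# Hypothetical Node.js app: assumes package.json and server.js exist
FROM node:20-alpine         # base image layer
WORKDIR /app                # working directory inside the image
COPY package*.json ./       # copy manifests first so the install layer caches
RUN npm install             # install dependencies
COPY . .                    # copy the rest of the source code
ENV NODE_ENV=production     # environment variable baked into the image
EXPOSE 8080                 # document the port the app listens on
CMD ["node", "server.js"]   # default process for containers from this image
```

Copying the manifests before the source code means Docker can reuse the cached npm install layer when only application code changes.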

2) Image

A Docker image contains the application code, libraries, tools, dependencies, and other files needed to run an application. Images are immutable snapshots or templates, built as efficient layers, from which Docker containers run. One image can spawn the same process many times in many places, scaling containers to a near-"infinite" workload. Tools to keep in mind: Kubernetes, Docker Swarm, Docker Compose, etc.
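Building and inspecting an image might look like the following (a sketch that assumes a Dockerfile in the current directory and a running Docker daemon; "myapp" is an arbitrary tag):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# List local images
docker image ls

# Show the layers the image is built from
docker history myapp:1.0
```

The docker history output makes the layered structure visible: each Dockerfile instruction that changes the filesystem produces one layer.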

3) Containers

Containers are isolated environments with their own processes/services, network interfaces, mounts, etc., but they all share the same OS kernel. Because Docker images are read-only templates, we cannot execute our code on them directly; instead we use Docker containers, which are running instances created from those templates. Images can exist without containers, whereas a container needs an image to exist. Containers therefore depend on images, using them to construct a runtime environment and run an application.
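The image-to-container relationship can be sketched with a few commands (assumes an image tagged "myapp:1.0" already exists and a Docker daemon is running; "web" is an arbitrary container name):

```shell
# Start a container (a running instance) from the image
docker run -d --name web myapp:1.0

# List running containers
docker ps

# Stop and remove the container; the image itself is untouched
docker stop web
docker rm web
```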

The strongest feature of containers is that they can do what virtual machines were made to do, but with lower resource utilization, more resource sharing, smaller image and disk sizes, and faster boot times.

Flash Notes for Docker:

  • Docker helps to eliminate the "Matrix from Hell" by decoupling the application from the underlying operating system and hardware.

  • We should follow the microservices architecture, with one process per container. If we have multiple processes, we should use multiple containers.

  • Stopping a container does not delete its data, but removing the container discards its writable layer. To persist files and share data across multiple containers, we use Volumes.

  • Docker also helps with load balancing via Docker Swarm and its ingress routing mesh, and supports port forwarding using --publish, -p, or -P, e.g. docker run -p 5000:8080 imageid (host port 5000 maps to container port 8080).

  • Docker is written in the Go language and builds on Linux kernel features such as namespaces and cgroups (early versions used LXC). A Docker container does not include a separate OS; it relies on the host kernel as provided by the underlying infrastructure.
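The volume and port-forwarding points above can be combined into one sketch (assumes a Docker daemon and an image tagged "myapp:1.0"; the volume name "appdata" and mount path /data are illustrative):

```shell
# Create a named volume to persist data
docker volume create appdata

# -v mounts the volume at /data inside the container;
# -p maps host port 5000 to container port 8080
docker run -d --name web -v appdata:/data -p 5000:8080 myapp:1.0

# The volume survives even after the container is removed
docker rm -f web
docker volume ls
```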

Do share your feedback; any suggestions and criticism are welcome :)