Docker Architecture
The Docker architecture uses a client-server model
and comprises the Docker client, the Docker host, network and storage components,
and the Docker registry (such as Docker Hub). Let's look at each of these in some detail.
Docker Client
The Docker client enables users to interact with
Docker. The client can reside on the same host as the daemon or connect to a
daemon on a remote host, and a single client can communicate with more than one
daemon. The Docker client provides a command-line interface (CLI) that allows
you to issue build, run, and stop commands to a Docker daemon.
The main purpose of the Docker client is to provide
a means to retrieve images from a registry and run them on a Docker host.
Common commands issued by a client are:
docker build
docker pull
docker run
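A typical client workflow ties these commands together. The image and container names below are illustrative, not part of the original article; the commands assume a running Docker daemon:

```shell
# Pull a base image from a registry (Docker Hub is the default)
docker pull nginx:alpine

# Build an image from a Dockerfile in the current directory,
# tagging it so it can be referred to later
docker build -t my-app:1.0 .

# Run a container from the image, mapping host port 8080 to port 80
# inside the container
docker run -d -p 8080:80 --name my-app my-app:1.0
```

Each command is sent by the client to the daemon, which does the actual work of pulling, building, and running.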
Docker Host
The Docker host provides a complete environment in
which to execute and run applications. It comprises the Docker daemon, images,
containers, networks, and storage. As previously mentioned, the daemon is
responsible for all container-related actions and receives commands through the
CLI or the REST API. It can also communicate with other daemons to manage its
services. The Docker daemon pulls and builds container images as requested by
the client. Once it has a requested image, it builds a container from it using
a set of instructions known as a build file. The build file can also include
instructions to be executed on the command line while the image is being built.
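As a sketch of what such a build file looks like, here is a minimal Dockerfile. The base image, file names, and commands are illustrative assumptions, not taken from the article:

```dockerfile
# Minimal build file (Dockerfile); names and commands are illustrative
FROM python:3.12-slim                  # base image pulled from a registry
WORKDIR /app                           # working directory inside the image
COPY requirements.txt .                # copy dependency list from the build context
RUN pip install -r requirements.txt    # instruction executed during the build
COPY . .                               # copy the application code
CMD ["python", "app.py"]               # command run when a container starts
```

Running `docker build` against this file produces an image; `docker run` then starts a container from it.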
Docker Objects
Various objects are used in assembling your
application. The essential Docker objects are:
Images
An image is a read-only binary template used to build
containers. Images also contain metadata that describes the container's
capabilities and needs, and they are used to store and ship applications. An
image can be used as-is or customized with additional configuration. Container
images can be shared across teams using a private registry, or shared publicly
using a registry like Docker Hub. Images are central to the Docker experience,
as containers cannot run without them.
Containers
Containers are encapsulated environments in which you
run applications. A container is defined by its image and any additional
configuration options provided when it is created, including, but not limited
to, network connections and storage options. Containers only have access to
resources that are defined in the image. You can also create a new image based
on the current state of a container. Since containers are much smaller than
VMs, they can be spun up in a matter of seconds, and result in much better
server density.
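Creating a new image from a container's current state is done with `docker commit`. The container and image names below are illustrative:

```shell
# Start a container and make a change inside it (names are illustrative)
docker run -it --name my-container ubuntu:22.04 bash
# ... inside the container: install packages, edit files, then exit ...

# Create a new image from the container's current state
docker commit my-container my-image:customized

# The new image can now be used to start further containers
docker run -it my-image:customized bash
```

In practice a build file is usually preferred over `docker commit`, since it makes the image's contents reproducible.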
Networking
Docker implements networking in an
application-driven way and provides various options while maintaining enough
abstraction for application developers. There are basically two types of
networks available: the default Docker networks and user-defined networks. By
default, you get three different networks on installing Docker: none, bridge,
and host. The none and host networks are part of Docker's network stack on the
host. The bridge network automatically creates a gateway and an IP subnet, and
all containers that belong to this network can communicate with each other via
IP addresses. This network is not commonly used, as it does not scale well and
has constraints in terms of network usability and service discovery.
The other type is user-defined networks.
Administrators can configure multiple user-defined networks. There are three
types:
Bridge network:
A user-defined bridge network is
similar to the default bridge network, but differs in that containers attached
to it can communicate with each other without port forwarding. The other
difference is that it has full support for automatic service discovery:
containers can resolve each other by name.
Overlay network: An overlay network is used when
containers running on different Docker hosts need to communicate with each
other, as in the case of a distributed application. However, a caveat is that
swarm mode must be enabled for a cluster of Docker engines, known as a swarm,
to be able to join the same network.
Macvlan network:
When using bridge and overlay
networks, a bridge resides between the container and the host. A Macvlan
network removes this bridge, exposing containers directly to external networks
without dealing with port forwarding. It does this by assigning containers
their own MAC addresses, so they appear as physical devices on the network.
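The three user-defined network types are created with `docker network create`. The network names, subnet, gateway, and parent interface below are illustrative assumptions; the overlay and Macvlan examples require swarm mode and a suitable host interface, respectively:

```shell
# User-defined bridge network: containers on it can reach each other by name
docker network create --driver bridge my-bridge

# Overlay network: swarm mode must be enabled first
docker swarm init
docker network create --driver overlay my-overlay

# Macvlan network: subnet, gateway, and parent interface are illustrative
docker network create --driver macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 my-macvlan
```

A container is attached to a network at start time with `docker run --network my-bridge ...`.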
Storage
You can store data within the writable layer of a
container, but it requires a storage driver. Being non-persistent, this data
perishes whenever the container is deleted. Docker offers several options for
persistent storage:
Data Volumes:
Data volumes provide the
ability to create persistent storage, with the ability to rename volumes, list
volumes, and list the container associated with a volume. Data volumes sit on
the host file system, outside the container's copy-on-write mechanism, and are
fairly efficient.
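The volume operations described above map to the `docker volume` subcommands. The volume and container names are illustrative:

```shell
# Create and list named volumes (names are illustrative)
docker volume create app-data
docker volume ls

# Mount the volume into a container; the data persists after
# the container is removed
docker run -d --name db -v app-data:/var/lib/mysql mysql:8

# List the containers associated with the volume
docker ps --filter volume=app-data
```

Removing the container leaves `app-data` intact; it is deleted only by an explicit `docker volume rm`.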
Data Volume Container:
A data volume container is an alternative approach wherein a
dedicated container hosts a volume, and that volume is mounted into other
containers. In this case, the volume container is independent of the
application container and can therefore be shared across more than one
container.
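A sketch of the pattern, using the `--volumes-from` flag; all names and paths here are illustrative:

```shell
# Dedicated volume container (it only needs to exist, not run)
docker create -v /shared-data --name data-container alpine

# Other containers mount its volumes with --volumes-from
docker run -d --name writer --volumes-from data-container alpine \
  sh -c 'echo hello > /shared-data/msg.txt && sleep 3600'
docker run --rm --volumes-from data-container alpine \
  cat /shared-data/msg.txt
```

Both the writer and the reader see the same `/shared-data` directory, even though it belongs to neither of them.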
Directory Mounts:
Another option is to mount a host's local directory into a container.
In the previously mentioned cases, the volumes must live within the Docker
volumes folder, whereas with directory mounts any directory on the host
machine can be used as a source for the volume.
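A directory mount is expressed with the same `-v` flag, using a host path as the source. The paths and image are illustrative:

```shell
# Bind-mount a host directory into a container, read-only
# (paths and image are illustrative)
docker run -d --name web \
  -v /home/user/site:/usr/share/nginx/html:ro \
  nginx:alpine
```

Changes made in the host directory are immediately visible inside the container, which makes this option popular during development.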
Storage Plugins:
Storage Plugins provide the ability to connect to external storage
platforms. These plugins map storage from the host to an external source like a
storage array or an appliance. A list of storage plugins can be found on
Docker’s Plugin page.
There are storage plugins from various
companies to automate the storage provisioning process. For example:
• HPE 3PAR
• EMC (ScaleIO, XtremIO, VMAX, Isilon)
• NetApp
There are also plugins that support public
cloud providers, such as:
• Azure File Storage
• Google Compute Platform
Docker Registries
Docker registries are services that
provide locations from which you can store and download images. In other
words, a Docker registry contains repositories that host one or more Docker
images. Public registries include Docker Hub and Docker Cloud, and private
registries can also be used. Common commands when working with registries
include:
docker push
docker pull
docker run
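Pushing to a registry involves tagging an image with the registry's address first. The registry hostname, repository, and tags below are illustrative assumptions:

```shell
# Tag a local image for a registry and push it
# (registry and repository names are illustrative)
docker tag my-app:1.0 registry.example.com/team/my-app:1.0
docker login registry.example.com
docker push registry.example.com/team/my-app:1.0

# On another host, pull and run the same image
docker pull registry.example.com/team/my-app:1.0
docker run -d registry.example.com/team/my-app:1.0
```

When no registry hostname is given, `docker push` and `docker pull` default to Docker Hub.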
Service Discovery
Service discovery allows containers to
find out about the environment they are in and to discover services offered by
other containers.
It is an important factor when trying to
build scalable and flexible applications.
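In Docker, the simplest form of service discovery is the embedded DNS server on user-defined networks, which resolves container names to addresses. The network and container names below are illustrative:

```shell
# Containers on a user-defined network can resolve each other by name
# (names are illustrative)
docker network create app-net
docker run -d --name api --network app-net nginx:alpine
docker run --rm --network app-net alpine ping -c 1 api
```

Here the second container reaches the first by the name `api`, without knowing its IP address in advance.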
Conclusion
Now that we have seen the various components of the Docker architecture and how they work together, we can begin to understand the rise in popularity of Docker containers, the uptake of DevOps, and microservices. We can also see how Docker helps simplify infrastructure management by making the underlying instances lighter, faster, and more resilient. Additionally, Docker separates the application layer from the infrastructure layer and brings much-needed portability, collaboration, and control to the software delivery chain.