What Is Docker And How Should We Use It?

Docker is a software platform that facilitates the process of building, executing, managing, and distributing applications. For some time now, Docker has been widely adopted by teams in the DevOps field.

Accordingly, it is essential to become familiar with its overall framework and learn to use Docker: proficiency with Docker, or with related tools such as Kubernetes, is increasingly one of the key qualifications that companies mention in job ads.

What is Docker?

Docker is an open-source project that automates the deployment of applications inside containers by adding an extra abstraction layer on top of operating-system-level virtualization. Simply put, Docker is a tool that allows developers to quickly build applications in a sandbox called a container and run them on the host operating system.

Here, a sandbox is an isolated test environment on a system that allows a software program to run without affecting other hardware or software. Docker allows users to package a program with all of its dependencies into a standardized unit for software development.

Unlike virtual machines, containers have little overhead, so they make more efficient use of the underlying infrastructure and system resources.


What is a Docker Container?

Today, the software world is moving towards running applications in virtual machines, where programs often run on a guest operating system. This guest operating system runs on virtual hardware provided by the host operating system on the server. A container, by contrast, is a standard software mechanism that packages code with all of its dependencies so that the application runs quickly and reliably across different computing environments.

Virtual machines perform well at isolating application processes: with virtual machines, a problem in one guest rarely affects the host operating system or the other software running on it. This isolation, however, is costly, because the computation spent on hardware virtualization to run a guest operating system carries significant overhead.

Containers take a different approach. By utilizing the low-level capabilities of the host operating system, containers provide most of the isolation of virtual machines at a fraction of the computing cost.

This technology was released in 2013 as the open-source Docker Engine.

This technology makes the best use of container-related concepts from the Linux world, such as namespaces. Docker is unique in that it focuses on the needs of developers and operators, isolating application dependencies from the underlying infrastructure.

What is a Container Image in Docker?

A Docker container image is a small, standalone, executable software package that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings. Container images become containers at runtime; in Docker's case, an image becomes a container when it runs on the Docker Engine. This works the same way for both Windows and Linux applications. Containers isolate software from its environment, ensuring that the application behaves uniformly throughout design and development.
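As a minimal sketch of this image-to-container workflow (the Dockerfile, the app.py script, and the python:3.11-slim base image are illustrative assumptions, not part of the original article), an image is described in a Dockerfile, built once, and then run as a container:

# Dockerfile (hypothetical example): package a small Python script with its runtime
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]

Building this file produces an image, and running that image starts a container:

$ docker build -t my-app .
$ docker run my-app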

Why should we use containers?

Containers offer a powerful packaging mechanism that abstracts and isolates applications from the environment in which they actually run. This isolation allows container-based applications to be deployed quickly and consistently, regardless of the nature of the host environment (a private data center, a public cloud, or a developer's laptop). Developers can therefore create predictable environments that are isolated from other applications and can run anywhere.

From an operational point of view, in addition to portability, containers offer more fine-grained control over resources, which improves infrastructure performance and makes more efficient use of computational resources. Because of these capabilities, many developers have become interested in containers and Docker.

What is the difference between a container and a virtual machine?

Containers and virtual machines offer similar benefits in terms of isolation and resource allocation, but the two technologies achieve them differently: containers virtualize the operating system instead of the hardware, which makes them more portable and efficient, while virtual machines provide stronger, hardware-level isolation.

Virtual machines turn a single server into multiple servers by abstracting the physical hardware. A software layer called a hypervisor creates the virtual environment and allows several virtual machines to run on one physical machine. Each virtual machine contains a complete copy of its own operating system, the application, its processes and services, the required libraries, and so on, which takes up several gigabytes of space. In addition, booting virtual machines is time-consuming.

Containers, on the other hand, provide abstraction at the application layer, so the code and its dependencies are packaged together. For this reason, several containers can run on the same machine, sharing the operating system kernel while each runs in user space as a separate, independent process.

Containers also take up far less space than virtual machines, and a single host can support more applications, significantly reducing the number of virtual machines and operating systems required.
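As a quick, hedged illustration of kernel sharing on a Linux host (using the publicly available busybox image, which also appears later in this article), the kernel version reported inside a container matches the host's, because the container brings no kernel of its own:

$ uname -r
$ docker run busybox uname -r

Both commands print the same kernel version, since the container is simply an isolated process running on the host kernel.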

How to use Docker?

To work with Docker, you need basic skills in developing web-based applications and in Linux. If you plan to learn Linux in order to use Docker, we suggest you become familiar with the following concepts. Another thing to keep in mind is to create an account on both the Amazon and Docker Hub websites.

  • Cgroups: a process-isolation mechanism that groups processes so that containers can run without interfering with one another.
  • Namespaces: used to partition resources such as the network stack between containers.
  • Copy on Write (COW): a resource-management technique for handling image layers in read-only mode.
  • Volumes and bind mounts: mechanisms for managing data in containers and for tasks such as mounting host files into a container.

Introduction to networking: we recommend strengthening your knowledge of computer-networking concepts and technologies such as sockets, routing, IP protocols, bridges, virtual networks, iptables, ports, and client-server architecture so that you can use Docker without difficulty. In general, Docker consists of two parts, a client and a server, which communicate with each other through a socket, over the network, or via a file. Docker also uses bridges and NAT to build virtual networks on the computer, and it relies on the iptables firewall to transfer packets between containers and the Internet and to keep this communication safe. As you can see, mastering Linux and networking concepts helps you better understand and use Docker.
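To make these concepts concrete, here is a hedged sketch that touches volumes and networking at once (the nginx image and the ./site directory are illustrative assumptions): the command bind-mounts a host directory into a container and publishes a container port on the host through Docker's bridge network and NAT rules:

$ docker run -d -p 8080:80 -v "$(pwd)/site:/usr/share/nginx/html:ro" nginx

Here -p 8080:80 maps host port 8080 to container port 80, and -v mounts the host's ./site directory read-only into the container's web root.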

Install Docker on Windows

It is not difficult to prepare all the necessary tools and prerequisites on a personal computer, because Docker is easy to set up and run on any operating system. Below, we look at how to install Docker on Windows and then on Linux.

First, go to https://www.docker.com/products/docker-desktop and click the Download for Windows button to download the installation file, which is about 500 MB in size. It is also best to create an account to use Docker Hub; to do this, click the Get Started button in the upper-right corner of the page above to reach the account-creation page (Figure 1).

After downloading, run the Docker Desktop Installer file. The installation process is simple: just click the Install option. Once installation completes, reboot the system so that Docker Desktop starts automatically. At first, you may see an error asking you to install the Linux kernel update package on Windows.

If you see this error message, download and install the Linux kernel update package, then click the Continue button in the Docker Desktop error window. The Docker Engine then loads automatically inside the Docker Desktop application.

After completing the above steps, a getting-started tutorial opens automatically inside Docker Desktop and guides you through your first steps with Docker.

Install Docker on Linux

Docker can be installed on various Linux distributions. Below, we look at installation on Ubuntu using the apt package manager; installation instructions for other Linux distributions can be found in the Docker documentation. The easiest way to install the latest version of Docker is through the distribution's package manager. First, you must add the Docker repository, update the package list, and then install Docker. These steps are performed in the following order:

sudo apt-get update

sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release

Next, it is time to add Docker's official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Next, you need to add the repository to your apt sources and update the package list:

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update

Finally, we install Docker using the following command:

sudo apt-get install docker-ce docker-ce-cli containerd.io

After installing Docker, use the docker run hello-world:latest command to make sure everything is installed correctly.
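As a quick check (a hedged sketch; the exact welcome text can vary between Docker versions), you can confirm the client is installed and then run the test image:

docker --version
sudo docker run hello-world:latest

If the test container prints its welcome message, the installation is working.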

A Hello World app to start Docker training

Once everything is ready, it is time to run Docker. In this section, we run a Busybox container on the system, starting with the docker run command. Busybox is a software package that provides several Unix tools in a single executable file. To get started, run the following command in the command line:

$ docker pull busybox

Depending on how Docker is installed on the system, a Permission Denied error may be displayed after executing the above command. If you are using Linux and see this error, prefix the command with sudo. If you are using macOS, make sure the Docker Engine is running.
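If you prefer not to prefix every command with sudo on Linux, a common approach, sketched here from the standard post-installation steps, is to add your user to the docker group:

$ sudo groupadd docker            # create the docker group (it may already exist)
$ sudo usermod -aG docker $USER   # add the current user to that group
$ newgrp docker                   # apply the new membership in the current shell

After this, Docker commands run without sudo for that user.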

As noted above, adding your user to the docker group avoids this problem. The pull command fetches the Busybox image from the Docker registry and saves it on the system. You can use the docker images command to view a list of images on the system:

$ docker images

Now to execute the busybox image, use the docker run command as follows:

$ docker run busybox

Executing the above command appears to do nothing, but a lot has happened behind the scenes. When you use the run command, the Docker client finds the image (Busybox), loads it into a container, and executes a command in that container.

However, no command was passed when docker run busybox was executed, so after loading, the container ran an empty command and exited. That is why nothing appeared to happen. Here is how to pass a command to the container:

$ docker run busybox echo "hello from busybox"

This time you see the following output:

hello from busybox

This time, the docker run command produces a meaningful output: the Docker client runs the echo command in the Busybox container and then exits. As you can see, all of this happens quickly. Now it is time to look at the docker ps command, which shows the containers that are currently running; adding the -a switch lists all containers, including those that have exited:

$ docker ps -a

The output of the above command is a list of all the containers run so far; the CREATED column shows when each container was created.

Is there a way to execute more than one command in a container? The answer is yes: an interactive session started with docker run makes this possible, as follows:

$ docker run -it busybox sh

/ # ls

bin dev etc home proc root sys tmp usr var

/ # uptime

 05:45:21 up 5:58, 0 users, load average: 0.00, 0.01, 0.04

Running the run command with the -it switches attaches an interactive tty to the container, so you can then enter any commands you like. Use docker run --help to learn about the other uses and switches of the run command.
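For example (a hedged sketch: the nginx image, the container name, and the port numbers are illustrative assumptions), the same run command can start a long-running container in the background:

$ docker run -d --name web -p 8888:80 nginx   # -d detaches, --name sets a name, -p publishes a port
$ docker ps                                   # the new container appears among the running containers
$ docker stop web                             # stop it by name when you are done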

How to remove containers in Docker?

As you saw earlier, it is possible to get information about a container even after it has exited by using the docker ps -a command. The critical thing to note is that leftover containers keep taking up disk space for no reason, so it is better to clean up after you finish working with a container. The docker rm command is used for this: copy the container IDs and pass them to the command as follows:

$ docker rm 305297d7a235 ff0a5c3750b9

When a container is removed, its ID is echoed in the output. It is also possible to remove several containers at once; if you want to delete many containers, copying and pasting each ID is tedious. To solve this problem, you can use the following command:

$ docker rm $(docker ps -a -q -f status=exited)

The above command removes all containers whose status is exited. Note that the -q switch returns only the numeric container IDs, and the -f switch filters the output based on the given condition. Another option worth mentioning is the --rm flag, which can be passed to the docker run command, as in the sketch below.
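A minimal example of the --rm flag, reusing the busybox image from earlier (the echoed message is only an illustration):

$ docker run --rm busybox echo "one-off task"   # the container is removed automatically when it exits
$ docker ps -a                                  # it no longer appears in the container list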

This flag automatically removes the container when it exits, which is helpful for containers that run only once. In newer versions of Docker, you can also use the docker container prune command to clear all stopped containers:

$ docker container prune

In this case, a warning message asks you to confirm that the command will remove all stopped containers. Finally, if you want to delete images you no longer need, use the docker rmi command.
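As a closing sketch (the busybox image from earlier is used for illustration), removing images looks like this:

$ docker images          # list the images stored locally
$ docker rmi busybox     # remove the busybox image by name
$ docker image prune     # optionally, remove all dangling images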