What Is Docker and How Do You Use It? (Docker Complete Guide)
Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. Because all of the containers share the services of a single operating system kernel, they use fewer resources than virtual machines.
The service has both free and premium tiers. The software that hosts the containers is called Docker Engine. It was first released in 2013 and is developed by Docker, Inc.
Developers can create containers without Docker, but the platform makes it easier, simpler, and safer to build, deploy, and manage them. Docker is essentially a toolkit that enables developers to build, deploy, run, update, and stop containers using simple commands and work-saving automation through a single API.
Docker also refers to Docker, Inc., the company that sells the commercial version of Docker, and to the Docker open source project, to which Docker, Inc. and many other organizations and individuals contribute.
How containers work, and why they’re so popular
Containers are made possible by process isolation and virtualization capabilities built into the Linux kernel. These capabilities, such as control groups (cgroups) for allocating resources among processes and namespaces for restricting a process's access to other resources, enable multiple application components to share the resources of a single instance of the host operating system, in much the same way that a hypervisor enables multiple virtual machines (VMs) to share the CPU, memory, and other resources of a single hardware server.
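As a quick illustration, the sketch below uses the docker CLI to apply cgroup-based resource limits to a container; the image name and limit values are arbitrary placeholders:

```sh
# Run a container with cgroup-enforced resource limits (values are illustrative):
# --memory caps RAM through the memory cgroup; --cpus limits CPU time.
docker run -d --name limited-app --memory=256m --cpus=0.5 nginx:alpine

# The container gets its own PID, network, and mount namespaces,
# so it sees only its own processes:
docker exec limited-app ps
```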
As a result, container technology offers all the functionality and benefits of VMs – including application isolation, cost-effective scalability, and disposability – plus important additional advantages:
Lighter weight: Unlike VMs, containers don’t carry the payload of an entire OS instance and hypervisor; they include only the OS processes and dependencies necessary to execute the code. Container sizes are measured in megabytes (vs. gigabytes for some VMs), so containers make better use of hardware capacity and have faster startup times.
Greater resource efficiency: With containers, you can run several times as many copies of an application on the same hardware as you can using VMs. This can reduce your cloud spending.
Improved developer productivity: Compared to VMs, containers are faster and easier to deploy, provision, and restart. This makes them ideal for use in continuous integration and continuous delivery (CI/CD) pipelines, and a better fit for development teams adopting Agile and DevOps practices.
Alternatives to Docker
Linux containers have facilitated a massive shift in high-availability computing. There are many toolsets out there to help you run services, or even your entire operating system, in containers. The Open Container Initiative (OCI) is an industry standards organization that encourages innovation while avoiding the danger of vendor lock-in. Thanks to the OCI, you have a choice of container toolchains, including Docker, CRI-O, Podman, LXC, and others.
By design, containers can multiply quickly, whether you’re running lots of different services or many instances of a few services. Should you decide to run services in containers, you will probably need software designed to host and manage those containers, broadly known as container orchestration. While Docker and other container engines like Podman and CRI-O are good utilities for container definitions and images, their job is to create and run containers, not to help you organize and manage them. Projects like Kubernetes and OKD provide container orchestration for Docker, Podman, CRI-O, and more.
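As a minimal sketch of what orchestration adds, assuming a running Kubernetes cluster and a hypothetical image named myapp:1.0:

```sh
# Ask Kubernetes to run three replicas of a container image
kubectl create deployment myapp --image=myapp:1.0 --replicas=3

# Scale the workload without creating or stopping containers by hand
kubectl scale deployment myapp --replicas=5
```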
When running any of these in production, you may want to invest in support through a downstream project like OpenShift (which is based on OKD).
The open source components of Docker are gathered in a product called Docker Community Edition, or docker-ce. These include the Docker engine and a set of terminal commands to help administrators manage all the Docker containers they are using. You can install this toolchain by searching for docker in your distribution’s package manager.
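For example, the install usually looks something like this; the exact package name varies by distribution, so treat these commands as sketches:

```sh
# Debian/Ubuntu (the package may be named docker.io or docker-ce)
sudo apt-get install docker.io

# Fedora and related distributions
sudo dnf install docker

# Start the daemon at boot and verify the install
sudo systemctl enable --now docker
docker --version
```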
Why use Docker?
Docker is so popular today that “Docker” and “containers” are used almost interchangeably. But the first container-related technologies were available for years, even decades, before Docker was released to the public in 2013.
Most notably, in 2008, Linux Containers (LXC) was implemented in the Linux kernel, fully enabling virtualization for a single instance of Linux. While LXC is still used today, newer technologies using the Linux kernel are available. Ubuntu, a modern, open-source Linux operating system, also provides this capability.
Docker enhanced the native Linux containerization capabilities with technologies that enable:
- Improved and seamless portability: While LXC containers often reference machine-specific configurations, Docker containers run without modification across any desktop, data center, and cloud environment.
- Even lighter weight and more granular updates: With LXC, multiple processes can be combined within a single container. With Docker containers, only one process can run in each container. This makes it possible to build an application that can continue running while one of its parts is taken down for an update or repair.
- Automated container creation: Docker can automatically build a container based on application source code.
- Container versioning: Docker can track versions of a container image, roll back to previous versions, and trace who built a version and how (see the sketch after this list). It can even upload only the deltas between an existing version and a new one.
- Container reuse: Existing containers can be used as base images—essentially like templates for building new containers.
- Shared container libraries: Developers can access an open-source registry containing thousands of user-contributed containers.
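Here is the versioning sketch promised above, using a hypothetical myapp image:

```sh
# Give one build two tags: an explicit version and a moving "latest" alias
docker tag myapp:1.0 myapp:latest

# Inspect the layers that make up the image and how each one was built
docker history myapp:1.0
```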
Today Docker containerization also works with Microsoft Windows Server, and most cloud providers offer specific services to help developers build, ship, and run applications containerized with Docker.
For these reasons, Docker adoption exploded quickly and continues to surge. At this writing, Docker, Inc. reports 11 million developers and 13 billion container image downloads every month.
Docker tools and terms
Some of the tools and terminology you’ll encounter when using Docker include:
Dockerfile
Every Docker container starts with a simple text file containing instructions for how to build the Docker container image. The Dockerfile automates the process of Docker image creation. It’s essentially a list of command-line interface (CLI) instructions that Docker Engine will run in order to assemble the image.
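A minimal sketch, assuming the application to package is a single hypothetical Python script named app.py:

```sh
# Write a minimal Dockerfile; each instruction becomes a layer of the image
cat > Dockerfile <<'EOF'
# Start from an existing base image
FROM python:3.12-slim
# Set the working directory inside the image
WORKDIR /app
# Copy the application code into the image
COPY app.py .
# Define the command the container runs at startup
CMD ["python", "app.py"]
EOF

# Ask Docker Engine to assemble an image from the Dockerfile
docker build -t myapp:1.0 .
```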
Docker images
Docker images contain executable application source code as well as all the tools, libraries, and dependencies that the application code needs to run as a container. When you run a Docker image, it becomes one instance (or multiple instances) of the container.
It’s possible to build a Docker image from scratch, but most developers pull them down from common repositories. Multiple Docker images can be created from a single base image, and they’ll share the commonalities of their stack.
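Pulling from a common repository looks like this (nginx is just a familiar example):

```sh
# Download a specific version of an image from a public repository
docker pull nginx:1.25

# List the images stored locally, with their tags and sizes
docker images
```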
Docker images are made up of layers, and each layer corresponds to a version of the image. Whenever a developer makes changes to the image, a new top layer is created, and this top layer replaces the previous top layer as the current version of the image. Previous layers are saved for rollbacks or to be reused in other projects.
Each time you create a container from a Docker image, yet another new layer, called the container layer, is created on top. Changes made to the container, such as the addition or deletion of files, are saved to the container layer and exist only while the container is running. This iterative image-creation process enables increased overall efficiency, since multiple live container instances can run from just a single base image and, when they do, leverage a common stack.
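Both kinds of layer are visible from the CLI; a sketch, reusing the nginx:1.25 image pulled above:

```sh
# Show the read-only image layers and the instruction that created each
docker history nginx:1.25

# Start a container and change a file inside it
docker run -d --name layer-demo nginx:1.25
docker exec layer-demo touch /tmp/scratch.txt

# List files added (A) or changed (C) in this container's writable layer
docker diff layer-demo
```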
Containers
Docker containers are the live, running instances of Docker images. While Docker images are read-only files, containers are live, ephemeral, executable content. Users can interact with them, and administrators can adjust their settings and conditions using Docker commands.
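A typical container lifecycle, sketched with illustrative names:

```sh
# Start a container from an image, detached in the background
docker run -d --name web nginx:1.25

# List the containers that are currently running
docker ps

# Open an interactive shell inside the running container
docker exec -it web /bin/sh

# Stop the container and remove it when finished
docker stop web
docker rm web
```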
Docker Hub
Docker Hub is the public repository of Docker images that calls itself the “world’s largest library and community for container images.” It holds over 100,000 container images sourced from commercial software vendors, open-source projects, and individual developers. It includes images that have been produced by Docker, Inc., certified images belonging to the Docker Trusted Registry, and many thousands of other images.
All Docker users can share their images at will. They can also download predefined base images from Docker Hub to use as a starting point for any containerization project.
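Sharing works through a handful of commands; the account name below is a placeholder:

```sh
# Search Docker Hub for existing images
docker search nginx

# Log in, tag a local image under your account, and publish it
docker login
docker tag myapp:1.0 yourname/myapp:1.0
docker push yourname/myapp:1.0
```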
Docker daemon
The Docker daemon is a service running on your operating system, such as Linux, Microsoft Windows, or Apple macOS. This service creates and manages your Docker images for you, using the commands from the client, and acts as the control center of your Docker implementation.
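You can check on the daemon directly; for example, on a systemd-based Linux host:

```sh
# Check whether the Docker daemon is running (systemd-based Linux)
sudo systemctl status docker

# Ask the daemon to report on itself: containers, images, storage driver
docker info
```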
Docker registry
A Docker registry is a scalable, open-source storage and distribution system for Docker images. The registry enables you to track image versions in repositories, using tagging for identification, much as versions are tagged in git, a version control tool.
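As a final sketch, you can run a private registry locally with the open-source registry image; the port and names are illustrative:

```sh
# Run a private registry on localhost:5000
docker run -d -p 5000:5000 --name registry registry:2

# Tag an image for the local registry and push it there
docker tag myapp:1.0 localhost:5000/myapp:1.0
docker push localhost:5000/myapp:1.0

# Any host that can reach the registry can now pull the image
docker pull localhost:5000/myapp:1.0
```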