
How to Install and Configure Docker on Different Operating Systems; Build, Run, and Manage Docker Containers with Basic Commands; Use Docker Images and Dockerfiles to Create Customized Containers; Use Docker Compose to Define and Run Multi-Container Applications; Use Docker Networking to Connect Containers and Hosts; and Use Docker Registries to Store and Share Images

DOCKER ENGINE


                                       

Dependencies are the software packages or libraries required to install or run another piece of software. For example, to install Jenkins, you need to install Java first. To install Ansible, you need to install Python first.

Sometimes, different software may require different versions of the same dependency. For example, some software may need Python 2, while others may need Python 3. Keeping two versions of the same dependency side by side on one operating system is difficult and error-prone.

Docker solves this problem by creating isolated environments called containers. Each container can have its own dependencies and software without affecting the others.

Docker containers share the kernel of the host operating system rather than booting their own. Therefore, you can only run applications that are compatible with the host kernel: a Linux host runs Linux containers, and a Windows host runs Windows containers natively (Docker Desktop on Windows runs Linux containers by starting a lightweight Linux VM, for example via WSL 2).

Docker has several components, such as:

  • Dockerfile: A text file that contains instructions to build a Docker image.
  • Docker image: A binary file that contains the dependencies and software for a container.
  • Docker container: A running instance of a Docker image.
  • Docker Hub: A repository of Docker images that you can download or upload.

To use Docker on Windows, install Docker Desktop. Kitematic, a graphical user interface for managing Docker containers on Windows, was an older companion tool and has since been deprecated in favor of the Docker Desktop dashboard.

 

To create a container, follow these steps:

  • Launch an Ubuntu HVM LTS 64-bit x86 instance with all TCP open, 16 GB storage size, and 1 GB RAM.
  • Connect to the instance and switch to the root user with sudo su - root.
  • Update the package index files on Ubuntu with apt-get update.
  • Install Docker with apt install docker.io.
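The four steps above can be run as one short session. This is a sketch assuming a fresh Ubuntu instance with internet access; the version check at the end is an extra verification step, not part of the original list:

```shell
sudo su - root            # switch to the root user
apt-get update            # update the package index files on Ubuntu
apt install docker.io -y  # install Docker Engine from Ubuntu's repositories
docker --version          # extra step: verify that Docker is installed
```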

To install an image from Docker Hub, use docker pull <image_name>.

For example, to install the httpd image, use docker pull httpd.

With the help of an image, you can launch multiple containers with different port mappings.

For example, to create a container with the httpd image and map port 7070 of the host to port 80 of the container, use docker run -itd -p "7070:80" httpd.
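The order inside -p trips people up: the value is host_port:container_port, not the other way around. A quick plain-shell check of that ordering (the variable names are just for illustration):

```shell
# Split a -p style mapping into its two sides.
mapping="7070:80"
host_port="${mapping%%:*}"       # before the colon: the port opened on the host
container_port="${mapping##*:}"  # after the colon: the port inside the container
echo "host port ${host_port} -> container port ${container_port}"
# prints: host port 7070 -> container port 80
```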

You can also give a name to your container with the --name option. The name should be unique for each container.

For example, to create a container named webserver with the httpd image and map port 9090 of the host to port 80 of the container, use docker run -itd --name webserver -p "9090:80" httpd.

To check the status of your containers, use docker ps. To see all containers, including the stopped ones, use docker ps -a.

To access your containers from a web browser, enter the public IP address of your instance followed by the port number of the host.

For example, if your instance has the IP address 43.205.216.140 and you have mapped port 7070 of the host to port 80 of the container, enter http://43.205.216.140:7070 in your browser.

To stop a container, use docker stop <container_name> or docker stop <container_ID>.

To start a stopped container, use docker start <container_ID>.

To delete a container, use docker rm <container_name> or docker rm <container_ID>. You need to stop the container first before deleting it.

To force remove a container without stopping it, use docker rm -f <container_ID>.
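Putting the lifecycle commands above together, a typical session looks like this (the container name webserver and the ports are taken from the earlier examples):

```shell
docker run -itd --name webserver -p "9090:80" httpd  # create and start a container
docker ps                                            # list running containers
docker stop webserver                                # stop it
docker ps -a                                         # stopped containers show up only with -a
docker start webserver                               # start it again
docker rm -f webserver                               # force-remove it without stopping first
```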

To enter into a container and execute commands inside it, use docker exec -it <container_ID> /bin/bash.

For example, to install vim inside a container, use docker exec -it <container_ID> /bin/bash and then apt-get update followed by apt install vim (most images ship without a package index, so the update is needed first).

To exit from a container, use exit.

Note: When you stop the instance, all containers also stop.

To delete an image, use docker rmi <image_name>:<version>. You can only delete an image if no container (running or stopped) is based on that image.

To see the details of a container, use docker inspect <container_name> or docker inspect <container_ID>.

To see the output (stdout/stderr) a container has produced, use docker logs <container_name> or docker logs <container_ID>. To see what files changed inside a container, use docker diff <container_ID>.

To see the statistics of containers, use docker stats for all containers and docker stats <container_name> or docker stats <container_ID> for a specific container.

To see the processes running in a container, use docker top <container_name> or docker top <container_ID> for read-only mode or use docker exec -it <container_ID> /bin/bash and then use top for live mode.

To create a customized image from a container (the Docker equivalent of creating an AMI from an EC2 instance), follow these steps:

  • Create a container with the desired image and configuration. For example, to create a container with the httpd image, use docker run -itd httpd.
  • Enter into the container and install the software or libraries you need. For example, to install vim and git inside the container, use docker exec -it <container_ID> /bin/bash and then use apt install vim and apt install git.
  • Exit from the container with exit.
  • Commit the changes to a new image with docker commit <container_ID> <image_name>:<version>. For example, to create an image named myimage with version 1.0.1 from the container ID d90ce38d2ea5, use docker commit d90ce38d2ea5 myimage:1.0.1.
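The four commit steps above as one session (the container ID d90ce38d2ea5 and the image name myimage:1.0.1 come from the example in the text):

```shell
docker run -itd httpd                     # step 1: create the base container
docker exec -it d90ce38d2ea5 /bin/bash    # step 2: enter it, then inside:
#   apt-get update && apt install vim git -y
#   exit                                  # step 3: leave the container
docker commit d90ce38d2ea5 myimage:1.0.1  # step 4: snapshot the changes as a new image
docker images                             # the new image is now listed
```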

To share your image with someone else, you have two options:

  • Upload your image to Docker Hub (remote):
    • Create a Docker account on the web browser.
    • Log in to your Docker account on the Ubuntu machine with docker login and enter your username and password (token).
    • Create a tag for your image with your Docker account name and repository name. For example, to create a tag for myimage:1.0.1 with the account name alwaysdhina and the repository name httpd_vim_git, use docker tag myimage:1.0.1 alwaysdhina/httpd_vim_git.
    • Push your image to Docker Hub with docker push <tag_name>. For example, to push alwaysdhina/httpd_vim_git, use docker push alwaysdhina/httpd_vim_git.
    • You can see your image on your Docker Hub account under the repository section.
    • You can also see the pull command for your image under the tag section of the repository.
  • Save your image to a local file (local):
    • Save your image to a tar file with docker save -o /usr/local/<file_name_backup>.tar <image_name>:<version>. For example, to save myimage:1.0.1 to a file named myimagebackup.tar, use docker save -o /usr/local/myimagebackup.tar myimage:1.0.1.
    • You can share this file with someone else through any means.
    • To load the image from the file, use docker load -i <backup_file_name>.tar. For example, to load the image from myimagebackup.tar, use docker load -i myimagebackup.tar.
    • You can see the image on your local machine with docker images.
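The local-file option above, end to end (the file path and names are the ones from the example):

```shell
docker save -o /usr/local/myimagebackup.tar myimage:1.0.1  # image -> tar archive
# ...transfer the tar file to the other machine (scp, S3, USB, etc.)...
docker load -i myimagebackup.tar                           # tar archive -> image
docker images                                              # confirm the image is back
```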

To see what commands you have used, use history.

Dockerfile and volumes are two important concepts in Docker.

  • Dockerfile: A text file that contains instructions to build a Docker image. You can specify the base image, the dependencies, the software, the commands, and the configuration for your image in a Dockerfile. You can use docker build command to create an image from a Dockerfile.
  • Volumes: A way to persist and share data between containers and the host machine. Volumes are directories or files that are mounted to a container and are not affected by the container’s lifecycle. You can use volumes to store data that you want to keep after the container is deleted, or to share data between different containers.

There are two ways to create a container with a volume attached:

  • Way 1: Use the -v option with docker run and specify the local directory and the container directory for syncing data. For example, to create a container with the httpd image and map port 70 of the host to port 80 of the container, and sync the data between /root/html on the host and /usr/local/apache2/htdocs on the container, use docker run -itd -p "70:80" -v "/root/html:/usr/local/apache2/htdocs" httpd. Whatever changes happen in the local directory will also be reflected in the container directory, and vice versa. If you delete data inside the container, it will also be deleted in the local directory. But if you delete the container, the data will still remain in the local directory. You can use docker volume ls to see the volumes on your machine.

  • Way 2: Use docker volume create command to create a volume with a name and a mount point. For example, to create a volume named my-volume, use docker volume create my-volume. You can use docker volume inspect my-volume to see the details of the volume, such as its mount point. Then, use --mount option with docker run and specify the source as the volume name and the destination as the container directory. For example, to create a container with the httpd image and map port 10 of the host to port 80 of the container, and mount the my-volume volume to /usr/local/apache2/htdocs on the container, use docker run -itd --mount source=my-volume,destination=/usr/local/apache2/htdocs -p "10:80" httpd. The data in the volume will be synced with the data in the container directory. If you delete data inside the container, it will also be deleted in the volume. But if you delete the container, the volume will still remain on your machine. You can use docker volume ls to see the volumes on your machine.
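The two ways above, side by side (the ports, paths, and the volume name my-volume are from the examples):

```shell
# Way 1: bind-mount a host directory into the container
docker run -itd -p "70:80" -v "/root/html:/usr/local/apache2/htdocs" httpd

# Way 2: create a named volume first, then mount it
docker volume create my-volume
docker volume inspect my-volume   # shows the mount point on the host
docker run -itd --mount source=my-volume,destination=/usr/local/apache2/htdocs -p "10:80" httpd

docker volume ls                  # volumes survive container deletion
```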

 

Dockerfile is a text file that contains instructions to build a Docker image. You can use a Dockerfile to specify the base image, the dependencies, the software, the commands, and the configuration for your image. You can use docker build command to create an image from a Dockerfile.

Here is an example of a Dockerfile that creates an image with CentOS as the base image, installs httpd, and labels the image with a name:

# Use CentOS as the base image

FROM centos

 

# Label the image with a name

LABEL Name="Dhinakran"

 

# Replace the mirrorlist with baseurl in yum repos

RUN sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*

RUN sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*

 

# Install httpd

RUN yum install httpd -y
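The two RUN sed lines are needed because the centos base image points at mirror.centos.org, which stopped serving CentOS 8 after its end of life; switching baseurl to vault.centos.org keeps yum working. Their effect can be checked on a sample repo line with plain sed (the sample line illustrates the format of /etc/yum.repos.d/CentOS-*):

```shell
# A typical commented-out baseurl line from a CentOS repo file:
line='#baseurl=http://mirror.centos.org/$contentdir/$releasever/BaseOS/$basearch/os/'
# The second sed from the Dockerfile uncomments it and swaps in vault.centos.org:
echo "$line" | sed 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g'
# prints: baseurl=http://vault.centos.org/$contentdir/$releasever/BaseOS/$basearch/os/
```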

To create an image from this Dockerfile, save it in a directory (for example, /root/test/) and run the following command. Note that image repository names must be lowercase, so dimg rather than Dimg:

# Create an image named dimg with version v2 from the Dockerfile in /root/test/

docker build -t dimg:v2 /root/test/

You can see the image on your machine with docker images command. For example, you may see something like this:

REPOSITORY   TAG       IMAGE ID       CREATED          SIZE

dimg         v2        3dff28a2a576   59 seconds ago   280MB

centos       latest    5d0da3dc9764   23 months ago    231MB

 

The -itd option in the docker run command stands for interactive, tty, and detached.

  • interactive means that the container will be able to accept input from the user. This is useful if you need to run a command in the container that requires user input, such as a script that asks for the user's name.
  • tty means that the container will have a pseudo-terminal. This allows you to interact with the container's shell, such as by running ls or cd commands.
  • detached means that the container will run in the background. This means that you can run the container and then immediately exit the command prompt, without having to wait for the container to finish running.

So, the docker run -itd --name myser -p "from:to" httpd command will create a new container from the httpd image, and it will be:

  • Interactive, so you can enter input into the container.
  • TTY enabled, so you can interact with the container's shell.
  • Detached, so the container will run in the background.
  • Named myser.
  • Published so that traffic to host port from is forwarded to container port to.

The docker exec -it CONTAINER-ID /bin/bash command will connect to an existing container and open a bash shell inside it. The CONTAINER-ID is the ID of the container that you want to connect to.

The docker container prune command will delete all stopped containers.

 

To access a container only through SSH, follow these steps:

  • Create a Dockerfile that installs SSH and configures the root login and password. For example, you can use the following Dockerfile:

# Use Ubuntu 16.04 as the base image

FROM ubuntu:16.04

 

# Label the image with a name

LABEL Name="Dhinakran"

 

# Update the package index files on Ubuntu

RUN apt-get update

 

# Install wget and openssh-server

RUN apt-get install wget openssh-server -y

 

# Replace the prohibit-password option with yes in sshd_config

RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

 

# Set the root password to root123

RUN echo 'root:root123' | chpasswd

 

# Create the /var/run/sshd directory

RUN mkdir /var/run/sshd

 

# Run the ssh daemon in the foreground

CMD ["/usr/sbin/sshd", "-D"]

 

# Expose port 22 for SSH

EXPOSE 22

  • Build an image from this Dockerfile with docker build -t <image_name>:<version> <directory>. For example, to create an image named sshaccess with version latest from the Dockerfile in /root/test/, use docker build -t sshaccess:latest /root/test/.
  • Run a container from this image with docker run -itd -p "<host_port>:22" <image_name>. For example, to create a container with sshaccess image and map port 200 of the host to port 22 of the container, use docker run -itd -p "200:22" sshaccess.
  • Connect to the container from another machine using SSH with ssh -p <host_port> root@<public_IP>. For example, to connect to the container from a Windows machine using PuTTY, enter the public IP address of the host machine and port 200 in PuTTY. To connect to the container from a Linux machine using command prompt, use ssh -p 200 root@<public_IP>. Enter the root password (root123) when prompted.
  • You can now access the container as the root user and execute commands inside it.
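End to end, the SSH-access recipe above looks like this (the image name sshaccess, port 200, and the password root123 are from the example; <public_IP> is the host machine's address):

```shell
docker build -t sshaccess:latest /root/test/  # build from the Dockerfile above
docker run -itd -p "200:22" sshaccess         # host port 200 -> container port 22
# From another machine:
ssh -p 200 root@<public_IP>                   # log in with the password root123
```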

To restrict the permission to a container like read-only access, you can use the --volumes-from option with docker run and specify the source container and the read-only mode. For example, follow these steps:

  • Create a source container with a volume. For example, docker run -itd --name WebServer -p "100:80" -v "/usr/local/apache2/logs" httpd creates a container named WebServer from the httpd image. Note that -v with only a container path creates an anonymous volume for /usr/local/apache2/logs (Docker chooses the backing directory on the host); it does not bind a specific host directory.
  • Create another container with read-only access to the volume of the source container. For example, to create a container with the sshaccess image and map port 200 of the host to port 22 of the container, and mount the volume from WebServer in read-only mode, use docker run -itd -p "200:22" --volumes-from WebServer:ro sshaccess.
  • Access the second container from another machine using SSH with ssh -p <host_port> root@<public_IP>. For example, to connect to the container from a Windows machine using PuTTY, enter the public IP address of the host machine and port 200 in PuTTY. To connect to the container from a Linux machine using command prompt, use ssh -p 200 root@<public_IP>. Enter the root password (root123) when prompted.
  • You can now access the second container as the root user, but you cannot modify the data in the volume that is mounted from the source container.

Note: In real life, you may want to use the same port number for accessing a container as the default port number for that service. For example, for httpd service, you may want to use port 80 instead of port 100. To do that, you can use -p "80:80" instead of -p "100:80" when creating the first container.

----LINK-----

By default, every container is isolated and does not interact with any other container.

------HOW TO LINK OR CONNECT CONTAINERS-------[IMPORTANT TOPIC]

First, launch a container with SQL image and name it DB:

docker run -itd -e MYSQL_ROOT_PASSWORD=DHINA123 --name DB mysql:5.5

The -e option sets the environment variable for the MySQL root password. Without this, the container will not launch.

Example:

root@ip-172-31-33-206:~# docker run -itd --name DB mysql:5.5
(an error occurs: the container exits because MYSQL_ROOT_PASSWORD is not set)

root@ip-172-31-33-206:~# docker run -itd -e MYSQL_ROOT_PASSWORD=DHINA123 --name DB mysql:5.5
7e02bcaf9a4f65987a9c6f2a7bee87aaa008c74606fa8a62ebd24201bf339452

root@ip-172-31-33-206:~# docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED         STATUS         PORTS      NAMES
7e02bcaf9a4f   mysql:5.5   "docker-entrypoint.s…"   6 seconds ago   Up 5 seconds   3306/tcp   DB

Next, launch a container with WordPress image and name it MyWebServer. To link this container with the DB container, use the --link option:

docker run -itd --name MyWebServer --link DB:mysql -p "80:80" wordpress

The --link option creates a connection between the two containers and lets WordPress reach the MySQL database (inside MyWebServer, the DB container is reachable under the alias mysql). The -p option maps port 80 of the host machine to port 80 of the container.
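Note that --link still works but is considered a legacy feature. The modern equivalent is to put both containers on a user-defined bridge network, where they can reach each other by container name. A sketch (the network name mynet is an illustration):

```shell
docker network create mynet
docker run -itd -e MYSQL_ROOT_PASSWORD=DHINA123 --name DB --network mynet mysql:5.5
docker run -itd --name MyWebServer --network mynet -p "80:80" wordpress
# Inside MyWebServer, the database is now reachable at the hostname "DB".
```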

----SUMMARY-----

docker run -itd

  • -p — port forwarding (host_port:container_port)
  • --name — name the container
  • -e — set an environment variable (e.g. MYSQL_ROOT_PASSWORD=DHINA123)
  • -v — create a volume or bind-mount a directory
  • --mount — attach a named volume to the container
  • --volumes-from — create a container that reuses the volumes of an already created container
  • --link — link or connect containers (e.g. the WordPress container to the SQL container)
  • --network — choose which network the container will use, such as bridge or host
  • imagename:version — the image to run

Note: -t tags an image, but it belongs to docker build (for Dockerfiles), not docker run.


Writing all these options for docker run command can be difficult or confusing.

           |------TO AVOID THOSE KIND OF CONFUSION, WE CAN USE YAML FILE----|

-----------HOW TO LINK OR CONNECT CONTAINERS (WORDPRESS CONTAINER TO SQL CONTAINER) THROUGH YAML FILE--------------

----DOCKER COMPOSE-----

NOTE: WRITING DOCKERFILES & DOCKER COMPOSE FILES IS THE ROLE AND RESPONSIBILITY OF THE DEVOPS ENGINEER IF YOU ARE WORKING WITH DOCKER

YAML FORMAT IS USED FOR COMPOSING

WE CAN WRITE ALL THE OPTIONS OF THE DOCKER RUN COMMAND IN A YAML FILE AND REUSE IT WHENEVER WE NEED.

INSTALL DOCKER COMPOSE IN LINUX MACHINE.

 

YAML FILE

The YAML file format is used for Docker Compose, which lets us write all the options of the docker run command in a single file and bring the whole stack up with one command.

docker-compose.yml (DOCKER COMPOSE FILE FORMAT)

version: "3"
services:          # global value
  database:
    image: mysql:5.7
    volumes:
      - ./data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    image: wordpress
    depends_on:
      - database
    ports:
      - "8080:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: database:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_PASSWORD: wordpress

Example:

root@ip-172-31-33-206:~# vi docker-compose.yaml

root@ip-172-31-33-206:~# cat docker-compose.yml
cat: docker-compose.yml: No such file or directory

root@ip-172-31-33-206:~# cat docker-compose.yaml
(the file contents shown above are printed)

root@ip-172-31-33-206:~# snap install docker    (installs docker along with the docker-compose command)
docker 20.10.24 from Canonical installed

root@ip-172-31-33-206:~# docker-compose up -d    (runs the YAML file: it creates the containers and the data we defined there)

docker-compose ps (TO SHOW THE CONTAINERS MANAGED BY DOCKER-COMPOSE)

SO, FINALLY WE LAUNCHED TWO LINKED CONTAINERS USING DOCKER-COMPOSE.
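Besides up -d and ps, a few more docker-compose subcommands round out the day-to-day workflow (run them in the directory that contains docker-compose.yaml):

```shell
docker-compose up -d   # create and start everything defined in the YAML file
docker-compose ps      # list the containers managed by this compose file
docker-compose logs    # combined logs of all the services
docker-compose down    # stop and remove the containers (named volumes are kept)
```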

 

-----DOCKER NETWORK------

root@ip-172-31-33-206:~# docker network ls
NETWORK ID     NAME           DRIVER    SCOPE
99dbfaab2bec   bridge         bridge    local
b51513edaba3   host           host      local
562ff9a09495   none           null      local
e467761683f4   root_default   bridge    local

bridge ( THE DEFAULT NETWORK: IF WE CREATE CONTAINERS THROUGH THE DOCKER RUN COMMAND, DOCKER CHOOSES THE BRIDGE NETWORK, WHICH KEEPS EACH CONTAINER ISOLATED WITH ITS OWN IP BEHIND NAT )

host ( NO ISOLATION BETWEEN CONTAINER AND HOST: THE CONTAINER SHARES THE HOST'S NETWORK STACK DIRECTLY )

docker network create --driver bridge Dhina ( TO CREATE YOUR OWN NETWORK; the host driver cannot be used here, because only one host network is allowed )

root@ip-172-31-33-206:~# docker network rm d3    (TO DELETE A NETWORK; d3 matches the network ID prefix)
d3
root@ip-172-31-33-206:~#
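A short session tying the network commands together. The network name Dhina comes from the example above; the busybox image and container name c1 are assumptions for a quick demonstration:

```shell
docker network ls                            # list the existing networks
docker network create --driver bridge Dhina  # create your own bridge network
docker network inspect Dhina                 # subnet, gateway, attached containers
docker run -itd --name c1 --network Dhina busybox
docker rm -f c1                              # a network cannot be removed while in use
docker network rm Dhina                      # now the network can be deleted
```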



DOCKER NETWORKING TUTORIAL: https://docs.docker.com/network/network-tutorial-standalone/


-------TO LEARN DOCKER COMPOSE--------- 

DOCKER REFERENCE: https://docs.docker.com/reference/


 

 

