Docker Introduction

                        Nowadays, wherever you look in software development, you will hear one word: Container !!!   Container !!!   Container !!!...

  • In general, if you ask me what a container is, I will say it is just another form of server. So keep this picture in your mind: a container is another form of server.
  • Don't worry, this article will help you understand everything about containers. Before going into containers, we will first understand the computers that we usually see in our homes and offices.
Understanding the Computer :-
  • Our end product here is a computer, so we will understand the steps involved in building one.
  • In general, to build a computer we need some hardware components, and to manage this hardware we need software. Let us list out the major hardware components:
      • Motherboard (with RAM and processor)
      • Hard disk (to store the data)
      • NIC (network card to connect our computer to the internet)
      • OS (an operating system to manage all these hardware components)
    • These are the four major components that we use to build a computer.
    -----------------------------------------------------------------------------------------------------------------------------
            From the above explanation, you now know the components that help in building a computer.
    The same is applicable for a server as well; the main difference in building a server is the OS, since the server operating system is slightly different from the desktop OS. Other than this, the rest of the hardware is the same.

    So, in product deployment we have 3 types of deployments:
      • Traditional
      • Virtualization
      • Containers
    • Traditional :- In traditional deployment we use a monolithic server architecture, that is, for every web application we deploy, we use one independent server. In the image below you can see that we have deployed 3 different web applications and assigned separate hardware resources to each one. If you look at the image, you can notice the wastage of hardware resources.
      A heavy amount of money is wasted on these hardware resources, and we cannot use the leftover hardware for another application deployment.
      So we face many problems in maintaining these servers, and a huge amount of money is wasted. To avoid this, we came across another deployment technology: Virtualization.
      Understand the below image before going into Virtualization.


    • Virtualization :- In the virtualization architecture we have a single hardware resource; we make partitions on that hardware using virtualization software and deploy the applications on them.
      The below image will help you understand virtualization.


      • But in this approach we end up installing 2 OSs, the host OS and the guest OS. The problem is that for every OS we install, its binaries and libraries are also installed again, so hard disk space is wasted.
      • Also, in virtualization we need to allocate a fixed amount of resources to each virtual server, but we don't know how much of it will actually be used. Even if we give an equal share of resources to all the virtual servers, they use them based on the load: under heavy load they will use everything, and under light load the resources sit idle.
      • Since we cannot run another application on those unused resources, we moved to another technology: the Container.
    • Container :- This is the latest deployment method we follow right now.

      • In the above image you can clearly see that we don't have any resource wastage.
      • Just think of a container as a virtual machine: the way we used to work with virtual machines in virtualization is the same way we work with containers.
      • Coming to containers, the software we use to work with them is Docker.

    Docker:-

    • So now we will discuss Docker. Docker is used to manage containers, and through containers we manage the hardware resources that host our applications.
    • Before understanding Docker, we need to understand the IMAGE. An image is nothing but a package consisting of files that help install an operating system along with the necessary drivers and applications.
    • In general, we use a windows.iso image file to install the Windows OS on our servers or personal laptops.
    • Say we are working on a product. The developers created one feature and tested it internally, and it worked for them, so they packed it as a zip file and shared that package with the testing team. While working on that code, the testing team found issues: application version mismatches, missing libraries, or new binaries that needed to be added. The developers shared just the code, which worked on their infrastructure, but they never shared that infrastructure configuration with the testing team, so the testing team faced issues with the code.
    • To avoid this issue, we convert the entire infrastructure configuration into an image file and share that image with the testing team.
    • Now you understand the difference between an image and a container: without an image we cannot run a container, because without an image a container is just the hardware part; we need an OS to run inside that container, and that is what the image provides.
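    To make the image-vs-container relation concrete, here is a minimal sketch, assuming Docker is already installed and the daemon is running (the image and container names here are just examples):

```shell
# Pull a read-only image from Docker Hub (the official ubuntu image).
docker pull ubuntu

# Start a container from that image; the container is the running instance.
docker container run -dit --name demo_container ubuntu

# The same image can back many independent containers.
docker container run -dit --name demo_container_2 ubuntu
```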

    Docker Architecture :-

    The architecture of Docker uses a client-server model and consists of the Docker’s Client, Docker Host, Network and Storage components, and the Docker Registry/Hub. Let’s look at each of these in some detail.
    Docker Daemon :-The Docker daemon (dockerd) is a background service that runs on the host machine. It is responsible for building, running, and managing Docker containers. The Docker daemon continuously listens for Docker API requests and performs various container-related tasks, such as image management, network management, and storage management. It manages the container lifecycle, handles resource allocation, and ensures containers are isolated from each other and the host system.

    Docker Engine REST API :- The Docker Engine REST API is an interface that allows external programs and tools to interact with the Docker daemon. It exposes a set of HTTP endpoints that can be used to manage and control Docker containers, images, networks, volumes, and other Docker objects. The API provides a standardized way to send requests and receive responses, enabling programmatic access to Docker functionality. By using the REST API, developers can integrate Docker into their own applications or build automation scripts and tools around Docker.
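    As a quick sketch, the REST API can be called directly, for example with curl over the daemon's default Unix socket (this assumes a local Linux install with the default socket path /var/run/docker.sock):

```shell
# Ask the daemon for its version over the Docker Engine REST API.
curl --unix-socket /var/run/docker.sock http://localhost/version

# List running containers via the API (roughly what `docker ps` shows).
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```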

    Docker CLI :- The Docker Command-Line Interface (CLI) is a command-line tool that provides a user-friendly interface for interacting with Docker. It acts as a client to the Docker daemon, sending commands and receiving responses. The Docker CLI allows users to perform various operations, such as building Docker images, running containers, managing networks and volumes, inspecting container and image information, and interacting with Docker registries. It provides a simple and intuitive way to interact with Docker using commands and options, making it easier to work with Docker from the command line.

    Docker Container :-  A Docker container is a lightweight and isolated runtime environment that encapsulates an application and its dependencies. It is created from a Docker image and can be run, stopped, started, and managed independently. Containers provide consistency and portability, allowing applications to run reliably across different environments without worrying about dependencies or configurations.

    Docker Image :- A Docker image is a read-only template that contains everything needed to run a container. It includes the application code, runtime, system tools, libraries, and dependencies. Images are built using a Dockerfile, which specifies the instructions for creating the image. Docker images can be shared, versioned, and distributed across different systems using Docker registries.
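    As a sketch, a minimal Dockerfile for such an image might look like the one below; the base image, file names, and command are illustrative assumptions, not from this article:

```dockerfile
# Start from an official base image.
FROM ubuntu:22.04

# Install the application's dependencies inside the image.
RUN apt-get update && apt-get install -y python3

# Copy the application code into the image.
COPY app.py /opt/app/app.py

# Default command to run when a container starts from this image.
CMD ["python3", "/opt/app/app.py"]
```

    You can then build the image with `docker build -t myapp:v1 .` and run a container from it with `docker run myapp:v1`.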

    Docker Networks :- Docker provides networking capabilities to enable communication between containers and external systems. Docker networks allow containers to connect to each other, share information, and communicate using IP addresses or container names. Networks can be created and configured using Docker commands or Docker Compose. Docker networks provide isolation, security, and flexibility for containerized applications.
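    As a sketch of the commands involved (the network and container names are illustrative):

```shell
# Create a user-defined bridge network.
docker network create my_network

# Attach two containers to that network.
docker container run -dit --name web_server --network my_network nginx
docker container run -dit --name app_server --network my_network ubuntu

# Containers on the same user-defined network can reach each other by name,
# e.g. from inside app_server: ping web_server
```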

    Docker Data Volumes :- Docker volumes are used to persist data generated by containers or share data between containers. Volumes provide a way to store and retrieve data independent of the container's lifecycle. Docker volumes can be mounted inside containers, enabling data to be saved outside the container's file system. Volumes can be managed by Docker or external storage systems, and they allow data to be preserved even if the container is stopped or removed.

    In summary, Docker Daemon manages Docker containers and related tasks, Docker Engine REST API enables programmatic interaction with the Docker daemon, Docker CLI provides a user-friendly interface for command-line interaction with Docker, Docker containers encapsulate applications and their dependencies, Docker images are the templates for creating containers, Docker networks enable container communication, and Docker volumes allow data persistence and sharing within containers.
    • If the above explanation didn't click, here is a quick summary. As I said, a container is just another form of server, so it needs a network to communicate with other servers (that is the Docker network), it needs a hard disk to store its data (that is the Docker volume), and the hardware allocation is managed by namespaces and cgroups.
    Docker Install :- 

    https://docs.docker.com/engine/install/

    Go to the above website and install Docker based on your operating system,
    then execute the below commands. These are just the basic commands; after executing these, go for the advanced commands.

    => docker images / docker image ls {list of images}
    => docker container run 
     => docker container run <image name>
     => docker container run --name <new container name> <image name>
     => docker container run -i --name <new container name> <image name>
     => docker container run -it --name <new container name> <image name>
     => docker container run -dit --name <new container name> <image name>
     => docker container run -d -p <host port>:<container port> <image name>
     => docker container run -dit -p <port number 8000:80> --name <new container name> <image name>
     => docker container run -dit -p <port number 8000:80> --name <new container name> -v <host folder name>:<container location path> <image name>
                               {docker container run -dit -p 8080:80 --name sampat_cont -v sampath_volume:/usr ubuntu}

    =>docker start <container name / id>
    =>docker stop <container name / id>
    =>docker rm <container name / id>
    =>docker container prune
    =>docker image prune / docker rmi <your-image-id>
    =>docker exec -it <container id / name> <command to run inside a container (bash)> => {docker exec -it sampath bash}
    =>docker container inspect "<containerid>" {docker inspect webserver_nginx_master | grep "IPAddress"}
    =>docker logs <container id>
    =>docker history <image id>
    =>docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]  {docker tag webserver02_ngnix:latest webserver:V1.0}

    To convert a Docker container into a Docker image:-
    ---------------------------------------------------
    =>docker stop <container name / id>

    =>docker commit
    => docker commit <container name> <new image name>                       {docker commit webserver01 sampath_webserver01}
    => docker commit <container name> <new image name>:<tag name(version)>   {docker commit webserver01 sampath_webserver01:V2.0}

    =>docker images / docker image ls {list of images}

    docker push
    ------------
    =>docker tag <local_image_name>:<local_image_tag> <docker_hub_username>/<repository_name>:<image_tag>
    =>docker push <docker_hub_username>/<repository_name>:<image_tag>


    docker volumes
    --------------
    => docker volume ls
    => docker volume create <volume name>
    => docker volume inspect <volume name>
    => docker container run -dit --name <new container name> -p <port number 8000:80> --mount source=<volume name>,destination=<container location path> <image name>
    =>docker container run -dit -p <port number 8000:80> --name <new container name> -v <host folder name >:<container location path> <image name>
    => docker container run -dit -p 8080:80 --name sampat_cont --mount source=sampath_volume,destination=/data ubuntu
    => docker container run -dit -p 8080:80 --name sampat_cont -v sampath_volume:/usr ubuntu
    ----------------------------------------------------------------------------------------
    With the same volume we can create many containers; below is the command to share a volume
    ----------------------------------------------------------------------------------------
    => docker container run -dit --volumes-from server1 --name server2 nginx /bin/bash
    => docker container run -dit -p 8080:80 --name nginx_container --mount type=bind,source=/var/lib/docker/azureblob,target=/azureblob nginx




    This article will help you understand the basics of Docker. After this, try out different commands; you can find them on Google and YouTube.
