Docker
Intro
- Docker is an open-source containerization tool that lets you run your applications in isolated environments called containers.
- A container includes:
- Code
- Configs
- Runtime
- Environment
- Libraries
- Docker is installed on top of the OS; images for applications are created with the help of a Dockerfile, and applications are deployed in containers using these images. An image packages all the required software and tools.
- Images can be uploaded to and downloaded from Docker Hub.
- How does Docker work?
- Example: secretsanta
- secretsanta is a Java-based application; to run it, the following environment is required:
- Operating System:
- Dependencies:
- convert code into artifact:
- Application run on port:
- Package: to create the package from the code: Maven (mvn package builds the artifact (.jar)).
- Deploy package: run java -jar secretsanta-0.0.1-SNAPSHOT.jar, or to run on a different port, java -Dserver.port=8999 -jar secretsanta-0.0.1-SNAPSHOT.jar.
- Access application:
- The above application will only run when this environment is met; it cannot run in any other environment.
- If the application needs to run in a different environment, such as a Windows machine, it will fail; we would need to create a computer or VM with the Windows OS and install all the dependencies.
- Docker provides the solution for running the application in different environments, using lightweight images of the OS and dependencies.
- Docker creates the image from a Dockerfile, in which you define all these requirements/dependencies: the OS, the JDK, the port, and the command java -jar app.jar (see the sketch below).
- Upload this image to Docker Hub, where the customer can download and run it. Running the image creates a container with all the required dependencies that were defined in the Dockerfile when the image was built.
- check project section for all the steps.
- Containers are fast to create and use fewer hardware resources, since images are lightweight.
- An application can be migrated from one OS to another simply by moving its container; for example, an Oracle database container on a Windows host can be moved to a Linux host by copying the container image.
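- A minimal sketch of such a Dockerfile (the base image and port are illustrative, not the project's exact file):
- FROM eclipse-temurin:17-jre
COPY target/secretsanta-0.0.1-SNAPSHOT.jar app.jar
EXPOSE 8080
CMD ["java", "-jar", "app.jar"]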
Docker Installation: Docker comes in two flavours, Community Edition (free) and Enterprise Edition.
Docker on Ubuntu
- Method 1: Create an Ubuntu VM
- Check ubuntu version:
- Check if docker is installed:
- $docker --version
- $docker info
- Docker commands run with root privileges: Get root access
- $sudo su - (root user access)
- $sudo apt install docker.io (Install docker from local repository)
- Error: E: Package 'docker.io' has no installation candidate: the Ubuntu package index was not updated.
- $apt-get update
- $sudo apt install docker.io (Install docker from local repository)
- Run hello-world to check docker installation:
- $docker pull hello-world (by default only the root user can access docker; assign permission to your user)

- When you install Docker, a group named docker is created; any user added to this group can run docker commands. By default, only root has access.
- When you create a user, a group with the same name is also created; this group is the user's primary group. You can add additional (secondary) groups to the user.
- Add user (abdul) to the docker group:
- $sudo usermod -aG docker abdul (-a appends, -G specifies the supplementary group; docker is the group, abdul is the user). Log off and log on, or run $newgrp docker to apply the group change in the current session.
- Start docker Service:
- $systemctl start docker or
- $sudo service docker start
- $systemctl enable docker
- $docker info
- Check docker status:
- $sudo service docker status
- Method 2: Docker Installation through script:
- Create Ubuntu instance (t2 micro)
- $sudo su - (root user access)
- $curl -fsSL https://get.docker.com -o get-docker.sh
- $ls
- $sh get-docker.sh
- $docker --version
- Check docker status:
- $sudo service docker status
- Method 3: Install Docker from Official Repository or using Default Repository
- Create a ubuntu VM on cloud(AWS)
- Install Docker from Official Repository
- Step1: Updating the software Repository:
- Step 2: Downloading Dependencies:
Step 3: Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
- Step 4: Installing the Docker packages:
- #sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
- Step 5: Enable the Docker service:
- $sudo systemctl start docker
- $sudo systemctl enable docker
- Step 6: Verify that the installation is successful by running the hello-world image:
- $docker --version
- $docker info
- #sudo docker run hello-world
- Add user permission to execute docker commands (for a normal user):
- $sudo usermod -aG docker username
- $newgrp docker (changes take effect immediately, or restart the machine)
- Removing Docker:
- $sudo apt-get remove docker docker-engine docker.io
- Check docker status:
- $sudo service docker status
- Installing Docker from Default Repositories:
- Step 1: Updating the local repository:
- $sudo apt update
- Step2: Installing Docker
- $sudo apt install docker.io
- Step3: Check Docker Installation
- $docker --version
- $docker info
- Check docker status:
- $sudo service docker status
Docker Desktop
- Windows:
- Download Docker Desktop Installer .exe (https://docs.docker.com/docker-for-windows/install/)
- Run Docker Desktop Installer.exe
- Default path C:\Program Files\Docker\Docker
- If your administrator account is different to your user account, you must add the user to the docker-users group:
- Run Computer Management as an administrator.
- Navigate to Local Users and Groups > Groups > docker-users.
- Right-click to add the user to the group.
- Sign out and sign back in for the changes to take effect.
- Install from Command Line:
- "Docker Desktop Installer.exe" install
- If you’re using PowerShell you should run it as:
- Start-Process 'Docker Desktop Installer.exe' -Wait install
- If using the Windows Command Prompt:
- start /w "" "Docker Desktop Installer.exe" install
- ubuntu:
- Install Docker Desktop on Ubuntu.
- Download DEB package.
- Install gnome terminal:
- $sudo apt install gnome-terminal
- Uninstall the tech preview or beta version of Docker Desktop for linux:
- $sudo apt remove docker-desktop
- Install the package using apt repository:
- Change to the Downloads directory, where the DEB package was saved.
- $cd Downloads
- root@master:/home/abdul/Downloads# sudo apt-get install ./docker-desktop-amd64.deb
- By default, Docker Desktop is installed at /opt/docker-desktop
- Launch Docker Desktop.
- Navigate to the Docker Desktop application in your GNOME/KDE desktop (click "Show all applications", find Docker Desktop, and click it).
- Select Docker Desktop to start Docker. The Docker Subscription Service Agreement displays.
- Select Accept to continue. Docker Desktop starts after you accept the terms.
- Note that Docker Desktop won't run if you do not agree to the terms. You can choose to accept the terms at a later date by opening Docker Desktop.
- Alternatively, open a terminal and run:
- $systemctl --user start docker-desktop
CentOS
- $docker --version
- $sudo yum check-update
- $yum install docker* -y
- $systemctl start docker
- $systemctl enable docker
- $docker info
- Check docker status:
- $sudo service docker status
Remove Docker Desktop
- Ubuntu:
- $sudo apt remove docker-desktop
- This removes the Docker Desktop package itself but doesn’t delete all of its files or settings.
- Manually remove leftover files:
- $rm -r $HOME/.docker/desktop
- $sudo rm /usr/local/bin/com.docker.cli
- $sudo apt purge docker-desktop
- This removes configuration and data files at $HOME/.docker/desktop, the symlink at /usr/local/bin/com.docker.cli, and purges the remaining systemd service files.
- Windows:
- From GUI:
- From the Windows Start menu, select Settings > Apps > Apps & features.
- Select Docker Desktop from the Apps & features list and then select Uninstall.
- Select Uninstall to confirm your selection.
- From the CLI:
- Locate the installer C:\Program Files\Docker\Docker\Docker Desktop Installer.exe
- Uninstall Docker Desktop:
- in powershell: $Start-Process 'Docker Desktop Installer.exe' -Wait uninstall
- In the Command Prompt: start /w "" "Docker Desktop Installer.exe" uninstall
- After uninstalling Docker Desktop, some residual files may remain which you can remove manually. These are:
- C:\ProgramData\Docker
C:\ProgramData\DockerDesktop
C:\Program Files\Docker
C:\Users\<your user name>\AppData\Local\Docker
C:\Users\<your user name>\AppData\Roaming\Docker
C:\Users\<your user name>\AppData\Roaming\Docker Desktop
C:\Users\<your user name>\.docker
- Debian
- $sudo apt remove docker-desktop
- Manually remove leftover files:
- $rm -r $HOME/.docker/desktop
$sudo rm /usr/local/bin/com.docker.cli
$sudo apt purge docker-desktop
- Mac:
- From the GUI:
- Open Docker Desktop.
- In the top-right corner of the Docker Desktop Dashboard, select the Troubleshoot icon.
- Select Uninstall.
- When prompted, confirm by selecting Uninstall again.
- From the CLI:
- /Applications/Docker.app/Contents/MacOS/uninstall
- After uninstalling Docker Desktop, some residual files may remain which you can remove:
- rm -rf ~/Library/Group\ Containers/group.com.docker
rm -rf ~/.docker
- With Docker Desktop version 4.36 and earlier, the following files may also be left on the file system. You can remove these with administrative privileges:
- /Library/PrivilegedHelperTools/com.docker.vmnetd
/Library/PrivilegedHelperTools/com.docker.socket
User Permission
- In Linux System:
- When you install Docker, a group named docker is created; any user added to this group can run docker commands. By default, only root has access.
- $cat /etc/group : it will list all groups
- When you create a user, a group with the same name is also created; this group is the user's primary group. You can add additional (secondary) groups to the user.
- $sudo usermod -aG docker abdul (-a appends, -G specifies the supplementary group; docker is the group, abdul is the user). Log off and log on, or run $newgrp docker to apply the group change in the current session.
- Create a user, set a password, and set PasswordAuthentication to yes (in the SSH config).
- Create a group #sudo groupadd docker
- add user to the group #sudo usermod -aG docker username
- or
- $sudo chmod 666 /var/run/docker.sock (all users of the computer will be able to run docker commands; convenient but insecure)
- $sudo systemctl restart docker
- In Windows
- If your administrator account is different to your user account, you must add the user to the docker-users group:
- Run Computer Management as an administrator.
- Navigate to Local Users and Groups > Groups > docker-users.
- Right-click to add the user to the group.
- Sign out and sign back in for the changes to take effect.
Troubleshooting
- Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
- root@master:~# systemctl start docker
- Failed to start docker.service: Unit docker.service not found.
- $sudo apt-get install docker-ce
- $systemctl start docker
Node JS on Ubuntu
- Method 1: Using snap package manager (it will be easy to upgrade/downgrade versions)
- Search the web for "snap nodejs" and open the Snapcraft page.
- Click the Install button in the top-right corner.
- copy the command:
- $sudo snap install node --classic (select version to install and copy the command)
- $sudo snap install node --channel=23/stable --classic
- $node --version
- $npm --version
- To upgrade or downgrade, click the dropdown button, select a version, copy the command, and run it on Ubuntu.
- Instead of install, you type refresh.
- $sudo snap refresh node --channel=21/stable --classic (downgrading to 21)
- $sudo snap refresh node --classic (latest version will install)
- Method 2: use nvm (Node Version Manager) from the Node.js official page:
- Click download, then select the required version and OS, choosing nvm as the install method.
- Click "copy to clipboard", paste the commands into Ubuntu, and install.
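- A sketch of the nvm route (version numbers are illustrative; check the official page for the current command):
- $curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
- $nvm install 22
- $node --version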
Docker Container, Images, Host, Client, Daemon, volume
Container
- A container is a running instance of a Docker image; you can create any number of containers from one image.
- After customizing and configuring a container as required, you can create a custom image from it.
- $docker container --help
- Subcommands:
- attach Attach local standard input, output, and error streams to a running container
- commit Create a new image from a container's changes
- cp Copy files/folders between a container and the local filesystem
- create Create a new container
- diff Inspect changes to files or directories on a container's filesystem
- exec Execute a command in a running container
- export Export a container's filesystem as a tar archive
- inspect Display detailed information on one or more containers
- kill Kill one or more running containers
- logs Fetch the logs of a container
- ls List running containers
- ls -a List stopped and exited containers as well
- pause Pause all processes within one or more containers
- port List port mappings or a specific mapping for the container
- prune Remove all stopped containers
- rename Rename a container
- restart Restart one or more containers
- rm Remove one or more containers
- run Create and run a new container from an image
- start Start one or more stopped containers
- stats Display a live stream of container(s) resource usage statistics
- stop Stop one or more running containers
- top Display the running processes of a container
- unpause Unpause all processes within one or more containers
- update Update configuration of one or more containers
- wait Block until one or more containers stop, then print their exit codes
- Commands
- Create Container:
- $docker run -d --name xyz -p 9000:9000 image_name:tag | create a container from an image in detached mode with a specific version (tag); in -p host_port:container_port, the first port is the host port and the second is the container port
- Options used with the run command:
- --name | give a name to the container
- -it | open an interactive terminal in the container
- -d | run the container in detached/background mode
- -e | pass an environment variable
- tag | version of the image
- -v | attach an external directory or device as a volume
- --volumes-from | share volumes between containers
- --rm | delete the container on exit
- -p | port mapping, host port with container port, e.g. -p 8080:80 (8080 is the host port)
- -P | capital P, automatic port mapping; container ports are mapped to high host ports (typically 32768 and above)
- --link | linking of containers (legacy)
- ctrl+p, ctrl+q | come out of a running container without exiting it
- List Containers:
- $docker container ls or $docker ps | list running containers
- $docker container ls -a or $docker ps -a | list all containers, running and stopped
- $docker container inspect cont_name / cont_id | get detailed information about a container
- Start / Stop / Restart Container:
- $docker container start cont_name / cont_id | start a stopped container
- $docker container stop cont_name / cont_id | stop a running container
- $docker container stop $(docker ps -q) | stop all running containers
- $docker container restart cont_name / cont_id | restart a container
- $docker container restart -t 10 cont_name / cont_id | restart, waiting up to 10 seconds before killing the container
- Delete Container:
- $docker container rm cont_name / cont_id | delete a stopped container
- $docker container rm -f cont_name / cont_id | delete a running container forcefully
- $docker container rm $(docker ps -aq) | delete all stopped containers
- $docker container rm -f $(docker ps -aq) | delete all containers, running and stopped
- Logs:
- $docker logs cont_name / cont_id | get the logs of a container
- View Open Ports:
- $docker port cont_name / cont_id | view ports open on a container
- Port Mapping:
- #docker container run -it -p 3600:80 ubuntu /bin/bash | create a container with host port 3600 mapped to container port 80
- List of containers:
- $docker container ls (list of running containers)
- $docker container ls -a (list of all containers running & stopped)
- $docker ps (list of running containers only; add -a to include stopped ones)
- Delete containers:
- $docker container rm container_ID (it will delete stopped container)
- $docker container rm -f container_ID (it will delete container forcefully, if running)
- $docker rm -f $(docker ps -aq) delete all containers, running or stopped, in quiet mode.
- Create container:
- $docker container run ubuntu
- It creates the container, but the container exits immediately because no foreground process keeps it running.
- $docker container run ubuntu sleep 60
- It creates the container and occupies the terminal for 60 seconds; you cannot run any commands in this terminal during that time.
- open another terminal and run $watch docker container ls
- $docker container run -d ubuntu
- It will run in detach mode.
- $docker container run -it ubuntu
- It runs in interactive mode, attaching your terminal to the container so it keeps running.
- $docker container run -it ubuntu /bin/bash
- It will run in interactive mode, enter into ubuntu terminal.
- $docker container run -it --name webserver1 ubuntu /bin/bash
- It gives the container the name webserver1.
- Stop container:
- $docker container stop container_ID
- Stop all running containers:
- $docker container stop $(docker ps -q)
- Start container:
- $docker container start container_ID
- Restart container:
- $docker container restart container_ID
- Restart a container with a 10-second stop timeout:
- $docker restart -t 10 cont_name / cont_id
- Come out of container:
- If you type exit, you leave the container's terminal but the container stops.
- To detach without stopping it, press Ctrl+P then Ctrl+Q.
- Get into container:
- root@ubuntu-agent:~#docker container attach container_ID/name
- root@ubuntu-agent:~# docker exec -it 0fea384b2d41 /bin/bash
- Remove container: a running container cannot be removed; stop it first, then remove it.
- $docker container rm container_ID
- $docker container rm container_ID container_ID
- Remove all unused containers, volumes, images
- $docker system prune -a --volumes -f
- Remove container forcefully: a running container will also be removed:
- $docker container rm -f container_ID
- Define hostname, DNS while creating container:
- $docker container run -it --hostname webserver --dns 8.8.8.8 ubuntu:14.04
- $hostname
- $ifconfig
- $cat /etc/resolv.conf
- Port Mapping:
- The VM on which you install Docker is called the docker host; a docker bridge interface (docker0) is created on the docker host and assigned an IP range.

- Container IPs are assigned from this pool. An application running on a container IP can be accessed from the docker host only.
- To access the application from outside the docker host, configure port mapping.
- Perform the port mapping against the docker host's main interface (ens33) IP address (192.168.171.100); the internet is reachable on this host.
- Assign port while creating container: ubuntu container is creating and deploying apache2
- $root@ubuntu-agent:~# docker container run -it -p 3600:80 ubuntu /bin/bash
- root@0fea384b2d41:/# apt-get update (you are in ubuntu container terminal with /bin/bash)
- root@0fea384b2d41:/# apt-get install apache2
- root@0fea384b2d41:/# cd /var/www/html
- root@0fea384b2d41:/var/www/html# echo "Welcome to star Distributors / Port Mapping " > index.html
- root@0fea384b2d41:/var/www/html# service apache2 start

- Access the application from outside:
- $root@ubuntu-agent:~#ifconfig
- Copy the IP address of the host (ens33); in real deployments you would use the public IP address.
- open browser: http://192.168.171.100:3600
- Rename:
- $docker container rename container_ID newname
- root@ubuntu-agent:~# docker container rename 0fea384b2d41 apache-server1

- Copy: copy file to a container
- create a file
- $root@ubuntu-agent:~#echo "some text" > data1
- $root@ubuntu-agent:~#docker container cp data1 container_ID:/tmp/ (copies the file into the container's /tmp/ folder)
- Check in container's /tmp/ folder
- Kill: Forcefully stop the container.
- root@ubuntu-agent:~#docker container kill container_ID/name
- Wait: Block until one or more containers stop, then print their exit codes
- root@ubuntu-agent:~#docker container wait container_ID/name
- Pause / unpause:
- root@ubuntu-agent:~#docker container pause container_ID/name
- root@ubuntu-agent:~#docker container ls
- root@ubuntu-agent:~#docker container unpause container_ID/name
- Prune:
- $docker container prune (remove all stopped containers)
- Export:
- Exporting a container creates a .tar file:
- root@ubuntu-agent:~# docker container export 0fea384b2d41 > abc1.tar (instead of > you can use -o)
- root@ubuntu-agent:~# docker container export 0fea384b2d41 -o abc2.tar
- root@ubuntu-agent:~#ls (abc1.tar and abc2.tar created).

- Create image by importing the .tar file and deploy.
- root@ubuntu-agent:~#docker image import abc1.tar starimage1 (image with the name starimage1 will be created)

- root@ubuntu-agent:~# docker image ls
- root@ubuntu-agent:~# docker container run -it starimage1 /bin/bash

- Commit:
- Creating image of running container:
- root@ubuntu-agent:~#docker container commit container_ID image_name
- Limit the memory and CPU used by a container:
- root@ubuntu-agent:~# docker run -it --memory="512m" --cpus="1.5" ubuntu:1 (ubuntu:1 is a local image tag)
- root@ubuntu-agent:~# docker stats container_ID (to check memory utilization)
- Check utilization:
- root@ubuntu-agent:~# docker stats --no-stream
- Create a container with a specific user, using user ID
- root@ubuntu-agent:~#docker run -u 1001:1001 ubuntu:1
- Read only container, no modification is allowed inside container:
- root@ubuntu-agent:~# docker container run --read-only -it -d -p 5000:80 ubuntu:1 (container is read only).
- root@ubuntu-agent:~# docker exec -it be495dd5a482 /bin/bash
- root@be495dd5a482:/# mkdir abc
- mkdir: cannot create directory 'abc': Read-only file system
- Give permission on /tmp folder to make changes:
- root@ubuntu-agent:~# docker container run --read-only --tmpfs /tmp -it -d -p 5000:80 ubuntu:1 (mounts a writable tmpfs at /tmp)
- root@ubuntu-agent:~# docker exec -it ca621efbd40b /bin/bash
- root@ca621efbd40b:/# cd /tmp
- root@ca621efbd40b:/tmp# mkdir test (it creates the test folder)
- Find out how many times a container has restarted:
- root@ubuntu-agent:~# docker inspect ca621efbd40b | grep -i restartcount
- "RestartCount": 0,
- Health Check:
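- A sketch of a runtime health check via docker run flags (the nginx image and curl probe are illustrative; the probe command must exist inside the image):
- $docker run -d --name web --health-cmd="curl -f http://localhost/ || exit 1" --health-interval=30s --health-timeout=5s --health-retries=3 nginx
- $docker inspect --format='{{.State.Health.Status}}' web (shows starting, healthy, or unhealthy)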
- Check logs from host machine without going inside container:
- create a folder:
- root@ubuntu-agent:~# mkdir logs
- Create an nginx container and mount the logs folder over nginx's log directory:
- root@ubuntu-agent:~# docker run -v $(pwd)/logs:/var/log/nginx nginx
- root@ubuntu-agent:~#ls logs (two log files have been created by the nginx container)
- Tail logs of the container:
- Tail the last lines of the container's log and follow new output (here, the last 2 lines):
- root@ubuntu-agent:~# docker logs -f --tail 2 ca621efbd40b
- Find out whether a specific process is running inside a container:
- root@ubuntu-agent:~# docker exec -it ca621efbd40b sh -c "ps aux |grep java" (check java is running inside container)
- Container with restrictions: max processes = 100, read-only mode, memory = 256m:
- root@ubuntu-agent:~# docker run -it --pids-limit 100 --memory=256m --read-only ubuntu:1 /bin/bash
- Try to create a file (error: read-only file system).
Images
- Intro:
- An image is a combination of the binaries and libraries required for an application/software to run. Docker packages the required binaries and libraries of the software and calls it a docker image. You run that image as a container.
- $docker images (list of images in local repository )
- $docker pull sonarqube:lts-community (imagename:tag, where the tag selects the version; Docker checks the local repository first and, if the image is not found, pulls it from Docker Hub)
- location of image on docker host: /var/lib/docker/image/overlay2/imagedb/content/sha256/
Image commands
- $docker image --help | help on the image command
- $docker pull image_name:tag | download a docker image
- $docker search image_name | search for a docker image
- $docker images or $docker image ls | list all docker images
- $docker image ls -a | list all images, including intermediate layers
- $docker image rm image_ID | delete an image
- $docker image rm -f image_ID | delete an image forcefully
- $docker push image_name | upload a docker image
- $docker rmi image_name | delete a single docker image
- $docker system prune -a | delete all unused images (plus stopped containers, unused networks, and build cache)
- $docker commit container_name/container_id image_name | create a docker image from a container
- $docker build -t image_name . | create a docker image from a Dockerfile
- $docker inspect image_name | get detailed information about an image
- $docker image save -o tarfile_name image_name | save an image as a tar file
- $docker image load -i tarfile_name | load an image from a tar file
- $docker container commit container_ID image_name | create an image from a running container
- Creating a custom image of ubuntu with docker file.
- Pull ubuntu base image:
- root@ubuntu-agent:~# docker image pull ubuntu:14.04
- root@ubuntu-agent:~# docker image ls
- root@ubuntu-agent:~# docker image history 13b66b487594
- docker_file1.jpg
- Create a file: $vim Dockerfile (if you use a different file name, pass -f filename at build time, e.g. docker image build -f MyDockerfile -t img .)
- Build Image 1:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:1 . (-t sets the image tag; the trailing dot makes the current directory the build context, where Docker looks for the Dockerfile)
- If the Dockerfile has a custom name, add -f filename.
- custom ubuntu image with the name star_ubuntu:1 has been created.
- docker_file2.jpg
- Both IMAGE IDs are the same: we made no changes while building the custom image, so there is no additional layer in the image.
- Create a file test123 in the new custom image of ubuntu.
- Edit Dockerfile:
- root@ubuntu-agent:~# vim Dockerfile
- FROM ubuntu:14.04
RUN touch test123
- save & exit
- Build an image 2:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:2 .
- root@ubuntu-agent:~# docker image history 3e46222c9568
- The image ID has changed, and the history shows one added layer (/bin/sh -c touch test123); the remaining 5 layers are the same as in the base image.
- docker_file3.jpg
- Deploy custom image:
- root@ubuntu-agent:~# docker container run -it --name myserver star_ubuntu:2
- docker_file4.jpg
- Run apt-get update; install apache2, tree, and openssh-server & client; change to /var/www/html and edit index.html; start the apache2 service.
- Edit Dockerfile:
- root@ubuntu-agent:~# vim Dockerfile
- FROM ubuntu:14.04
RUN touch test123
RUN apt-get update && apt-get install -y apache2 # update and install apache2
RUN apt-get install -y tree openssh-server openssh-client # install tree and openssh
RUN cd /var/www/html # note: cd in a RUN step does not persist to later layers; use WORKDIR instead
RUN echo "Welcome to Star Distributors " > /var/www/html/index.html # change the index.html file
RUN service apache2 start # note: a service started at build time is not running in the final container; start it via CMD/ENTRYPOINT instead
- save & exit
- Build an image 3:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:3 .
- Deploy using star_ubuntu:3
- root@ubuntu-agent:~# docker container run -it --name webserver1 star_ubuntu:3
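- A variant sketch that serves the page when the container starts (illustrative, not the lab's exact file); since RUN service apache2 start only runs at build time, start Apache in the foreground with CMD:
- FROM ubuntu:14.04
RUN apt-get update && apt-get install -y apache2
RUN echo "Welcome to Star Distributors " > /var/www/html/index.html
EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
- Build and run: $docker image build -t star_ubuntu:web . then $docker container run -d -p 8080:80 star_ubuntu:web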
- Create an image tagged with the git commit: after committing to GitHub, take the short commit hash as the tag while creating the image.
- Boardgame repository is taken for this lab,
- install java and maven and Build the image.
- $docker build -t boardgame:$(git rev-parse --short HEAD) .
- Create an image with tag name as per the date
- root@ubuntu-agent:~# docker build -t ubuntu:release-$(date +%Y-%m-%d) .
- root@ubuntu-agent:~#docker image ls
- REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu release-2025-05-13 7d0907079ba5 15 seconds ago 211MB
- Remove dangling images (images with <none> as name and <none> as tag):
- root@ubuntu-agent:~# docker rmi $(docker images -f "dangling=true" -q)
- Find out which images were created after a particular image:
- docker image ls -f "since=ubuntu:1"
- Multi-Architecture/Platform Images:
- docker builder may not be installed; if you have installed Docker Desktop, the builder comes with it.
- $docker builder ls

- Install docker buildx:
- $sudo apt install docker-buildx (it is an extension to buildkit)

- By default the builder supports only the linux/amd64 architecture; create a new builder to also support arm64:
- $docker buildx create --name multiarch --platform linux/amd64,linux/arm64 --driver docker-container --bootstrap --use
- $docker builder ls

- Now it supports arm64 and amd64.
- create an image:
- $vim Dockerfile
- FROM ubuntu
CMD ["echo", "ubuntu image with multi platform"]
- $docker buildx build --platform linux/amd64,linux/arm64 -t ubuntu_multiplatform:v1 .

- $docker buildx build --platform linux/amd64,linux/arm64 --push -t ubuntu_multiplatform:v1 .
- Error: failed to solve: failed to push ubuntu_multiplatform:v1: server message: insufficient_scope: authorization failed
- Multi-platform images are not stored in the local image store; they must be pushed to a registry such as Docker Hub, under your Docker Hub username.
- $docker buildx build --platform linux/amd64,linux/arm64 --push -t aziz27uk/ubuntu_multiplatform:v1 .
- Go to docker hub and check the image:

- Test: Create two VMs (amd64 & arm64)
- deploy on both vms
- $docker run aziz27uk/ubuntu_multiplatform:v1

Host
- The host OS/VM on which Docker is installed is called the docker host.
Client
- The terminal used to access Docker is called the docker client. When you install Docker, a client is also installed; it runs in the background, takes your commands, and passes them to another background process called the daemon.
Daemon
- The daemon analyses the type of each command and routes it to one of the following:
- Docker Images:
- Docker Containers:
- Docker Registry: it is where docker images are saved.
- public registry: Maintained by hub.docker.com
- private registry: hosted and access-controlled privately by an organization
Volume Storage

- Docker volumes are a way to persist data generated or used by Docker containers. They store and manage data separately from the container itself, so the data persists even if the container is stopped or removed. Volumes are commonly used to share data between containers or to keep data separate from the container's file system.
- We install an OS on a VM and install the Docker package on it; this makes it a docker host.
- Containers are created on the docker host; data stored inside a container is ephemeral and is lost when the container is removed.
- We can create storage on the docker host and share it with containers, but if the docker host is deleted or corrupted, the data may be lost.
- The best solution is to create SAN or cloud storage, mount it on the docker host, and share it with the containers.
- Persistent Data: Docker containers are typically ephemeral, meaning their file systems are isolated and any data generated within a container is lost when the container is removed. Volumes provide a way to store data outside of containers, ensuring that it persists across container lifecycle events.
- Volume types:
- volume8.jpg
- Host-Mounted Volumes:
- Host-mounted volumes allow you to specify a directory from the host machine that is mounted into the container. This can be useful when you want to share data between the host and container.
- docker run -v /path/on/host:/path/in/container myapp
- Example: Mount the /var/data directory on the host machine to the /data directory in the container.
- docker run -v /var/data:/data myapp
- Anonymous Volumes:
- Anonymous volumes are created automatically by Docker and are managed for you. They are typically used when you don't need to manage the volume explicitly, such as for temporary or cache data.
- docker run -v /path/in/container myapp
- Example: Create an anonymous volume for a PostgreSQL database container.
- docker run -v /var/lib/postgresql/data postgres
- Named Volumes:
- Named volumes are explicitly created and given a name, making it easier to manage and share data between containers. They are useful for maintaining data between container restarts and for sharing data between multiple containers.
- docker volume create mydata
docker run -v mydata:/path/in/container myapp
- Example: Create a named volume called mydata and use it to persist data for a web application container.
- docker volume create mydata
docker run -v mydata:/app/data myapp
- Volume Management: You can create, list, inspect, and remove volumes using Docker CLI commands like docker volume create, docker volume ls, docker volume inspect, and docker volume rm.
- $docker container inspect container_ID (check volume is attached)
- $docker volume ls
- Create a volume on docker host and shared with container:
- $docker volume --help
- create create a volume
- inspect Display detailed information on one or more volumes
- ls list volumes
- prune remove all unused local volumes
- rm remove one or more volumes
- $docker volume create volume1
- $docker volume ls
- $docker volume inspect volume1

- Attach / mount volume to container:
- creating a container using image from docker hub aziz27uk/ubuntu_apache, attaching a volume at /tmp folder
- root@ubuntu-agent:~# docker container run -it -v volume1:/tmp --name webserver aziz27uk/ubuntu_apache /bin/bash
- Now go to the /tmp folder and create some files and folders; you can also view them on the docker host, since the volume lives on the docker host and is mounted at the container's /tmp folder.

- Go to docker host location /var/lib/docker/volumes/volume1/_data and check files

- Delete the container, then create a new container and attach the same volume:
- root@ubuntu-agent:~# docker container run -it -v volume1:/tmp --name webserver2 aziz27uk/ubuntu_apache /bin/bash
- Go to the /tmp folder and check that the data stored in this volume is still available.

- Add new files: touch abc11 abc12 abc13
- Attach the same volume to a new container
- root@ubuntu-agent:~# docker container run -it -v volume1:/tmp --name webserver2 aziz27uk/ubuntu_apache /bin/bash
- This container will also see the files abc11, abc12, and abc13, as the same volume is attached to both containers.

- Create a mysql container and attach volume
- Pull docker image mysql
- root@ubuntu-agent:~# docker image pull mysql
- root@ubuntu-agent:~#docker image ls
- root@ubuntu-agent:~#docker image inspect mysql
- Locate the volume mount path defined in the image:
- When you deploy a mysql container, the volume path is set to /var/lib/mysql.

- Deploy mysql:
- root@ubuntu-agent:~# docker container run -it --name sql-server -e MYSQL_ALLOW_EMPTY_PASSWORD=true mysql /bin/bash
- Deploying mysql with the name sql-server and setting the environment variable MYSQL_ALLOW_EMPTY_PASSWORD.
- bash-5.1# mysql (type mysql and press enter)

- mysql>show databases;
- mysql>create database finance;
- mysql>show databases;
- Exit sql and check the volumes:

- There are 2 volumes: volume1 created earlier on the docker host, and the volume created for sql.
- To inspect the sql volume, run:
- root@ubuntu-agent:~#docker volume inspect b0804229a825075de91262b5190f33d05654825ce4b961be496f481f24509200

- check volumes

- If you delete the container, the volume remains available at the docker host location.
- Deploy mysql and attach existing mysql volume
- root@ubuntu-agent:~# docker container run -it -v b0804229a825075de91262b5190f33d05654825ce4b961be496f481f24509200:/var/lib/mysql --name sql-server -e MYSQL_ALLOW_EMPTY_PASSWORD=true mysql /bin/bash
- Check the volume location defined in the image and use it as the mount point (/var/lib/mysql).
- Delete volume:
- root@ubuntu-agent:~# docker volume rm 83015dff574c6be8f029a41d1bd1916ab3b0d803bcb0336cbea00c46a3b4ca03

- Stop: $docker container stop container_ID
- Remove the container: $docker container rm container_ID
- Delete the volume: $docker volume rm volume_ID
- Project: MongoDB: volume mount for database using docker compose
- Docker Compose to set up a MongoDB container and a MongoDB Express (Mongo-Express) container.
- Create a Ubuntu VM.
- Install Docker
- Install Docker compose
- Create docker compose file:
- $vim docker-compose.yml
-
version: '3'
services:
  mongodb:
    image: mongo
    container_name: mongodb
    networks:
      - mongo-network
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=123
    volumes:
      - mongodb_data:/data/db
  mongo-express:
    image: mongo-express
    container_name: mongo-express
    networks:
      - mongo-network
    ports:
      - "8081:8081"
    environment:
      - ME_CONFIG_MONGODB_SERVER=mongodb
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=123
      - ME_CONFIG_BASICAUTH_USERNAME=admin
      - ME_CONFIG_BASICAUTH_PASSWORD=123
networks:
  mongo-network:
    driver: bridge
volumes:
  mongodb_data:
- check code with YAML Lint.
- We define two services: mongodb and mongo-express.
- The mongodb service uses the official MongoDB image and specifies a named volume mongodb_data for persisting MongoDB data.
- We set environment variables for the MongoDB container to create an initial admin user with a username and password.
- The mongo-express service uses the official Mongo Express image and connects to the mongodb service using the ME_CONFIG_MONGODB_SERVER environment variable.
- We also set environment variables for the Mongo Express container to configure it.
- Now, navigate to the directory containing the docker-compose.yml file in your terminal and run:
- $docker-compose up
- Docker Compose will download the necessary images (if not already downloaded) and start the MongoDB and Mongo Express containers. You can access the MongoDB Express web interface at http://localhost:8081 and log in using the MongoDB admin credentials you specified in the docker-compose.yml file.
- The data for MongoDB will be stored in a Docker named volume, mongodb_data, ensuring that it persists even if you stop and remove the containers.
- To stop the containers, press Ctrl+C in the terminal where they are running, and then run:
- $docker-compose down
- This will stop and remove the containers, but the data will remain in the named volume for future use; create a new container and mount the volume to reuse it.
Create Containers: tomcat, ubuntu, nginx, httpd, mysql, jenkins, apache2, python application,
Tomcat
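- A minimal sketch using the official tomcat image (the container name and port mapping are illustrative; the image's webapps directory is empty by default, so deploy a .war into /usr/local/tomcat/webapps to serve content):
- #docker pull tomcat
- #docker run -d --name mytomcat -p 8080:8080 tomcat
- Open a browser at http://host_IP:8080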
Jenkins
- Step 1: #docker pull jenkins/jenkins (the old jenkins image has been deprecated for over 2 years in favor of the jenkins/jenkins:lts image provided and maintained by the Jenkins community as part of the project's release process. Use:
docker pull jenkins/jenkins
docker run -p yourPortNo:8080 --name=jenkins-master -d jenkins/jenkins)
- Step 2: #docker run --name myjenkins -p 9090:8080 -d jenkins/jenkins
- Step 3: access jenkins: open a browser at http://publicIP:9090, or in a VM use the docker host IP, e.g. http://192.168.171.100:9090
- Step 4: Recover the password:
- Get into jenkins and open a bash prompt inside the container:
- #docker exec -it myjenkins bash
- #cat /var/jenkins_home/secrets/initialAdminPassword (the password will be displayed)
- You can access the jenkins home directory either by mapping it (/var/jenkins_home) to your machine's local file system,
- or by specifying the --volume option in the run command (--volume jenkins-data:/var/jenkins_home) and accessing it through the terminal.
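- A combined sketch of step 2 with a persistent home (jenkins-data is an illustrative volume name):
- #docker run --name myjenkins -p 9090:8080 -v jenkins-data:/var/jenkins_home -d jenkins/jenkins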
ubuntu
- #docker run --name myubuntu -itd ubuntu:14.04 (-it keeps the shell alive so the container stays running)
- #docker container ls
- To get into container: #docker exec -it docker_ID /bin/bash
Nginx
- #docker run --name mynginx -P -d nginx (docker maps the exposed port to a random high host port, typically 32768+; with lowercase -p you define a port number of your choice)
- Creating a container in detached mode, pulling the image if needed, and assigning a port automatically using -P (capital P). Port mapping is done for applications that are accessed through a browser.
- $docker container ls (get port number)
- To get into container: #docker exec -it docker_ID /bin/bash
- Access through browser: http://192.168.171.100:32678 (the port mapping was defined while creating the container; the application can be accessed from outside using the host IP with that port number)
httpd
- #docker run --name webserver -P -d httpd
- #docker container ls (note the mapped port number)
- To get into container: #docker exec -it docker_ID /bin/bash
- open browser and use with host IP http://192.168.171.100:port_number
mysql
- It requires an environment variable when the container is created.
- #docker run --name mysql -d -e MYSQL_ROOT_PASSWORD=India123 mysql:5 (downloads the mysql version 5 image and creates the container)
- To open interactive terminal in bash
- #docker exec -it mysql /bin/bash
- You will be in the mysql container; to connect to the mysql database:
- #mysql -u username -p
- You will get the mysql prompt.
- mysql> show databases; (it will show databases information_schema, mysql, performance_schema, sys)
- mysql> use sys;
- Create the demo tables: search for the emp and dept tables for mysql (https://justinsomnia.org/2009/04/the-emp-and-dept-tables-for-mysql/) and copy the code.
- Paste it; it will create the two tables.
- mysql> select * from emp;
- mysql> select * from dept;
Apache2
- Create a ubuntu container in interactive mode and get into container:
- $docker container run -it ubuntu /bin/bash
- update ubuntu:
- root@8b75a0cba7e2:/# apt-get update
- Install apache2 in the container:
- root@8b75a0cba7e2:/# apt-get install apache2
- Go to html folder to make changes in index.html
- root@8b75a0cba7e2:/#cd /var/www/html
- root@8b75a0cba7e2:/var/www/html#echo "Welcome to Star Distributors / Training " > index.html
- Start apache2 service (httpd)
- root@8b75a0cba7e2:/var/www/html#service apache2 start (make sure you are inside the container and in the html folder)
- Get more details of container with inspect command
- root@ubuntu:~#docker container inspect 8b75a0cba7e2

- Access web site
- Error:
- Check resource utilisation by container:
- $docker container top container_ID
- $docker container stats container_ID
- $free -h
Python application
- The following is a Python application using the Flask module (see the sketch after these steps):
- Flask is a dependency which you install on your system:
- $apt install python3-pip
- $pip install flask
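- A minimal sketch of such a Flask application (the file name app.py and the port are illustrative):
- # app.py
from flask import Flask
app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask inside Docker"

if __name__ == "__main__":
    # bind to 0.0.0.0 so the app is reachable from outside the container
    app.run(host="0.0.0.0", port=5000)
- A matching Dockerfile sketch (base image and tag illustrative):
- FROM python:3.9-slim
WORKDIR /app
COPY app.py .
RUN pip install flask
EXPOSE 5000
CMD ["python", "app.py"]
- Build and run: $docker build -t flask-app . then $docker run -d -p 5000:5000 flask-app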
Mongo DB
- $docker run -d -p 27017:27017 --name star_mongoDB -e MONGO_INITDB_ROOT_USERNAME=admin -e MONGO_INITDB_ROOT_PASSWORD=123 mongo
Registry: Docker, ECR(Elastic Container Registry of AWS), Push & Pull images,
Docker Public Repository
Docker Private Repository
- You create a custom image with the required OS, applications, plugins, extensions, etc. It can be shared online so that a particular person can use and deploy it after entering credentials.
- Create an account with Docker Hub and upload your images: hub.docker.com (username = aziz27uk or aziz27uk@yahoo.co.uk)
- In a private repository, users need a username and password in order to download images.
Docker Alpine Repositiry
- In this repository the images are very small, whereas the standard official images are much larger.
Upload image and deploy from Docker Registry
- Create a image
- Exporting a container creates a .tar file:
- root@ubuntu-agent:~# docker container export 0fea384b2d41 > abc1.tar (instead of > you can use -o)
- root@ubuntu-agent:~# docker container export 0fea384b2d41 -o abc2.tar
- root@ubuntu-agent:~#ls (abc1.tar and abc2.tar created).

- Create image by importing the .tar file and deploy.
- root@ubuntu-agent:~#docker image import abc1.tar starimage1 (image with the name starimage1 will be created)

- root@ubuntu-agent:~# docker image ls
- root@ubuntu-agent:~# docker container run -it starimage1 /bin/bash

- Create a tag for the image: the repository part of the tag must be unique and match the username used to create the Docker Hub account (aziz27uk):
- $docker tag starimage1 aziz27uk/ubuntu_apache

- $docker image ls (check tag is applied)
- Upload image(starimage1)
- $docker image push aziz27uk/ubuntu_apache (public by default)

- Error: requested access to the resource is denied
- Login to Docker Hub:
- $docker login (enter credentials)

- $docker image push aziz27uk/ubuntu_apache

- image pushed, check in docker hub

- Deploy container using custom image (aziz27uk/ubuntu_apache)
- root@ubuntu-agent:~# docker container run -it --name webserver aziz27uk/ubuntu_apache:latest /bin/bash
- Docker first checks the local repository; if the image is not found there, it checks Docker Hub.

- The data1 folder was created before the image was created.
Own Private Registry at on-prem
- Instead of using the Docker registry to upload and share images, you can create your own registry on one of your computers and share it with other branches of the company. Other branches can access it via the public IP address.

- Lab:
- Create a VM
- Install docker
- $apt-get update
- $sudo su - (root user access)
- $sudo apt install docker.io
- Pull docker registry image from docker hub
- $docker image pull registry
- $docker image pull nginx
- $docker image pull ubuntu
- $docker image ls (3 images: registry, nginx, ubuntu)
- $docker container ls
- Deploy registry image (registry works at 5000 port)
- root@ubuntu-agent:~# docker container run -itd -p 5000:5000 --name star_registry registry
- root@ubuntu-agent:~#docker container ls
- Check for images in the private registry:
- $root@ubuntu-agent:~#apt-get install elinks (a text-mode browser)
- open browser and run http://127.0.0.1:5000/v2/_catalog

- There are no images in the registry yet.
- Push image nginx to private registry:
- root@ubuntu-agent:~# docker image tag nginx 127.0.0.1:5000/nginx
- root@ubuntu-agent:~#docker image ls (it will show all images including tagged one)

- Now push the tagged image to the private registry:
- root@ubuntu-agent:~#docker image push 127.0.0.1:5000/nginx

- open browser and run http://127.0.0.1:5000/v2/_catalog

- nginx image has been pushed to private registry.
- Use the VM's IP address, 192.168.171.100, or the VM's public IP address:
- root@ubuntu-agent:~#ifconfig

- Tag it:
- root@ubuntu-agent:~# docker image tag nginx 192.168.171.100:5000/nginx
- Push it to the local registry:
- root@ubuntu-agent:~#docker image push 192.168.171.100:5000/nginx

- The above interface accepts secure HTTPS only; either bypass/skip HTTPS or install a certificate.
- Bypass/skip HTTPS (a daemon.json sketch follows):
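- A minimal sketch, assuming a systemd-managed Docker engine; the registry address is the one used above:
- $vim /etc/docker/daemon.json
- {
  "insecure-registries": ["192.168.171.100:5000"]
}
- $systemctl restart docker (docker will now push/pull to this registry over plain HTTP)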
- install certificate for HTTPS:
- create certificate:
- root@ubuntu-agent:~#mkdir cert
- root@ubuntu-agent:~# openssl req -new -newkey rsa:2048 -nodes -sha256 -keyout cert/domain.key -x509 -days 365 -out cert/domain.crt
- Country Name (2 letter code) [xx]: enter
- state of province name (full name) []: enter
- locality time (ex, city) [Default city]: enter
- Organization Name (ex, company) [Default Company Ltd]: enter
- Organizational Unit Name (ex, section) []: enter
- Common Name (eg, your name or your server's hostname []: repo.docker.local
- Email Address: []: enter
- cd /etc/docker
- mkdir certs.d
- cd certs.d
- mkdir repo.docker.local:5000
- cd
- cd cert/
- cp domain.crt /etc/docker/certs.d/repo.docker.local\:5000/ca.crt
- cd
- systemctl restart docker
- root@ubuntu-agent:~# docker container run -d -p 5000:5000 --name secure_registry -v $(pwd)/cert/:/cert -e REGISTRY_HTTP_TLS_CERTIFICATE=/cert/domain.crt -e REGISTRY_HTTP_TLS_KEY=/cert/domain.key registry
- root@ubuntu-agent:~# docker image tag nginx repo.docker.local:5000/nginx
- docker image ls
- root@ubuntu-agent:~# docker image push repo.docker.local:5000/nginx
- Error: private_registry12.jpg
- Edit /etc/hosts and add: 192.168.171.100 repo.docker.local
- root@ubuntu-agent:~# docker image push repo.docker.local:5000/nginx
ECR - Elastic Container Registry (AWS)
- Create a Ubuntu VM and install Docker.
- $sudo apt update
- $sudo apt install docker.io (Install docker from local repository)
- Run hello-world to check docker installation:
- $docker pull hello-world (by default only the root user can access docker; assign permission to your user)

- When you install Docker, a group named docker is created; any user added to this group can run docker commands. By default, only root has access.
- When you create a user, a group with the same name is also created; this group is the user's primary group. You can add additional (secondary) groups to the user.
- Add user (abdul) to the docker group:
- $sudo usermod -aG docker abdul (-a appends, -G specifies the supplementary group; docker is the group, abdul is the user). Log off and log on, or run $newgrp docker to apply the group change in the current session.
- Start docker Service:
- $systemctl start docker or
- $sudo service docker start
- $systemctl enable docker
- $docker info
- Check docker status:
- $sudo service docker status
- Login to AWS and search Elastic Container Registry:
- search elastic container registry.
- create a repository.
- Repository name:
- 565393050355.dkr.ecr.eu-west-2.amazonaws.com/star_ecr (565393050355.dkr.ecr.eu-west-2.amazonaws.com = namespace, star_ecr = repository name).
- Encryption configuration:
- AES-256: Industry standard Advanced Encryption Standard (AES) encryption
- AWS KMS: AWS Key Management Service (KMS)
- Create
- ecr1.jpg
- click star_ecr to check images in this repository:
- ecr2.jpg
- Click "view push commands" to get the commands to push images to this repository from docker.
- ecr3.jpg
- First, install the AWS CLI on the machine from which you push images, so you can supply the AWS credentials.
- $curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
- $unzip awscliv2.zip
- $sudo ./aws/install
- $aws configure
- It is not best practice to push and pull images to the repository using the root user; create a user, give it permission, and perform the task as that user.
- Create a user:
- Go to IAM > Users > Create user, enter a name, tick "Provide user access to the AWS Management Console", and select "I want to create an IAM user". Set a custom password, then next.
- By default this user is added to the Admins group; assign a policy so that this user can push/pull images from the private repository:
- Click "Attach policies directly" and choose AmazonEC2ContainerRegistryFullAccess.
- click create.
- Click on created user and go to security credentials>Access keys> select command line interface (CLI)>next>create access key and copy both access and secret key.
- Login to AWS Console with CLI:
- $aws configure
- AWS Access key ID: AKIAYHJANJLZRSJXLAVY
- AWS Secret Access key: EJ6oegC3e2O2ZnaCF9L/1T4lcJVemC1o16kPvSLUZ
- Default Region Name: eu-west-2
- Default output format: table
- Now copy the command and paste it.
- Retrieve an authentication token and authenticate your Docker client to your registry:
- aws ecr get-login-password --region eu-west-2 | docker login --username AWS --password-stdin 565393050355.dkr.ecr.eu-west-2.amazonaws.com
- Error: An error occurred (InvalidSignatureException) when calling the GetAuthorizationToken operation: Signature not yet current: 20250601T172510Z is still later than 20250601T123631Z (20250601T122131Z + 15 min.)
Error: Cannot perform an interactive login from a non TTY device
- Solution: sudo apt-get install ntp, then sudo service ntp start (the error is caused by clock skew on the VM)
- aws ecr get-login-password --region eu-west-2 | docker login --username AWS --password-stdin 565393050355.dkr.ecr.eu-west-2.amazonaws.com
- Login Succeeded.
- $docker images (list of images )
- Create tag for the image:
- $docker tag secret_santa:v1 565393050355.dkr.ecr.eu-west-2.amazonaws.com/star_ecr:latest
- $docker images
- 565393050355.dkr.ecr.eu-west-2.amazonaws.com/star_ecr:latest
- Push image to ECR private repository:
- $docker push 565393050355.dkr.ecr.eu-west-2.amazonaws.com/star_ecr:latest (with this tag docker will push image to AWS private registry)
- Check in the AWS ecr images
- Pull the image from ECR private registry:
- $aws configure list (check that the credentials are added)
- copy the url from AWS ecr image: 565393050355.dkr.ecr.eu-west-2.amazonaws.com/star_ecr:latest
- $docker pull <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<repository_name>:<tag>
- $docker pull 565393050355.dkr.ecr.eu-west-2.amazonaws.com/star_ecr:latest
- Create the container using the AWS ECR private registry image:
- $docker run -d --name star_secretsanta -p 8081:8080 565393050355.dkr.ecr.eu-west-2.amazonaws.com/star_ecr:latest
Docker Network
Intro
- When you install Docker on a VM, it creates a docker bridge.
- The docker0 interface is created and assigned the IP range 172.17.0.0/16, with 172.17.0.1 used as the gateway.
- $ifconfig (run on docker host )
- The VM on which Docker is installed is called the docker host; it has its own private/public IP address (e.g., 192.168.171.100).
- This VM has the interface eth0 or ens33 with the IP address 192.168.171.100.
- Within this network you can connect to a container directly using its IP address (from the 172.17.0.0/16 range).
- To connect from outside, you hit the docker host IP address 192.168.171.100, which is mapped to the container IP address.
- If you have multiple containers that need to be reachable from outside, map each container to a distinct host port.
- In the port configuration, a host port number is bound to a container IP; to access a container from outside the network, you hit the docker host IP with that port number and it connects you to the container, for example:
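- A sketch with two containers on distinct host ports (the nginx image and port numbers are illustrative):
- $docker run -d --name web1 -p 8081:80 nginx
- $docker run -d --name web2 -p 8082:80 nginx
- From outside: http://192.168.171.100:8081 reaches web1, and http://192.168.171.100:8082 reaches web2.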
- $docker network ls
- It will show three networks:
- bridge:
- After docker installation, a bridge network is created.
- $docker network inspect bridge_ID (To get the range of IP pool)

- host:
- none:
- Virtual Interface:
- It enables communication between the docker host and a container.
- It is visible only while the container is running.

- $docker network inspect bridge_ID (shows the containers running on this network, their IPs, etc.)

- $route -n
Container hostname, DNS, Route
- Get into running container:
- root@ubuntu-agent:~# docker exec -it 920b4d076d84 /bin/bash
- container hostname:
- root@920b4d076d84:/# hostname
- container DNS:
- root@920b4d076d84:/# cat /etc/resolv.conf
- Routing Table:
- root@920b4d076d84:/# route -n
Create a network, custom IP range, create container with specific IP
- root@ubuntu-agent:~# docker network --help
- root@ubuntu-agent:~# docker network create star_network1
- Network is created: 53814025c4a2743eee335e51ac2058b6957167f69fa66edc8af454c77670d581
- root@ubuntu-agent:~# docker network inspect star_network1
- It is assigned the subnet "172.18.0.0/16" (the default bridge uses 172.17.0.0, and each new network gets the next range).
- Disconnect container from a network:
- root@ubuntu-agent:~#docker network disconnect star_network2 containerID.
- Delete network:
- root@ubuntu-agent:~# docker network rm star_network1
- Assign custom IP range:
- root@ubuntu-agent:~# docker network create star_network2 --subnet=10.10.10.0/24
- IP range:
- root@ubuntu-agent:~# docker network inspect star_network2
- "Subnet": "10.10.10.0/24"
- Create a container in the above network:
- root@ubuntu-agent:~# docker container run -it --network=star_network2 ubuntu:14.04
- check IP address assign:
- root@d9f4cfc451a4:/# ifconfig

- Create a container in the above network with a specific IP address:
- root@ubuntu-agent:~# docker container run -it --network=star_network2 --ip 10.10.10.50 ubuntu:14.04
- root@7aa6235915bf:/# ifconfig
- inet addr:10.10.10.50 Bcast:10.10.10.255 Mask:255.255.255.0
- Containers on the same network can ping each other, but cannot ping containers on a different network:
- ping 172.17.0.2 (cannot ping: different network)
- ping 10.10.10.x/24 (works: same network)
Communication between different network containers with/without port mapping

- Create two networks
- Create first network star_network1 (10.10.10.0/24)
- root@ubuntu-agent:~# docker network create star_network1 --subnet=10.10.10.0/24
- root@ubuntu-agent:~# docker network ls
- root@ubuntu-agent:~# docker network inspect 34c63a095cf6
- Create the second network star_network2 (20.20.20.0/24):
- root@ubuntu-agent:~# docker network create star_network2 --subnet=20.20.20.0/24
- Deploy ubuntu:14.04 and install apache2 in a container on each network:
- First container deployed on First network star_network1
- root@ubuntu-agent:~# docker container run -it --network=star_network1 --ip 10.10.10.100 -p 5000:80 ubuntu:14.04 (port mapping: 5000 is the host port and 80 is the container port for accessing the application)
- root@434c3d51f440:/# ifconfig
- inet addr:10.10.10.100 Bcast:10.10.10.255 Mask:255.255.255.0
- root@434c3d51f440:/#apt-get update
- root@434c3d51f440:/# apt-get install apache2
- root@434c3d51f440:/# cd /var/www/html
- root@434c3d51f440:/var/www/html# echo "Welcome to site created on star_network1 with the IP 10.10.10.100" > index.html
- root@434c3d51f440:/var/www/html# service apache2 start
- Second container deployed on second network star_network2
- root@ubuntu-agent:~# docker container run -it --network=star_network2 --ip 20.20.20.200 -p 6000:80 ubuntu:14.04 (port mapping: 6000 is the host port and 80 is the container port used to access the application)
- root@67cf40ae74c1:/# ifconfig
- inet addr:20.20.20.200 Bcast:20.20.20.255 Mask:255.255.255.0
- root@67cf40ae74c1:/#apt-get update
- root@67cf40ae74c1:/# apt-get install apache2
- root@67cf40ae74c1:/# cd /var/www/html
- root@67cf40ae74c1:/var/www/html# echo "Welcome to site created on star_network2 with the IP 20.20.20.200" > index.html
- root@67cf40ae74c1:/var/www/html# service apache2 start
- root@67cf40ae74c1:~# apt-get install wget (to test access to the site on container 1)
- Access the site with port mapping:
- root@67cf40ae74c1:~# wget 10.10.10.100 (it will not connect: container 2 is on a different network from container 1)
- To access it, hit the Docker host IP with port 5000:
- root@67cf40ae74c1:~# wget 192.168.171.100:5000

- Try from the first container to access the application on the second container:
- root@434c3d51f440:~# apt-get install wget
- root@434c3d51f440:~# wget 192.168.171.100:6000
- Access without port mapping
- code
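- The notes do not show the commands here; a sketch of access without port mapping (assuming the container IDs above): attach container 2 to container 1's network, after which the site is reachable directly:
- root@ubuntu-agent:~# docker network connect star_network1 67cf40ae74c1
- root@67cf40ae74c1:~# wget 10.10.10.100 (now reachable without going through the host port)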
Communication between two containers, frontend and backend in MongoDB
- FrontEnd container = Mongo Express: access MongoDB through a URL.
- BackEnd container = MongoDB: the actual database, accessed through the front end.
- code
- code
code
code
Docker File
Intro
- It is a simple text file in which you define the following keywords (case sensitive):
- FROM: Used to specify the base image from which the docker file has to be created.
- MAINTAINER: This represents the name of the organization or the author who created this docker file (deprecated in favour of LABEL).
- CMD: This is used to specify the initial command that should be executed when the container starts.
- ENTRYPOINT: Used to specify the default process that should be executed when container starts. It can also be used for accepting arguments from the CMD instruction.
- RUN: Used for running linux commands within the container. It is generally helpful for installing the software in the container.
- USER: Used to specify the default user that the container should run and log in as.
- WORKDIR: Used to specify default working directory in the container.
- COPY: Copying the files from the host machine to the container.
- ADD: Used for copying files from host to container, it can also be used for downloading files from remote servers.
- ENV: Used for specifying the environment variables that should be passed to the container.
- EXPOSE: Used to specify the internal port of the container.
- VOLUME: Used to specify the default volume that should be attached to the container.
- LABEL: Used for giving label to the container.
- STOPSIGNAL: Used to specify the system signal (e.g. SIGTERM) that is sent to the container to stop it.
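- A short illustrative Dockerfile combining several of these keywords (a sketch only; the image tag, file names, and user are examples, not from any specific project):
FROM ubuntu:22.04 (base image)
LABEL maintainer="star_distributors" (label attached to the image; preferred over the deprecated MAINTAINER)
ENV APP_HOME=/usr/src/app (environment variable available in the container)
WORKDIR $APP_HOME (default working directory)
COPY app.sh . (copy a file from the host into the image)
RUN chmod +x app.sh (command executed at build time)
EXPOSE 8080 (documents the container's internal port)
USER nobody (default user for the running container)
CMD ["./app.sh"] (default command when the container starts)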
Docker file examples
Writing Dockerfile for Java Based Application where artifact (.jar) file is generated with maven.
- Create a ubuntu VM
- Clone the project from git hub into a ubuntu machine.
- To build this application we should have jdk and maven.
- Install jdk:
- Install maven:
- root@master:~/secretsanta# sudo apt install maven -y (it will install maven and jdk together)
- root@master:~/secretsanta#mvn -version
- root@master:~/secretsanta#java --version
- Build package with maven:
- $mvn package (it packages the code along with its dependencies and generates the artifact, the executable of the package)
- $cd target
- secretsanta-0.0.1-SNAPSHOT.jar file is created
- dockerfile_javabased1.jpg
- Create a Dockerfile to deploy a container using the above .jar file: we require a Linux (Alpine) base image with a JDK to run this application. We do not need maven, as the .jar file is already built.
- $vim Dockerfile
-
FROM openjdk:8u151-jdk-alpine3.7 (search alpine linux based with jdk image in docker hub)
EXPOSE 8080
ENV APP_HOME /usr/src/app (defining an environment variable APP_HOME for the path /usr/src/app; the app folder will be created)
COPY target/secretsanta-0.0.1-SNAPSHOT.jar $APP_HOME/app.jar (copying contents of secretsanta-0.0.1xxx.jar to app.jar)
WORKDIR $APP_HOME (work directory will be /usr/src/app)
ENTRYPOINT exec java -jar app.jar (or CMD ["java","-jar","app.jar"]; as soon as the container starts, this command runs)
- Build image:
- root@master:~/secretsanta# docker build -t secret_santa:v1 .
- Warnings: - LegacyKeyValueFormat: "ENV key=value" should be used instead of legacy "ENV key value" format (line 5)
- JSONArgsRecommended: JSON arguments recommended for ENTRYPOINT to prevent unintended behavior related to OS signals (line 11)
- Solution: ENV APP_HOME=/usr/src/app
- Solution: CMD ["java","-jar","app.jar"]
- Make changes in the Dockerfile for both lines and run the build again; the corrected file is shown below.
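- The corrected Dockerfile with both fixes applied:
FROM openjdk:8u151-jdk-alpine3.7
EXPOSE 8080
ENV APP_HOME=/usr/src/app
COPY target/secretsanta-0.0.1-SNAPSHOT.jar $APP_HOME/app.jar
WORKDIR $APP_HOME
CMD ["java","-jar","app.jar"]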
- root@master:~/secretsanta# docker images
- secret_santa:v1 image created
- Deploy image:
- root@master:~/secretsanta# docker run -d --name star_secretsanta -p 8081:8080 secret_santa:v1
- root@master:~/secretsanta#docker container ls
- root@master:~/secretsanta#docker container inspect container_ID
- Access application in browser:
- Define ubuntu, jdk, maven in the Dockerfile
- In the above scenario we created the .jar file first and then built the image using the Dockerfile. Here we instead define maven in the Dockerfile so the build happens inside the image; a sketch follows.
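- A hedged sketch of such a Dockerfile (a multi-stage build; the image tags are assumptions, not from the notes):
# Stage 1: build the .jar inside the image, so the host needs neither jdk nor maven
FROM maven:3.8-openjdk-17 AS build
WORKDIR /usr/src/app
COPY . .
RUN mvn package -DskipTests
# Stage 2: run the artifact on a slim JDK image
FROM openjdk:17-jdk-slim
ENV APP_HOME=/usr/src/app
WORKDIR $APP_HOME
COPY --from=build /usr/src/app/target/secretsanta-0.0.1-SNAPSHOT.jar app.jar
EXPOSE 8080
CMD ["java","-jar","app.jar"]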
Writing Docker file for NodeJS Based Application
- Create a ubuntu VM
- Clone the project from git hub into a ubuntu machine.
- Create a Dockerfile:
- root@master:~/nodejs-webapp-public# vim Dockerfile
- FROM node:alpine
COPY ./ ./
RUN npm install
EXPOSE 8081
CMD ["npm", "start"]
- FROM node:alpine (base image of nodejs from alpine)
- COPY ./ ./ (copying all contents of nodejs application to container and run app.js from container)
- RUN npm install (installing dependencies which are mentioned in package.json)
- EXPOSE 8081 : port defining
- CMD ["npm","start"]
- Build Image :
- root@master:~/nodejs-webapp-public# docker build -t nodejs_webapp:v1 .
- root@master:~/nodejs-webapp-public#docker images
- nodejs_webapp:v1 image is created.
- Deploy Image:
- root@master:~/nodejs-webapp-public# docker run -d --name star_nodejsapp -p 8084:8081 nodejs_webapp:v1
- port 8081 is defined in the code to run this application in the container, 8084 is the port mapping for host.
- root@master:~/nodejs-webapp-public# docker container ls
- root@master:~/nodejs-webapp-public# docker container inspect container_ID
- Access application:
- http://192.168.171.200:8084/
Code
Creating a custom image ubuntu with Dockerfile:
- A custom image to deploy on containers is created with the help of a Dockerfile.
- Pull ubuntu base image:
- root@ubuntu-agent:~# docker image pull ubuntu:14.04
- root@ubuntu-agent:~# docker image ls
- root@ubuntu-agent:~# docker image history 13b66b487594
- docker_file1.jpg
- Create a file: $vim Dockerfile (if you use a different name, then at build time you must pass -f filename)
- Build Image 1:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:1 . (-t is for tag; the dot means the build context is the current path)
- The dot at the end indicates the Dockerfile is at the current path; if a custom name is used, we must pass -f filename.
- custom ubuntu image with the name star_ubuntu:1 has been created.
- docker_file2.jpg
- Both IMAGE IDs are the same: we made no changes while building the custom image, so no additional layer was added.
- Create a file test123 in the new custom image of ubuntu.
- edit Dockerfile:
- root@ubuntu-agent:~# vim Dockerfile
- FROM ubuntu:14.04
RUN touch test123
- save & exit
- Build an image 2:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:2 .
- root@ubuntu-agent:~# docker image history 3e46222c9568
- The image ID has changed; the history shows one added layer (/bin/sh -c touch test123), and the remaining 5 layers are the same as the base image.
- docker_file3.jpg
- Deploy custom image:
- root@ubuntu-agent:~# docker container run -it --name myserver star_ubuntu:2
- docker_file4.jpg
- Run apt-get update; apt-get install apache2, tree, openssh-server & client; change directory to /var/www/html and change index.html; start the apache2 service.
- edit Dockerfile:
- root@ubuntu-agent:~# vim Dockerfile
- FROM ubuntu:14.04
RUN touch test123
RUN apt-get update && apt-get install -y apache2 # update and install apache2
RUN apt-get install -y tree openssh-server openssh-client #install tree and openssh
RUN cd /var/www/html #change directory (note: each RUN starts a new shell, so this cd does not persist to the next instruction)
RUN echo "Welcome to Star Distributors " > /var/www/html/index.html #change index.html file
RUN service apache2 start # start apache2 service
- save & exit
- Build an image 3:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:3 .
- Deploy using star_ubuntu:3
- root@ubuntu-agent:~# docker container run -it --name webserver1 star_ubuntu:3 (then run ls inside to verify)
- Code
- Code
Layer Architecture or cache
- Create a custom image of ubuntu and display layers.
- pull ubuntu:14.04 image:
- root@ubuntu-agent:~# docker image pull ubuntu:14.04
- create a file
- $vim Dockerfile
- layer_architecture1.jpg
- FROM ubuntu:14.04
RUN touch abc
RUN touch abc1
- In the Dockerfile three stages have been defined: use the ubuntu:14.04 base image, create file abc, create file abc1.
- save & exit
- Build image 4
- root@ubuntu-agent:~# docker image build -t star_ubuntu:4 .
- It shows steps 1/3, 2/3, 3/3
- In step 1/3 it pulls ubuntu:14.04 and creates a container 13b66b487594
- In step 2/3 it creates the abc file: it creates a container 718f57b3b44c, runs touch abc, and removes the container
- In step 3/3 it creates the abc1 file: it creates a container 97c3934dbdca, runs touch abc1, and removes the container
- layer_architecture2.jpg
- root@ubuntu-agent:~# docker image ls
- image with image_ID 50872258475b
- Add more tasks in the Dockerfile to check the layer behaviour:
- $vim Dockerfile
- add in the file
- RUN touch abc2
RUN touch abc3
- save & exit
- Build Image 5:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:5 .
- In steps 2 & 3 it did not create containers; it took the layers from cache.
- Steps 4 & 5 were performed by creating containers.
- layer_architecture3.jpg
- Add more tasks in the Dockerfile to check the layer behaviour:
- $vim Dockerfile
- add in the file
- RUN apt-get update -y && apt-get install apache2 -y && apt-get install tree -y (apache2, not httpd, on Ubuntu)
- save & exit
- Build Image 6:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:6 .
- For each new task it creates a container, and after the task completes the container is deleted. Tasks done earlier are taken from the cache.
- Create image without cache:
- When you build an image again from the same Dockerfile, previously done tasks are taken from the cache and only new tasks are run. If you want every task to run from the beginning each time, pass --no-cache to the command.
- root@ubuntu-agent:~# docker image build -t star_ubuntu:1 . (it creates the image star_ubuntu:1); run the same command again.
- root@ubuntu-agent:~# docker image build -t star_ubuntu:1 . (it builds again, but takes all previously done tasks from the cache)
- layer_architecture7.jpg
- If you do not want tasks taken from the cache (e.g. due to bugs in the code) and want to build the image running all tasks for testing:
- root@ubuntu-agent:~# docker image build --no-cache -t star_ubuntu3 . (it runs every task defined in the Dockerfile)
Environment Variable:
- $vim Dockerfile
- ENV Name=stardistributors
ENV Pass=docker@2020
- Instead of entering the values in the code every time, we call them by variable name.
- Build:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:8 .
- Test Environment variable:
- root@ubuntu-agent:~# docker container run -it star_ubuntu:8
- root@58590fd3c0ce:/# env
- Use the Environment value:
- We create a user and set the password by calling the variables:
- $vim Dockerfile
- RUN useradd $Name && echo "$Name:$Pass" | chpasswd
- Build:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:9 .
- Deploy:
- root@ubuntu-agent:~# docker container run -it star_ubuntu:9
- change user
- root@489649978bc8:/# su stardistributors
- code
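- The value can also be overridden at run time with -e (a quick sketch):
- $docker container run -it -e Name=teststar star_ubuntu:8 (-e overrides the ENV value for this container only)
- root@container_ID:/# echo $Name (prints teststar)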
login to container with normal user:
- When you create a container and attach an interactive terminal, you are logged in as root; configure the image so it logs in as a normal user instead.
- $vim Dockerfile
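- The file content is not shown in the notes; based on the login result below, it presumably appends a USER instruction to the existing Dockerfile (a sketch):
USER stardistributors (default login user; assumes the user created in the star_ubuntu:9 image)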
- Save & Exit
- Build:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:10 .
- Deploy:
- root@ubuntu-agent:~# docker container run -it star_ubuntu:10
- stardistributors@e6dce761fb30:/$ Logged in with user
set PWD/Working Path for container
- After deploying container, pwd is set to /var/www/html
- $vim Dockerfile
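- The file content is not shown in the notes; presumably it appends a WORKDIR instruction (a sketch):
WORKDIR /var/www/html (sets the default working directory / pwd of the container)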
- save & exit
- Build
- root@ubuntu-agent:~# docker image build -t star_ubuntu:11 .
- Deploy:
- code
Copy / ADD / CMD into container
- Copy a text file into container:
- root@ubuntu-agent:~# touch data123
- root@ubuntu-agent:~# vi Dockerfile
- COPY data123 /var/www/html
- save & exit
- Build:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:12 .
- Deploy
- root@ubuntu-agent:~# docker container run -it star_ubuntu:12
- layer_architecutre5.jpg
- Copy a tar file into container:
- Create a tar file of /etc/
- root@ubuntu-agent:~# tar -czvf docker.tar.gz /etc/
- root@ubuntu-agent:~#ls
- root@ubuntu-agent:~# vi Dockerfile
- COPY docker.tar.gz /var/www/html
- save & exit
- Build:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:13 .
- Deploy
- root@ubuntu-agent:~# docker container run -it star_ubuntu:13
- ADD docker.tar.gz
- It extracts the tar file and copies the contents to the destination.
- root@ubuntu-agent:~# vi Dockerfile
- ADD docker.tar.gz /var/www/html
- save & exit
- Build:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:14 .
- Deploy
- root@ubuntu-agent:~# docker container run -it star_ubuntu:14
- root@ubuntu-agent:~#ls
- ADD URL
- ADD with a URL downloads the file from the internet into the container destination (note: URL downloads are not auto-extracted).
- root@ubuntu-agent:~# vi Dockerfile
- ADD https://nicepage.com/ht/79260/business-strategy-agency-html-template# /var/www/html
- save & exit
- Build:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:15 .
- Deploy
- root@ubuntu-agent:~# docker container run -it star_ubuntu:15
- root@ubuntu-agent:~#ls
- Run Tree command in container
- Make sure the tree package is installed in the container and defined in the Dockerfile.
- root@ubuntu-agent:~# vi Dockerfile
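- The file content is not shown in the notes; presumably (a sketch, assuming tree was installed in an earlier RUN layer):
CMD ["tree"] (prints the directory tree when the container starts)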
- Build:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:16 .
- Deploy
- root@ubuntu-agent:~# docker container run -it star_ubuntu:16
- Install python and get a python terminal
- root@ubuntu-agent:~# vi Dockerfile
- RUN apt-get update && apt-get install -y python
CMD ["python"]
- save & exit
- Build:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:17 .
- Deploy
- root@ubuntu-agent:~# docker container run -it star_ubuntu:17
- If you define two CMDs in a Dockerfile, then only the last one executes (tree).
- root@ubuntu-agent:~# vi Dockerfile
- RUN apt-get update && apt-get install -y python
CMD ["python"]
CMD ["tree"]
- save & exit
- Build:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:18 .
- Deploy
- root@ubuntu-agent:~# docker container run -it star_ubuntu:18
SSH configuration for containers
- Define SSH configuration in Dockerfile for image so that you can SSH the containers.
- $docker container ls (check: no port is defined)
- $docker container inspect container_ID (check: no port is defined)
- root@ubuntu-agent:~# vi Dockerfile
- RUN apt-get install -y openssh-server
RUN apt-get install -y openssh-client
RUN mkdir -p /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
- save & exit
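- Note (an assumption, not shown in the notes): to actually SSH in as root, the image usually also needs a root password set and root login permitted, e.g.:
RUN echo 'root:docker' | chpasswd (sets a root password; example value only)
RUN sed -i 's/^PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config (allows root password login)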
- Build:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:19 .
- Deploy:
- root@ubuntu-agent:~# docker container run -itd star_ubuntu:19
- Test:
- root@ubuntu-agent:~# docker container ls (check port 22/tcp is configured)
- root@ubuntu-agent:~# docker container inspect 88a278ba4ea8 (to get IP address of container =172.17.0.2)
- root@ubuntu-agent:~# ssh root@172.17.0.2
- layer_architecture6.jpg
- Port Mapping:
- To access the container from outside the network, need to perform port mapping.
- Port mapping with the Docker host IP (192.168.171.100); if it is a public IP, then it can be accessed from outside with the port.
- root@ubuntu-agent:~# docker container run -it -p 5000:22 star_ubuntu:19
- ssh -p 5000 root@192.168.171.100
- Check ports are open:
- sudo netstat -plunt
- sudo ufw allow 22
Custom Name for dockerfile and located in different folder
- By default we use the name Dockerfile to create the docker file and build the image; instead of Dockerfile we can use a custom name.
- While creating the image we pass a dot (.), which means the build context is the current path, where it looks for a file named Dockerfile and executes it.
- Suppose the Dockerfile is moved to another folder (images); then execute the command with the path:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:19 -f /images/Dockerfile .
- or read the Dockerfile from stdin (no -f; no build-context files are available in this form):
- root@ubuntu-agent:~# docker image build -t star_ubuntu:19 - < /images/Dockerfile
- Build image with custom file name.
- root@ubuntu-agent:~# mkdir images
- root@ubuntu-agent:~# cd images
- root@ubuntu-agent:~/images# vim file1
- FROM ubuntu:14.04
RUN touch abc
- Build image 20:
- root@ubuntu-agent:~/images# docker image build -t star_ubuntu:20 -f file1 . (we are currently in the images folder and file1 is in the same folder)
- code
Build image without creating dockerfile
- root@ubuntu-agent:~# docker image build -t star_ubuntu:21 - <<EOF
FROM ubuntu:14.04
RUN apt-get update
RUN touch abc123
EOF
- The image star_ubuntu:21 will be built.
- Code
Build image with file located remotely.
- file is uploaded on https://stardistributors.co.uk/devops/devops_tools/docker/Dockerfiles/docker_image_file1.txt
- Build image:
- root@ubuntu-agent:~# docker image build -t star_ubuntu:22 https://stardistributors.co.uk/devops/devops_tools/docker/Dockerfiles/docker_image_file1.txt (the URL to the plain-text file is passed as the remote Dockerfile/context; -f with a URL is not supported)
- code
- code
Docker init
- Python application:
- It works as a generator for the Dockerfile (and related files): docker init creates a Dockerfile, compose.yaml, .dockerignore, and README.Docker.md for the project.
- A python application (app.py) & requirements.txt file in the root home folder.
- app.py
-
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=8000)
- requirements.txt
- Flask==2.1.0
Werkzeug==2.2.2
gunicorn==20.1.0
- To use docker init you need Docker Desktop 4.26.1 or later; check the docker installation section.
- $docker init
- What application platform does your project use? select python and enter
- What version of Python do you want to use? (3.10.12) enter
- What port do you want your app to listen on? (8000) enter
- ? What is the command you use to run your app (e.g., gunicorn 'myapp.example:app' --bind=0.0.0.0:8000)? python3 app.py
- docker_init2.jpg
- check Dockerfile for more details: $vim Dockerfile
- Start your application by running
- $docker compose up --build
- docker_init3.jpg
- Your application will be available at http://localhost:8000
- Node JS Application:
- create a file $vim app.js
- // app.js
const express = require('express');
const app = express();
const port = 3000;

app.get('/', (req, res) => {
  res.send('Hello, World');
});

app.listen(port, () => {
  console.log(`Server running on http://localhost:${port}`);
});
- create a package-lock.json
- create a package.json
- $docker init
- docker_init1.jpg
- What application platform does your project use? select node and enter
- What version of Node do you want to use? (23.7.0)
- Which package manager do you want to use? npm is automatically selected
- What command do you want to use to start the app? (npm start)
- What port does your server listen on? 3000
- docker_init4.jpg
- compose.yaml, Dockerfile, README.Docker.md files are created.
- $docker compose up --build
- Docker Scout:
Multi Stage Docker File - java based application
- In this method we divide the build into multiple stages. It keeps the final image small, and it is useful in troubleshooting because you do not need to run all stages of the Dockerfile.
- Click for Multistage-Dockerfile-Java Github Repository.
- In the Dockerfile-ubuntu: Single Stage
- FROM ubuntu
RUN apt update -y &&\
apt install default-jdk -y &&\
java -version
RUN apt install maven -y &&\
mvn -version
RUN apt install -y wget
RUN wget https://archive.apache.org/dist/tomcat/tomcat-9/v9.0.8/bin/apache-tomcat-9.0.8.tar.gz
RUN mkdir /opt/tomcat &&\
mv apache-tomcat-9.0.8.tar.gz /opt/tomcat &&\
tar -xvzf /opt/tomcat/apache-tomcat-9.0.8.tar.gz
RUN groupadd tomcat &&\
useradd -s /bin/false -g tomcat -d /opt/tomcat tomcat
RUN cd /opt/tomcat/apache-tomcat-9.0.8 &&\
chmod 777 conf bin &&\
chown -R tomcat webapps/ work/ temp/ logs/ bin/
ENTRYPOINT ["sh","/opt/tomcat/apache-tomcat-9.0.8/bin/startup.sh"]
- base image defined,
- run update and install jdk,
- install maven,
- install wget,
- install tomcat to run the application.
- untar tomcat package
- useradd tomcat
- assign ownership to user
- ENTRYPOINT
- In the above Dockerfile many steps are defined, so it takes a long time to finish; if there is an error it starts from the top (completed steps may come from the cache, but every task still ends up in one image). For every step it creates an intermediate container, performs the task, and deletes the container.
- The size of the resulting image is very big, so we divide the work into multiple stages in one Dockerfile. Divide this code into two stages.
- Dockerfile: Multi Stage
-
# Stage-1 Build
FROM maven as maven
RUN mkdir /usr/src/mymaven
WORKDIR /usr/src/mymaven
COPY . .
RUN mvn install -DskipTests
# Stage-2 Deploy
FROM tomcat
WORKDIR webapps
COPY --from=maven /usr/src/mymaven/target/java-tomcat-maven-example.war .
RUN rm -rf ROOT && mv java-tomcat-maven-example.war ROOT.war
- In stage 1, we take the maven image (which also contains a JDK), create a folder, copy all files from the local host into the container, and run the build to create the artifact (.war) file.
- In stage 2, we take the tomcat image (with tomcat pre-installed), copy the .war file from stage 1 (--from=maven refers to stage 1) into the webapps directory, and replace ROOT so the application is served at the root path.
- Clone the repository:
- root@master:~# git clone https://github.com/AbdulAziz-uk/Multistage-Dockerfile-Java.git
- Build Image with Single Stage Dockerfile:
- root@master:~/Multistage-Dockerfile-Java# docker build -t single_stage_image -f Dockerfile-ubuntu .
- root@master:~/Multistage-Dockerfile-Java#docker images (check the size of image)
- Build Image with Multi Stage Dockerfile:
- root@master:~/Multistage-Dockerfile-Java# docker build -t multi_stage_image -f Dockerfile .
- root@master:~/Multistage-Dockerfile-Java#docker images (468 MB)
- Deploy container with Single stage image:
- Deploy container with multi stage image:
- root@master:~/Multistage-Dockerfile-Java# docker run -d -p 8080:8080 --name web_app1 multi_stage_image
Multi Stage Docker File - NodeJS based application
- Click for Multi Stage Dockerfile NodeJS based application Github Repository.
- Dockerfile: Multi Stage
-
FROM node:12.13.0-alpine as build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 3000
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/build /usr/share/nginx/html
- Clone the repository:
- Build image:
- root@master:~/Multistage-Dockerfile-NodeJS# docker build -t multistage_nodejs -f Dockerfile .
- root@master:~/Multistage-Dockerfile-NodeJS#docker images (size is 193 MB)
- Deploy image:
- root@master:~/Multistage-Dockerfile-NodeJS# docker run -d -p 3000:3000 --name nodejs_app multistage_nodejs
- Access application:
- open browser http://192.168.171.200:3000
- Code
Code
Docker Compose
Intro
- Creating multiple containers with different resources is possible with Docker Compose; it is a YAML/JSON file in which you define the configuration for the containers. A minimal compose file is sketched below.
- The same compose file can be used to delete the resources.
- It works similarly to AWS CloudFormation.
- Requirements:
- Docker should be installed and the daemon must be started and enabled.
- The docker compose package should be installed.
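- A minimal docker-compose.yml sketch (the image and ports are examples only):
version: '3'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
- $docker-compose up -d (creates the network and container), $docker-compose down (deletes the same resources)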
code
code
code
code
code
code
Docker Swarm
code
code
code
code
code
code
code
Integrate Docker with Jenkins
- Create a ubuntu VM
- Update repository:
- Install JDK:
- $java (type java and enter, if java is not installed,it gives options to install)
- $ sudo apt install openjdk-17-jre-headless
- $java --version
- Install Git:
- Install jenkins
- sudo wget -O /etc/apt/keyrings/jenkins-keyring.asc \
https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/etc/apt/keyrings/jenkins-keyring.asc]" \
https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins
- paste the above and enter
- $sudo systemctl start jenkins
- $ sudo systemctl enable jenkins
- $sudo systemctl status jenkins
- $jenkins --version
- Access jenkins:
- open browser and enter http://ip:8080
- Get password from the defined location and enter.
- Install Docker:
- $sudo apt install docker.io
- $sudo service docker restart
- $sudo chmod 666 /var/run/docker.sock (all the users of the computer will have access to run docker commands)
- $sudo systemctl restart docker
- $docker --version
- Install plugins: Docker
- Go to Manage jenkins>plugin>available plugins>search docker
- select docker pipeline, docker plugin, docker-build-step
- Configure Tools, their locations and automatic installers: JDK, Git, Maven, Docker
- Go to manage jenkins > tools >
- JDK installations: Name=jdk17, select install automatically and select install from adoptium.net and choose jdk-17.x.x (click Add JDK to install different version of JDK)
- Git installations: Name=Default, (git was installed so take it default)
- Maven installation: name=maven3, install from apache, select version
- Docker installation: name = docker, select install automatically, select Download from docker.com, docker version = latest
- apply and save.
- Credentials: Configure credentials for Git, docker etc
- Manage jenkins>Security>Credentials>System>Global credentials> Add Credentials:
- Docker:
- scope = Global (Jenkins, nodes, items, all child items, etc)
- username = username of hub.docker.com(aziz27uk)
- Password = enter password
- ID=Docker_hub
- Description=Docker_hub
code
code
code
code
code
code
Projects
Secretsanta (Java Based Application): create an image of secretsanta with docker and deploy on containers. (Ubuntu environment).
- Create a package of secretsanta which can be deployed on the following environment:
- Create Environment:
- secretsanta application on github
- Create ubuntu VM
- Install jdk 11:
- $java (it suggest command to install java, install openjdk-11-jre-headless)
- Install Maven: convert code into artifact
- Install Git:
- Clone application:
- Build package: to get jar file
- root@master:~/secretsanta#mvn package
- BUILD SUCCESS
- root@master:~/secretsanta#cd target
- secretsanta-0.0.1-SNAPSHOT.jar
- Deploy the package:
- root@master:~/secretsanta/target# java -jar secretsanta-0.0.1-SNAPSHOT.jar
- Access the application on browser:
- Create a Image of secretsanta which can be deployed on any environment:
- Create ubuntu VM
- Install docker:
- $sudo su - (root user access)
- $sudo apt install docker.io
- $docker pull hello-world (only root user can access docker, assign permission to user)

- By default the docker command will not run without sudo; to run it as another user, grant permission by adding the user to the docker group.
- When you install docker, a group named docker is created; whoever is added to this group can run docker commands (by default the root user is added).
- When you create a user, a group with the same name is also created; this is the user's primary group. You can add additional groups to the user.
- $sudo usermod -aG docker username (-a appends as a secondary group, -G specifies the group; docker is the group, username is the user); log off and log on, or run
- $newgrp docker (it will add user to docker)
- Create a Dockerfile:
- vim Dockerfile
-
FROM openjdk:8u151-jdk-alpine3.7 (this alpine-based image is lightweight and includes a JDK; search for an alpine image with a JDK on Docker Hub)
EXPOSE 8080
ENV APP_HOME /usr/src/app (defining the variable APP_HOME)
COPY target/secretsanta-0.0.1-SNAPSHOT.jar $APP_HOME/app.jar (copying secretsanta-0.0.1xxx to app.jar)
WORKDIR $APP_HOME (work directory)
ENTRYPOINT exec java -jar app.jar (CMD can also be used, e.g. CMD ["java","-jar","app.jar"])
- Build Docker Image:
- root@master:~/secretsanta# docker build -t appsanta:v1 .
- Warnings: - LegacyKeyValueFormat: "ENV key=value" should be used instead of legacy "ENV key value" format (line 5)
- JSONArgsRecommended: JSON arguments recommended for ENTRYPOINT to prevent unintended behavior related to OS signals (line 11)
- Solution: ENV APP_HOME=/usr/src/app
- Solution: CMD ["java","-jar","app.jar"]
- root@master:~/secretsanta#docker images (appsanta image is created)
- Deploy container using image:
- root@master:~/secretsanta# docker run -d --name santa -p 8080:8080 appsanta:v1
- root@master:~/secretsanta#docker ps
- root@master:~/secretsanta#docker container inspect docker_ID (to get IP and port number)
- Upload image to docker Registry/Repository public:
- Login to Docker Hub:
- root@master:~/secretsanta#docker login (login to docker hub, enter credentials)
- Create tag for the image:
- The tag should be unique; prefix it with the Docker Hub account name (aziz27uk).
- $docker tag appsanta:v1 aziz27uk/secret_santa (tag name is secret_santa for the image appsanta:v1)
- $docker images (aziz27uk/secret_santa listed)
- Push image to Dockerhub:
- $docker push aziz27uk/secret_santa
- Check in Docker Hub:
- Login to hub.docker.com and enter credentials and check image has been pushed.
- Deploy container using image uploaded on docker hub:
- root@ubuntu-agent:~# docker run -d --name santa -p 8090:8080 aziz27uk/secret_santa (host port 8090, container port 8080; the image aziz27uk/secret_santa was uploaded to Docker Hub)
- Access the application on browser:
- Change host port and deploy another container:
- root@master:~/secretsanta# docker run -d --name santa1 -p 8082:8080 appsanta:v1
- You can change the host port but not the container port: the application runs on tomcat, and java applications here run by default on tomcat's port 8080.
- Now two applications are running in two different containers and can be accessed with different ports in the browser.
- code
Secretsanta (Java Based Application): Jenkins CI/CD piepeline with docker to create image, push to docker hub and deploy on container (Ubuntu environment)
Secretsanta (Java Based Application): Jenkins CI/CD piepeline with maven to create package, push to docker hub and deploy on VM (linux environment)
Web-App (NodeJS Application): Create an image of NodeJS-Web-App with Docker and deploy on docker container (ubuntu).
- Nodejs_web_app contain following files:
- Dockerfile (defines the base image and the commands to create the docker image)
- README.md
- about.html
- app.js (code of node js)
- index.html
- package.json (requirements/dependencies defined)
- Create a ubuntu VM
- Update repository:
- Install JDK:
- $java (type java and enter, if java is not installed,it gives options to install)
- $ sudo apt install openjdk-17-jre-headless
- $java --version
- Install Git:
- Install Docker:
- $sudo apt install docker.io
- $sudo service docker restart
- $sudo chmod 666 /var/run/docker.sock (all the users of the computer will have access to run docker commands)
- $sudo systemctl restart docker
- $docker --version
- Clone source code from Github
- Create Dockerfile:
- root@master:~/nodejs-webapp-public# vim Dockerfile
- FROM node:alpine
COPY ./ ./
RUN npm install
EXPOSE 8081
CMD ["npm", "start"]
- FROM node:alpine # lightweight alpine-based node base image
- COPY ./ ./ # copy all source files to the destination
- RUN npm install # install the dependencies defined in package.json
- EXPOSE 8081 # the container's internal port
- CMD ["npm", "start"] # start the app with npm
- save & exit
- Build image :
- root@master:~/nodejs-webapp-public# docker build -t nodejs-webapp:v1 .
- root@master:~/nodejs-webapp-public#docker images
- Deploy container:
- root@master:~/nodejs-webapp-public# docker run -d --name star_nodesjs_webapp -p 8081:8081 nodejs-webapp:v1 (the app listens on 8081 inside the container)
- Access the Application:
- $docker container ls
- $docker container inspect container_ID (take the IP address and port number)
- open browser http://172.17.0.2:8081
- nodejs_webapp1.jpg
- code
Web-App (NodeJS Application): Jenkins CI/CD create an image of NodeJS-Web-App with Docker and deploy on docker container (ubuntu).
- Nodejs_web_app contain following files:
- Dockerfile (defines the base image and the commands to create the docker image)
- README.md
- about.html
- app.js (code of node js)
- index.html
- package.json (requirements/dependencies defined)
- Create a ubuntu VM
- Update repository:
- Install JDK:
- $java (type java and enter, if java is not installed,it gives options to install)
- $ sudo apt install openjdk-17-jre-headless
- $java --version
- Install Git:
- Install jenkins
- sudo wget -O /etc/apt/keyrings/jenkins-keyring.asc \
https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/etc/apt/keyrings/jenkins-keyring.asc]" \
https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins
- paste the above and enter
- $sudo systemctl start jenkins
- $ sudo systemctl enable jenkins
- $sudo systemctl status jenkins
- $jenkins --version
- Access jenkins:
- open browser and enter http://ip:8080
- Get password from the defined location and enter.
- Install Docker:
- $sudo apt install docker.io
- $sudo service docker restart
- $sudo chmod 666 /var/run/docker.sock (all the users of the computer will have access to run docker commands)
- $sudo systemctl restart docker
- $docker --version
- Install plugins: Docker
- Go to Manage jenkins>plugin>available plugins>search docker
- select docker pipeline, docker plugin, docker-build-step
- Configure Tools, their locations and automatic installers: JDK, Git, Maven, Docker
- Go to manage jenkins > tools >
- JDK installations: Name=jdk17, select install automatically and select install from adoptium.net and choose jdk-17.x.x (click Add JDK to install different version of JDK)
- Git installations: Name=Default, (git was installed so take it default)
- Maven installation: name=maven3, install from apache, select version
- Docker installation: name = docker, select install automatically, select Download from docker.com, docker version = latest
- apply and save.
- Credentials: Configure credentials for Git, docker etc
- Manage jenkins>Security>Credentials>System>Global credentials> Add Credentials:
- Docker:
- scope = Global (Jenkins, nodes, items, all child items, etc)
- username = username of hub.docker.com(aziz27uk)
- Password = enter password
- ID=Docker_hub
- Description=Docker_hub
- Create CI/CD pipeline:
- New Item:
- item name=nodejs_webapp, select pipeline and ok
- General:
- Discard old build: Max # of builds to keep = 3
- Pipeline:
- Pipeline script: select Hello World script to start writing script.
- In the script we define: take the source code from GitHub, use docker to build the image from the Dockerfile (by passing .), upload it to Docker Hub, and create a container from the image.
- nodesjs_webapp_script
-
pipeline {
    agent any
    tools {
        jdk "jdk17"
        maven "maven3"
    }
    stages {
        stage('Source_Code') {
            steps {
                git branch: 'main', url: 'https://github.com/AbdulAziz-uk/nodejs-webapp-public.git'
            }
        }
        stage('Build, upload image to Docker Hub & deploy image on docker container') {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'Docker_Hub', toolName: 'docker') {
                        sh "docker build -t aziz27uk/nodejs_webapp:v1 ."
                        sh "docker push aziz27uk/nodejs_webapp:v1"
                        sh "docker run -d --name star_nodejs_webapp -p 8081:8081 aziz27uk/nodejs_webapp:v1"
                    }
                }
            }
        }
    }
}
- Build Now: run ci/cd pipeline
- Access application:
- open browser http://172.17.0.2:8081
- Code
Game 2048
- create a docker file using vscode from ubuntu terminal:
- root@master:~/game-2048# sudo code /Dockerfile --user-data-dir='.' --no-sandbox
- Docker Image:
- Docker Container:
- code
- code
- code
- code
3 Ways to create Docker Image for Java Applications:
Docker integration with Jenkins, Full Stack Pipeline, Install, maven, Trivy, Sonarqube, nexus
- Create a VM and install jdk, git, jenkins, maven.
- Update repository:
- Install JDK:
- $java (type java and enter, if java is not installed,it gives options to install)
- $ sudo apt install openjdk-17-jre-headless
- $java --version
- Install Git:
- Install jenkins
- sudo wget -O /etc/apt/keyrings/jenkins-keyring.asc \
https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/etc/apt/keyrings/jenkins-keyring.asc]" \
https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins
- paste the above and enter
- $sudo systemctl start jenkins
- $ sudo systemctl enable jenkins
- $sudo systemctl status jenkins
- $jenkins --version
- Access jenkins:
- open browser and enter http://ip:8080
- Get password from the defined location and enter.
- Install suggested plugins.
- Install maven:
- Create a VM ubuntu and install jdk 17, docker, Sonarqube.
- Update repository:
- Install JDK: Required jdk 17 for sonarqube
- $java (type java and enter, if java is not installed,it gives options to install)
- $ sudo apt install openjdk-17-jre-headless
- $java --version
- Install Docker:
- $sudo apt install docker.io -y
- Give permission to user to run docker commands
- $sudo usermod -aG docker username
- $newgrp docker
- Create docker container of sonarqube.
- $docker pull sonarqube:lts-community
- $docker run -d --name sonar -p 9000:9000 sonarqube:lts-community
- Access sonarqube: open browser http://192.168.171.200:9000
- username = admin
- password = admin (default)
- Create a VM ubuntu and install docker, Nexus.
- Update repository:
- Install docker
- $sudo apt install docker.io -y
- Give permission to user to run docker commands
- $sudo usermod -aG docker username
- $newgrp docker
- search for docker nexus image in google
- sonatype/nexus3
- $ docker run -d -p 8081:8081 --name nexus sonatype/nexus3
- Access nexus:
- open browser http://192.168.171.200:8081
- login: username=admin, password= retrieve from /nexus-data/admin.password
- Get into the container: root@master:~# docker exec -it ba6eaa7a3698 /bin/bash
- $cd sonatype-work > $cd nexus3 > $cat admin.password and copy the password.
- Install plugins in Jenkins: Go to manage jenkins/plugins
- JDK 17: Eclipse Temurin installer
- SonarQube Scanner:
- Docker, Docker pipeline, docker-build-step
- OWASP dependency check
- config file provider
- maven integration
- nexus artifact uploader
- pipeline stage view
- Configure the plugins which installed.
- Go to manage jenkins > tools
- JDK Installation: Name=JDK17, select install automatically, select Install from adoptium.net, select version jdk-17.0.15+6
- Git: Name=Default,
- SonarScanner for MSBuild installations: name=sonar-scanner, install automatically, choose latest version
- Maven Installation: Name=maven3, install automatically, version 3.6.1
- Dependency-Check Installation: Name=DC, install automatically, add installer and select install from github.com, version=dependency-check 6.5.1
- Docker Installation: Name=docker, install automatically, Add installer=run shell command and enter
- sudo apt update
- sudo apt install docker.io -y
- sudo usermod -aG docker username
- newgrp docker
- Tool Home = /usr/
- Create Pipeline: New item
- Enter an item name = Full-Stack-CICD, pipeline
- General: Discard old build: Max # of builds to keep = 3
- Definition: Pipeline Script: select Hello World and make the following script.
- Github Repository: BoardgameListingApp
- Click to get yaml file. Full-Stack-CICD.yml
- odcInstallation: 'DC' runs the OWASP dependency check using the DC installation; it takes time to download the NVD (National Vulnerability Database) data from 2002 to the latest year.
- Configure SonarQube Server:
- Generate token in sonarqube first and copy the token.
- Go to sonar qube/administrations/security/users/Generate tokens/Name=sonar_token Generate, copy the token
- Add the token in the credentials:
- Go to Manage Jenkins/Security/Credentials/Global/+Add Credentials: kind = Secret text, scope = Global (jenkins, nodes, items, all child items, etc), secret = paste token, ID = sonar-token, description = sonar-token.
- Configure SonarQube Server
- Go to Manage Jenkins/System/SonarQube servers/AddSonarQube: Name=sonar, URL=http://192.168.171.200:9000, server authentication token=sonar-token (defined in add the token in the credentials)
- Stuck at Finished Jar Analyser: perform the following POM.xml configuration and run the build again.
- Configure POM.xml: Define the repositories of nexus repository in pom file.
- Source code has been uploaded on Github. go to github repository BoardgameListingApp
- In sonar qube/Browse/copy the path. http://192.168.171.200:8081/repository/maven-releases/

- open POM.xml edit and enter the url of Nexus maven-releases and maven-snapshots in <distributionManagement>. commit changes.

- Run Build now.

- Go to SonarQube >Project and check the outcome of Boardgame Bugs and Vulnerabilities.
- Configure Quality Gate:
- Get the token: go to SonarQube/Administration/Configuration/Webhooks/Create:
- Name=Jenkins, URL=http://192.168.171.200:8080/sonarqube-webhook/ , create.
- Add a stage to the pipeline script: to generate the code, open Pipeline Syntax and search waitForQualityGate (wait for SonarQube analysis to be completed and return quality gate status), Server authentication = sonar-token, click Generate Pipeline Script; copy the generated waitForQualityGate abortPipeline: false and paste it into the pipeline script.

-
stage('Quality Gate') {
steps {
waitForQualityGate abortPipeline: false
}
}
- This code will test the Quality Gate in SonarQube.
- Generate the settings.xml file
- Go to Manage Jenkins/Managed Files/ + Add a new Config/ select Global Maven settings.xml, ID=global-maven-settings>next, file is created.
- Enter the user credentials; two ways: either add Server Credentials, or search for <server>, uncomment it, and add the details.
- <server>
<id>maven-releases</id>
<username>admin</username>
<password>Trustu786</password>
</server>
<server>
<id>maven-snapshots</id>
<username>admin</username>
<password>Trustu786</password>
</server>
- Click Submit.
- Upload Artifacts to Nexus: Method1
- Install the plugin if missing: Pipeline Maven Integration Plugin; then you will see withMaven: Provide Maven environment in Pipeline Syntax.
- In Pipeline Syntax, choose withMaven: Provide Maven environment; Maven=maven3, JDK=jdk17, Global Maven Settings Config=MyGlobalSettings, uncheck Maven Traceability, click Generate Pipeline Script and copy the script.
- Paste the script in Pipeline.
- Define commands to push the artifact to nexus: at //some block
- Go to Pipeline Syntax and search nexusArtifactUploader: Nexus Artifact Uploader.
- Nexus Version=NEXUS3, Protocol=HTTP, Nexus URL=192.168.171.200:8081.
- Credentials: provide the nexus credentials (go to jenkins/credentials/Global/Add Credentials: kind=Username with Password, scope=Global (Jenkins, nodes, items, all child items, etc), username=admin, Password=Trustu786, ID=nexus_cred).
- GroupId=com.javaproject (get the groupId details from pom.xml), version=0.0.1, Repository=maven-releases.
- Artifact Add: ArtifactId=database_service_project, type=jar, File=/var/lib/jenkins/workspace/Full-Stack-CICD/target/database_service_project-0.0.1.jar (go to pipeline/#buildnumber/workspaces, open /var/lib/jenkins/workspace/Full-Stack-CICD/target, and copy the jar path).
- Click Generate Pipeline Script, copy it, and paste it into the pipeline script.
-
stage('Quality Gate' ) {
steps {
waitForQualityGate abortPipeline:false
}
}
stage('Deploy Artifacts to Nexus' ) {
steps {
withMaven(globalMavenSettingsConfig: 'global-maven-settings', jdk: 'jdk17', maven: 'maven3', mavenSettingsConfig: '', traceability: false) {
nexusArtifactUploader artifacts: [[artifactId: 'database_service_project', classifier: '', file: '/var/lib/jenkins/workspace/Full-Stack-CICD/target/database_service_project-0.0.1.jar', type: 'jar']], credentialsId:'nexus_cred',groupId:'com.javaproject', nexusUrl:'192.168.171.200:8081', nexusVersion:'nexus3', protocol:'http', repository:'maven-releases', version:'0.0.1'
}
}
}
- Method 2: Deploy artifact with mvn package:
-
stage('Deploy Artifacts to Nexus' ) {
steps {
withMaven(globalMavenSettingsConfig: 'global-maven-settings', jdk: 'jdk17', maven: 'maven3', mavenSettingsConfig: '', traceability: false) {
sh "mvn package"
}
}
}
- Either one can be used to deploy the artifact.
- Apply & Save
- Run Build now
- Docker Build Image:
- Configure Docker Hub credentials: Go to manage jenkins/credentials/global/new credentials/Scope=Global (jenkins, nodes, items, all child items, etc), username=aziz27uk, password=Yez.... ID=Docker_Hub, Description=Docker_Hub.
- Open Pipeline Syntax and choose withDockerRegistry: Sets up Docker registry endpoint; Docker registry URL = not needed if the repository is public, Registry credentials = select aziz27uk/****** (Docker_Hub), Docker Installation = select docker, then Generate Pipeline Script and copy the script.
- Docker commands are always defined inside a script {} block.
-
stage('Docker Build Image' ) {
steps {
script {
// This step should not normally be used in your script. Consult the inline help for details.
withDockerRegistry(credentialsId: 'Docker_Hub', toolName: 'docker') {
sh "docker build -t board_cicd:latest ."
sh "docker tag board_cicd:latest aziz27uk/board_cicd:latest"
}
}
}
}
- Trivy to scan docker image for vulnerabilities.
- Install Trivy and configure:
- There are two ways: either install trivy locally in your VM with the following steps, or install it through Jenkins tools.
- Step 1: Install Dependencies: Run the following command to install necessary dependencies:
- $sudo apt-get install wget apt-transport-https gnupg lsb-release
- Step 2: Add Trivy Repository Key: Add the Trivy repository key to your system's trusted keyring:
- $wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
- Step 3: Add Trivy Repository: Add the Trivy repository to your system's list of package sources:
- $echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee -a /etc/apt/sources.list.d/trivy.list
- Step 4: Update Package List: Update the package lists to include the newly added Trivy repository:
- $sudo apt-get update
- Step 5: Install Trivy:
- $sudo apt-get install trivy -y
- $trivy -v
- Install Trivy through Jenkins custom tools.
- Trivy image scan code for pipeline script.
-
stage('Trivy Image Scan' ) {
steps {
sh "trivy image aziz27uk/board_cicd:latest"
}
}
- Push image to Docker Hub.
-
stage('Docker Push Image' ) {
steps {
script {
withDockerRegistry(credentialsId: 'Docker_Hub', toolName: 'docker') {
sh "docker push aziz27uk/board_cicd:latest"
}
}
}
}
- Deploy image to Docker Container:
- Check the Dockerfile is there in GitHub; if not, then create one.
-
FROM adoptopenjdk/openjdk11
EXPOSE 8080
ENV APP_HOME /usr/src/app
COPY target/*.jar $APP_HOME/app.jar
WORKDIR $APP_HOME
CMD ["java", "-jar", "app.jar"]
-
stage('Deploy application to Docker Container' ) {
steps {
script {
withDockerRegistry(credentialsId: 'Docker_Hub', toolName: 'docker') {
sh "docker run -d -p 8085:8080 aziz27uk/board_cicd:latest"
}
}
}
}
- Run Build now.
- Error: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.29/auth: dial unix /var/run/docker.sock: connect: permission denied
- Solution: run $sudo chmod 666 /var/run/docker.sock, or instead of that command (which opens the socket to everyone) use $sudo chown root:docker /var/run/docker.sock.
- Artifact uploaded to the nexus repository in maven-releases
- Image pushed to docker hub:
- Container is created: $docker container ls, $docker container inspect container_ID
- Access application: http://192.168.171.200:8085 (we defined host port 8085 in the run command)
Code
Code
Code
Code
Code
MongoDB: Communication between two containers, frontend and backend:
- FrontEnd container = Mongo Express: access MongoDB through a URL.
- BackEnd container = MongoDB: the actual database, accessed through the front end.
- code
- code
MongoDB: volume mount for database using docker compose
- Github repository: cd
- Docker Compose to set up a MongoDB container and a MongoDB Express (Mongo-Express) container.
- Create a Ubuntu VM.
- Install Docker
- Install Docker compose
- Create docker compose file:
- $vim docker-compose.yml
-
version: '3'
services:
  mongodb:
    image: mongo
    container_name: mongodb
    networks:
      - mongo-network
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=123
    volumes:
      - mongodb_data:/data/db   # named volume persisting the database (described below)
  mongo-express:
    image: mongo-express
    container_name: mongo-express
    networks:
      - mongo-network
    ports:
      - "8081:8081"
    environment:
      - ME_CONFIG_MONGODB_SERVER=mongodb
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=123
      - ME_CONFIG_BASICAUTH_USERNAME=admin
      - ME_CONFIG_BASICAUTH_PASSWORD=123
networks:
  mongo-network:
    driver: bridge
volumes:
  mongodb_data:
- check code with YAML Lint.
- We define two services: mongodb and mongo-express.
- The mongodb service uses the official MongoDB image and specifies a named volume mongodb_data for persisting MongoDB data.
- We set environment variables for the MongoDB container to create an initial admin user with a username and password.
- The mongo-express service uses the official Mongo Express image and connects to the mongodb service using the ME_CONFIG_MONGODB_SERVER environment variable.
- We also set environment variables for the Mongo Express container to configure it.
- Now, navigate to the directory containing the docker-compose.yml file in your terminal and run:
- $docker-compose up
- Docker Compose will download the necessary images (if not already downloaded) and start the MongoDB and Mongo Express containers. You can access the MongoDB Express web interface at http://localhost:8081 and log in using the MongoDB admin credentials you specified in the docker-compose.yml file.
- The data for MongoDB will be stored in a Docker named volume named mongodb_data, ensuring that it persists even if you stop and remove the containers.
- To stop the containers, press Ctrl+C in the terminal where they are running, and then run:
- $docker-compose down
- This will stop and remove the containers, but the data will remain in the named volume for future use. Create a new container and mount the volume to reuse the data (see the sketch below).
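- A quick check (a sketch):
- $docker volume ls (the mongodb_data volume is still listed)
- $docker-compose up -d (new containers reuse the existing volume and its data)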
Interview
What is Docker, container vs VM,
- Docker is an open source containerization platform on which we can run any application. It is lightweight, fast, and uses fewer hardware resources.
- An application can be migrated from one o/s to another by simply copying the application.
- Docker is mainly used to build images, deploy applications on containers, and provide high availability.
- Containers are very lightweight: they do not have a complete o/s, only the required libraries and application dependencies, whereas a VM has a complete o/s.
Docker Lifecycle
- If there is a requirement to containerize an application, it starts with writing a Dockerfile to run the application (base o/s image, dependencies to run the application, etc.) and creating an image.
- Images are then deployed on containers.
Docker components
- Client: the docker CLI; you execute docker commands through it.
- Docker Host: the docker daemon receives all commands and executes them.
- Registry: a repository in which images are stored.
Docker ADD vs COPY, Networking types, isolating conainers
- Docker ADD can copy files from a URL (like wget) and auto-extracts local tar archives, whereas COPY can only copy files from the host system into the container.
- Networking type:
- Bridge: the default networking in docker; when you install docker on a VM (host) it creates a virtual adapter docker0 and assigns IP addresses to containers.
- Overlay: used when you have multiple hosts, e.g. docker swarm or kubernetes.
- Host: the container shares the host's network stack directly; there is no separate container IP and no port mapping is needed.
- Macvlan: configured for special requirements (it gives containers their own MAC addresses on the physical network).
- Isolating containers: containers are isolated by placing them on different networks; containers on the same network can communicate, while communication across networks requires port mapping via the host IP.
- code
Docker daemon
- Docker runs as a single daemon process, a single point of failure: if it goes down, then all the applications are down.
- The docker daemon runs as the root user; any process running as root can have security issues.
- Podman can be used as an alternative; it does not run as a root daemon.
- code
code
code
Videos
Sanjay Dahiya
- Introduction to Docker and Docker Container
- Basic Commands, create, start, stop, remove Containers
- Container port mapping, rename, copy, kill, wait, pause, unpause, prune, export
- Create Image, Push and Pull From Docker HUB
- Docker Volumes, Mount
- Host your own docker registry
- Host your own docker registry Lab Part-1 AWS
- Setup Private Docker Registry_ Secure SSL Certificate- HTTPS
- Dockerfile Part-1
- Dockerfile Part-2
- Dockerfile Part-3
- Dockerfile Part-4
- Dockerfile Part-5
- Dockerfile Part-6
- Docker Compose Part-1
- Docker Compose Part-2
- Docker Compose Part-3
- Docker Network part -1
- Docker Network part -2
- Docker Network part -3
- Docker Network part -4
- Docker Network part -5
- Docker Swarm part-1
- Docker Swarm part-2
code
code
code
code
code
code
- intro
- hub.docker.com (username = aziz27uk or aziz27uk@yahoo.co.uk, pass: Y......!)
- Docker installation: Windows, Linux, user permission
- Link two containers
- Docker Host, Client, Images, Container, Daemon, Commands
- wordpress and mysql link
- create container tomcat, jenkins, ubuntu, nginx, httpd, mysql
- CICD tier3 architecture (jenkins, tomcat, tomcat) (dev, test, prod)
- multi container link, --link, Docker Compose
- jenkins master and slave
- Deploy tomcat with docker compose
- LAMP Architecture
- Deploy wordpress and mysql with docker compose
- Testing environment with selenium hub
- Deploy jenkins master / slave with docker compose
- Simple Docker Volume, Sharing docker volume/volume container
- LAMP Architecture with docker compose
- volume container
- CICD environment with docker compose
- docker custom image: docker commit, docker file, scenario1, scenario2, scenario3 (cache busting), scenario4
- Docker Registry
- Image Layers
- Docker swarm
Docker Introduction:
- Docker can be installed on
- Docker Desktop for Windows (Windows 10 Pro 64-bit, Windows 2016 server edition): https://docs.docker.com/docker-for-windows/install/
- Once docker is installed it activates Hyper-V, and you cannot run another hypervisor.
- Once it is installed, use PowerShell to run docker commands.
- Linux:
Multi container link: Linking between containers can be done by
- docker --link
- docker compose
- docker networking
- python script
docker run --link: create two containers (busybox, a minimal Linux image) and create a link between them.
Scenario 1:
- Step1: #docker run --name bb1 -it busybox
- To come out of the interactive terminal without exiting/stopping the container: Ctrl-P then Ctrl-Q.
- Step2: #docker run --name bb2 -it --link bb1:bb1link busybox
- ping bb1 (you are in bb2's terminal, pinging bb1)
- To ping from bb1 to bb2: the link is one-directional, so bb1 cannot resolve bb2 by name; see the sketch below.
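- A sketch of making the ping work both ways: on a user-defined network, container names resolve in both directions:
- #docker network create testnet
- #docker network connect testnet bb1
- #docker network connect testnet bb2
- Now each container can ping the other by name.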
Scenario 2: create a Development Environment for a wordpress container with mySQL database,
- Create 2 containers, one for mySQL and another for wordpress, and create a link between them.
- wordpress is a php based application used by developers to create a website; it is integrated with a database to store client input.
- Step1: #docker run --name mysql -d -e MYSQL_ROOT_PASSWORD=India123 mysql:5
- On hub.docker.com search mysql, click mysql, and check the environment variables defined in the description.
- Click tags to get the versions of mySQL; above we are using tag/version 5.
- Step2: get into the MySQL container:
- #docker exec -it mysql bash
- Step3: connect to MySQL (check the description on the mysql Docker Hub page for commands):
- #mysql -u root -p (enter the root password set above, India123)
- You will get the mysql prompt.
- mysql> show databases; (it will show the databases information_schema, mysql, performance_schema, sys)
- mysql> use sys;
- Create the tables: search the web for "emp and dept table for mysql" (e.g., justinsomnia) and get the SQL code.
- Copy the emp and dept code and paste it; it will create two tables.
- mysql> select * from emp;
- mysql> select * from dept;
- Step4: create the WordPress container and link it with mysql (define a port, as WordPress is accessed in the browser):
- #docker run --name starwordpress -d -p 9090:80 --link mysql:mysqldatabase wordpress
- Step5: access WordPress in any browser at http://publicIP:9090
- Provide the database information (database host: mysqldatabase, user: root, password: India123) and submit.

Scenario3: create a CICD 3-tier architecture environment in docker containers.
- Step1: create a jenkins container:
- #docker run --name development -d -p 5050:8080 jenkins/jenkins
- Check in any browser: http://publicIP:5050
- To get the initial password, get into jenkins:
- #docker exec -it development bash
- #cat /var/jenkins_home/secrets/initialAdminPassword
- Step2: create a tomcat container and link it with jenkins (development):
- #docker run --name testing -d -p 6060:8080 --link development:jenkins tomcat:9.0
- Step3: create another tomcat container and link it with jenkins:
- #docker run --name production -d -p 7070:8080 --link development:jenkins tomcat:9.0
- #docker container ls (3 containers running: 1 jenkins and 2 tomcat)
- Check the testing server in any browser: http://publicIP:6060
- Check the production server in any browser: http://publicIP:7070
Error: "The origin server did not find a current representation for the target resource or is not willing to disclose that one exists", or a 404 error.
Resolve: (the tomcat image ships its default apps in webapps.dist, not webapps)
- Get into the tomcat container: #docker exec -it testing bash
- #cd /usr/local/tomcat
- #cp -r webapps.dist/* webapps
- Now access tomcat in the browser.
Scenario 4: create a jenkins master and a jenkins slave. Refer here
- Step1: #docker run --name master -d -p 6060:8080 jenkins/jenkins
- Step2: #docker run --name slave -it --link master:jenkins ubuntu
- Step3: download the slave.jar file from the master.
- #wget master:8080/jnlpJars/slave.jar (use the container port 8080, not the host-mapped 6060, since the link works inside the Docker network)
- If wget is not found, install it:
- #apt-get update
- #apt-get install -y wget
- #wget master:8080/jnlpJars/slave.jar
- Step4: log in to Jenkins and install the docker plugin:
- Manage Jenkins > Manage Plugins > Available > docker
LAMP Architecture:
- A LAMP environment can be created for developers building a website with open-source technologies. In these notes:
- L = Linux o/s
- A = application development using PHP
- M = backend database running MySQL
- P = application server running Apache Tomcat
- On the Linux machine, run MySQL, PHP and Tomcat containers and link them to each other.
LAMP Architecture Lab:
- Step1: create a Linux ubuntu instance.
- Log in to the ubuntu instance and install docker.
- Step2: create a MySQL container (see the sketch below).
- Step3: create a tomcat container linked with MySQL (see the sketch below).
- Step4: create a PHP container linked with MySQL & tomcat:
- #docker run --name myphp -d --link mydb:mysql --link apache:tomcat php:7.2-apache
- #docker container ls (3 containers running)
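The notes do not spell out the commands for Step2 and Step3, but the --link flags in Step4 imply container names mydb and apache; a sketch along these lines (image tags assumed from the earlier scenarios):
- #docker run --name mydb -d -e MYSQL_ROOT_PASSWORD=India123 mysql:5
- #docker run --name apache -d -p 6060:8080 --link mydb:mysql tomee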
Testing environment with Selenium hub: create a selenium hub container and link it with two node containers (chrome, firefox, etc.).
- Testers should be able to run Selenium automation programs for testing the application on multiple browsers.
- On hub.docker.com search for the selenium/hub image and click the selenium/hub link for more details.
- #docker run --name hub -d -p 4444:4444 selenium/hub
- On hub.docker.com search for selenium/node-chrome-debug (it is an ubuntu container with chrome):
- #docker run --name chrome -d -p 5901:5900 --link hub:selenium selenium/node-chrome-debug
- On hub.docker.com search for selenium/node-firefox-debug (it is an ubuntu container with firefox):
- #docker run --name firefox -d -p 5902:5900 --link hub:selenium selenium/node-firefox-debug
Note: the chrome and firefox containers are GUI containers. To see the GUI of the chrome/firefox containers, install VNC Viewer and access them with hostPublicIP:port.
- Download and install VNC Viewer.
- public_ip_dockerhost:5901 for chrome and public_ip_dockerhost:5902 for firefox.
Docker Compose:
- Docker Compose creates multi-container architectures from a YAML file. To configure/deploy multiple containers and link them to each other in one step, define everything in a YAML file and execute it.
- Install docker compose on the docker host: https://docs.docker.com/compose/install/
- #curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
- #chmod +x /usr/local/bin/docker-compose
- #docker-compose --version
Deploy a tomcat container with docker compose:
- #vim docker-compose.yml (or create it with another name, e.g. aziz.yml)
---
version: '3'
services:
  mytomee:
    image: tomee
    ports:
      - 5050:8080
...
- To check that the key/value pairs defined above are valid YAML, use http://www.yamllint.com
- #docker-compose up -d
- #docker-compose -f aziz.yml up -d (if you created the file with another name)
- #docker-compose -f aziz.yml down (delete the containers)
Deploy mysql and wordpress, linked to each other, with docker compose:
- #vim wordpress.yml
---
version: '3'
services:
  mydb:
    image: mysql:5
    environment:
      MYSQL_ROOT_PASSWORD: India123
  mywordpress:
    image: wordpress
    ports:
      - 6060:80
    links:
      - mydb:mysql
...
- #docker-compose -f wordpress.yml up -d (it will deploy mysql and wordpress and link them to each other)
- http://publicIP:6060 (access wordpress)
- #docker exec -it root_mydb_1 bash (check the container name with docker container ls; compose prefixes it with the project/directory name)
- #docker-compose -f wordpress.yml down (delete all the containers)
Deploy jenkins master/slave through docker-compose:
- #vim masterslave.yml
- stdin_open: true and tty: true keep the ubuntu container from exiting (the equivalents of -i and -t).
---
version: '3'
services:
  master:
    image: jenkins/jenkins
    ports:
      - "5050:8080"
  slave:
    image: ubuntu
    stdin_open: true
    tty: true
...
- Access jenkins at publicIP:5050
- Retrieve the administrator password:
- #docker exec -it root_master_1 bash
- #cat /var/jenkins_home/secrets/initialAdminPassword
- Set up passwordless connectivity between master and slave.
Deploy LAMP architecture with docker compose:
- L: Linux
- A: application development (PHP)
- M: backend database (MySQL)
- P: application server (Apache Tomcat)
Linux is already installed on the AWS instance/docker host; install docker and docker compose on the docker host, then create a YAML file to deploy php, mysql and apache tomcat.
- #vim lamp.yml
---
version: '3'
services:
  mydb:
    image: mysql:5
    environment:
      MYSQL_ROOT_PASSWORD: India123
  apache:
    image: tomee
    ports:
      - 6060:8080
    links:
      - mydb:mysql
  php:
    image: php:7.1-apache
    links:
      - mydb:mysql
      - apache:tomcat
...
- #docker-compose -f lamp.yml up -d
Exercise: deploy a CICD environment using docker compose where a jenkins container is linked with two tomcat containers (a sketch follows).
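A minimal sketch of such a compose file, following the three-tier scenario above (service names, ports and image tags are assumptions):
---
version: '3'
services:
  development:
    image: jenkins/jenkins
    ports:
      - 5050:8080
  testing:
    image: tomcat:9.0
    ports:
      - 6060:8080
    links:
      - development:jenkins
  production:
    image: tomcat:9.0
    ports:
      - 7070:8080
    links:
      - development:jenkins
...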
Docker volume:
- Simple docker volume
- Docker volume container (sharable)
Simple docker volume:
- Data of a docker container can be stored on the docker host and retrieved after the container is deleted.
- Step1: create a folder as root:
- #mkdir /dockervolume (inside the container this is a mount point, not the actual storage location on the docker host; like a drive letter in Windows when you attach a USB stick, the mount point stays the same while the data actually lives elsewhere. The real location on the host can be retrieved with the docker inspect command.)
- Step2: create a container and attach the volume:
- #docker run --name myubuntu -it -v /dockervolume ubuntu
- Ctrl+P, Ctrl+Q
- Step3: log on to the ubuntu container and create some files in the mounted /dockervolume; only data stored in the volume folder will be preserved.
- #touch aziz11 aziz12 aziz13 aziz14
- Step4: #docker container inspect myubuntu
- Locate "Mounts" and copy the Source location; the data is stored there.
- Step5: delete the container, go to the above location, and you can still find the data files.
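To pull out just the host path instead of reading the full inspect output, a Go-template filter works too (a convenience, not from the original notes):
- #docker container inspect -f '{{ range .Mounts }}{{ .Source }}{{ end }}' myubuntu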
Sharing the volume between different containers: the docker volume we created can be shared with other containers.
- #docker run --name myubuntu1 -it --volumes-from myubuntu ubuntu
- #cd /dockervolume
- #ls (you will see the files aziz11 aziz12 aziz13 aziz14)
- Create new files: #touch abdul11 abdul12 abdul13 abdul14
Create another docker container sharing the same volume:
- #docker run --name myubuntu2 -it --volumes-from myubuntu1 ubuntu
- #cd /dockervolume
- #ls (you will see the files aziz11 aziz12 aziz13 aziz14 and abdul11 abdul12 abdul13 abdul14)
- #touch mohammed11 mohammed12 mohammed13 mohammed14
Connect to myubuntu, myubuntu1 or myubuntu2 and you will see all the files, since the volume is shared among them: aziz11... abdul11... mohammed11... They are all stored in /dockervolume and available in every container sharing the volume.
- #docker container attach containerID (make sure you are at /)
- #cd /dockervolume
- #ls (all the files are there: aziz11... abdul11... mohammed11...)
Before deleting the docker containers, get the path from myubuntu (its volume has been shared); using that path you can retrieve the data.
Create a volume container / sharable volume container: this volume can be attached to a container that has some data.
- #docker volume create myvolume
- #docker volume inspect myvolume
- "Mountpoint": "/var/lib/docker/volumes/myvolume/_data" (the location)
- Copy/create any file in the myvolume location.
- Create a tomcat container and attach myvolume at /tmp:
- #docker run --name mytomee1 -d -P -v myvolume:/tmp tomee
- #docker exec -it mytomee1 bash
- #cd /tmp
- #ls
- #docker volume rm myvolume (to delete the volume)
- #docker volume prune (to delete all unused volumes)
Docker custom image: a custom image can be created with the following methods:
- the docker commit command
- a dockerfile
Docker commit: you create an image of an existing docker container with all software/applications installed, so it can be used later to create containers.
- Step1: create a container:
- #docker run --name myubuntu -it ubuntu
- #apt-get update
- #apt-get install -y git
- #git --version
- Step2: create an image/snapshot of the above container myubuntu:
- #docker commit myubuntu ubuntugit (an image of myubuntu will be created with the name ubuntugit; this image has git)
- #docker images (the ubuntugit image is created)
- Step3: create a container using the ubuntugit image:
- #docker run --name starubuntu -it ubuntugit
- #git --version (git is installed in the image)
Dockerfile: it is a simple text file in which you define the following keywords (case sensitive):
- FROM: specifies the base image from which the new image has to be created.
- MAINTAINER: the name of the organization or author who created this dockerfile.
- CMD: specifies the initial command that should be executed when the container starts.
- ENTRYPOINT: specifies the default process that should be executed when the container starts; it can also accept arguments from the CMD instruction.
- RUN: runs linux commands within the container; generally used for installing software into the image.
- USER: specifies the default user who should log in to the container.
- WORKDIR: specifies the default working directory in the container.
- COPY: copies files from the host machine to the container.
- ADD: copies files from host to container; it can also download files from remote servers.
- ENV: specifies environment variables that should be passed to the container.
- EXPOSE: specifies the internal port of the container.
- VOLUME: specifies the default volume that should be attached to the container.
- LABEL: gives a label to the image.
- STOPSIGNAL: specifies the signal that will be sent to the container to stop it.
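A short example pulling several of these keywords together (a sketch; the base image, app.jar and paths are placeholders, not from these notes):
FROM openjdk:11
MAINTAINER aziz
ENV APP_HOME=/opt/app
WORKDIR /opt/app
COPY app.jar .
EXPOSE 8080
CMD ["java", "-jar", "app.jar"]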
Scenario1: create a custom image of nginx.
- Create a dockerfile taking nginx as the base image (image from docker hub) and specify the maintainer as aziz.
- $sudo su -
- #vim dockerfile
FROM nginx
MAINTAINER aziz
- :wq!
- Construct an image from the above dockerfile:
- #docker build -t aziznginx . (-t stands for tag, "." refers to the current working directory; aziznginx is the new image name)
Scenario2: create a custom image of ubuntu and git
Scenario3: cache busting
- Whenever an image is built from a dockerfile, docker checks its build cache to see which steps/instructions were already executed; those steps are not re-executed. It executes only the new instructions. This is a time-saving mechanism provided by docker.
- The disadvantage: because previous steps are not re-executed, the cached apt-get update is reused, so we can end up installing packages from a repository index that was last refreshed long ago (the apt-get update we run before installing a package will not actually run again).
- Chaining with && forces the update and the install to run as a single instruction, so they are re-executed together.
Create a dockerfile that builds an ubuntu image, updates the repository and installs git:
FROM ubuntu
MAINTAINER logiclabs
RUN apt-get update
RUN apt-get install -y git
- Build the image: #docker build -t myubuntu .
- All steps 1, 2, 3, 4 are executed freshly, pulling from docker hub.
- Amend the dockerfile and add RUN apt-get install -y tree:
FROM ubuntu
MAINTAINER aziz
RUN apt-get update
RUN apt-get install -y git
RUN apt-get install -y tree
- Build the image: #docker build -t myubuntu1 .
- Observe the output: steps 2, 3 and 4 use the cache; only step 5 is executed freshly.
- If you install the tree package a few months later, the drawback is that the repository index is not refreshed; tree is installed against the previously cached update. To avoid this, use && in the dockerfile to chain the steps you want executed together.
FROM ubuntu
MAINTAINER aziz
RUN apt-get update && apt-get install -y git tree
- Build the image: #docker build -t myubuntu2 .

Scenario4: for a CICD environment we install Java, jenkins, git and maven for a java-based project. Java is mandatory for jenkins, but git and maven are requirements of the code.
- #docker run --name myjenkins -d -P jenkins (it will create a jenkins container)
- You can access the jenkins home page using the public IP and mapped port number; to get the password, exec into a bash terminal.
- git and maven are not installed in the container. If you try to install them you get a permission error; if you use sudo you get "sudo: command not found".
- To run sudo, the user must be in the sudoers file (visudo). #whoami (the jenkins user is not in the sudoers file).
- visudo (error: command not found)
- #su - root
- It prompts for the root password (no password is set).
- Exit back to the docker host (the AWS instance).
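One common way out (a sketch, not spelled out in these notes) is to build a custom jenkins image that switches to root just for the install:
FROM jenkins/jenkins
USER root
RUN apt-get update && apt-get install -y git maven
USER jenkins
Then build and run it in place of the stock image: #docker build -t myjenkins-tools . followed by #docker run -d -P myjenkins-tools (the image name myjenkins-tools is illustrative).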
Image layers: docker images are a combination of layers; when you pull an image from docker hub, a number of layers are downloaded.
- When you first pull an image on a fresh AWS instance, all of its layers are downloaded.
- When you pull a second image, fewer layers are downloaded, and so on for further pulls: layers the second image shares with the first are not downloaded again; only the new layers of the second image are.
- When you delete the first image, only the layers not depended on by other images are deleted.
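You can see the layers of a local image with docker history, for example:
- #docker history ubuntu (each row is one layer, with the instruction that created it and its size)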
Docker Registry: a registry is a location where docker images are saved.
- Public registry: hub.docker.com
- Private registry: e.g. private repositories on docker hub (a paid feature) or a self-hosted registry.
Create a customized image: run an ubuntu container, install git, create an image from the container, and upload it to the public registry.
- #docker run --name myubuntu -it ubuntu
- #apt-get update
- #apt-get install -y git
- #exit
Convert the container into an image:
- #docker commit myubuntu aziz27uk/starubuntu (the image name must start with your docker hub user ID; the image name is aziz27uk/starubuntu)
- The image is available on the docker host and needs to be uploaded to docker hub.
- Log in to docker hub from the terminal:
- #docker login (enter login name and password)
- #docker push aziz27uk/starubuntu
- Log in to docker hub and check your image in Repositories.
Docker Container Orchestration (Docker Swarm):
- It is the process of running docker containers on multiple docker host machines in a distributed environment.
- A single service runs across many containers, whose host machines can be different.
- Docker swarm is the tool used for performing container orchestration.
- Create 4 instances and install docker on each machine:
- #curl -fsSL https://get.docker.com -o get-docker.sh
- #sh get-docker.sh
- Change the hostname of all machines:
- #hostnamectl set-hostname newname
- #bash
- or
- #vim /etc/hostname (remove the old entry and enter the new name)
- #init 6 (restart)
- Initialize the docker swarm service on the manager machine:
- #docker swarm init --advertise-addr 172.31.42.135
- It generates a join token, which needs to be run on every worker machine:
- #docker swarm join --token SWMTKN-1-27lf3n7xxqy2u3gb61mvfybk51uqjq9hj4m5uwdd4lcgtgafth-0hmrtwzrcyqv4h7cl3euekq68 172.31.44.50:2377
- On the manager run: #docker node ls
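Once the workers have joined, a service can be spread across the swarm; a minimal example (the service name web and the replica count are illustrative):
- #docker service create --name web --replicas 3 -p 8080:80 nginx
- #docker service ls (check the service and its replicas)
- #docker service ps web (see which node each replica landed on)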