Docker Essentials: A Novice’s Journey

Posted on October 7, 2023 by Hoai Thu

Last updated on October 8th, 2023

In this post, I will share my personal notes on Docker from learning Microservices and Cloud Native architecture. It's important to understand that this post is not a comprehensive guide but rather a collection of Docker essentials. Learning isn't just about reading; it's about doing.

Let’s kick off with the basics.

Table of Contents

  • Virtualization and Containerization
    • Virtualization
    • Containerization
  • Docker Essentials
    • Installing Docker Engine
    • Understanding container images
    • Understanding Docker Containers
    • Managing multiple containers using Docker Compose
  • References

Virtualization and Containerization

Virtualization and containerization play an important role in the foundation of the cloud. The core mechanism behind these technologies – whether in public, private, or hybrid clouds – is to create and provide virtual environments on top of the underlying hardware, regardless of its physical location. Before jumping into Docker essentials, it's worth understanding the foundational concepts of virtualization, as they underpin modern cloud computing.

Source: Cloud Native Spring in Action

Virtualization

Virtualization is a technique that creates software-based computer resources. What makes virtualization feasible is the hypervisor.

  • A hypervisor is a piece of software that sits on top of the physical server or host, enabling VMs to emulate a physical computer. It pulls resources from the physical server, like memory, CPU, and storage, and allocates them to the virtual environments.
  • There are two types of hypervisors:
    • Type 1 – bare-metal hypervisors: These hypervisors are installed directly on top of the physical server, leading to higher efficiency and lower overhead compared to Type 2 hypervisors. Examples include VMware ESXi, Microsoft Hyper-V, and Citrix XenServer.
    • Type 2 – hosted hypervisors: These hypervisors run on top of a host operating system. Examples include VMware Workstation and Oracle VirtualBox.

After installing the hypervisor, you can create VMs. VMs are software-based computers that run on a physical computer. They have their own operating system and applications and operate independently of one another. So we can run different operating systems on different virtual machines. The hypervisor will manage the resources allocated to these virtual environments from the physical server.

Containerization

Containerization is a lightweight version of virtualization. It is a technique that enables multiple instances to run on a single physical host without requiring a hypervisor. Instead, it virtualizes the operating system itself with a piece of software called the container engine.

Application containers, like Docker, are designed to package and execute a single process or service within each container. These containers bundle the required libraries, dependencies, and configuration files, ensuring that the application runs consistently across various environments. This self-contained approach simplifies the deployment and management of applications, making them portable and stable regardless of the underlying system.
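
As a quick illustration (assuming Docker is already installed; installation is covered below), running the same image anywhere yields the same runtime environment:

docker run --rm eclipse-temurin:17-jre-alpine java -version

The reported JRE version and the Alpine userland come from the image itself, so the result is the same whether the host is a Linux server, a macOS laptop, or a Windows machine.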

Docker Essentials

This part is based on my notes from the book Cloud Native Spring in Action.

Docker Engine follows a client-server architecture that includes multiple components that work together.

The Docker server contains the Docker daemon, a background process responsible for creating and managing Docker objects like images, containers, volumes, and networks. The machine where the Docker server runs is called the Docker host.

The Docker daemon exposes a REST API so that the Docker client talks to the daemon through that API. The client is command-line based and can be used to interact with the Docker daemon either through scripting (for example, Docker Compose) or through the Docker CLI directly.

Besides the client and server components, a container registry stores and manages Docker images; from it we can pull images such as Ubuntu, PostgreSQL, and so on.
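
You can see this client-server split directly from the CLI (a minimal sketch; output omitted):

docker version         # prints separate Client and Server (Engine) sections
docker pull postgres   # the client asks the daemon to download the postgres image from Docker Hub
docker images          # lists the images the daemon now stores on the Docker host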

Source: Cloud Native Spring in Action

Installing Docker Engine

When you install Docker on Linux, the full Docker Engine software is installed directly on your Linux host machine. Docker Engine comprises both the Docker client, which is the CLI for interacting with Docker, and the Docker server, which is responsible for building, running, and managing containers.

However, when you install Docker Desktop for Mac or Windows, the installation process is different because these operating systems do not natively support the Docker server component. In this case, only the Docker client is installed on your macOS or Windows host machine. To run Docker containers, Docker Desktop sets up a lightweight virtual machine in the background that runs a Linux operating system, and the Docker server component is installed on this Linux-based virtual machine. Behind the scenes, when you execute Docker commands on macOS or Windows, you are actually communicating with the Docker server on the Linux VM, not your host operating system.

If you run Docker Desktop on Windows, you can see the vmmem process in Task Manager. It means that the VM for Docker Desktop is active, and the memory usage displayed corresponds to the memory allocated to the VM running the Docker server.
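
If Docker Desktop is using the WSL 2 backend (an assumption; the exact setup varies), you can also list the VM distributions it created:

wsl -l -v
# Expect a docker-desktop distribution in the Running state;
# older Docker Desktop versions also show docker-desktop-data.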

Understanding container images

A container image is generated by executing an ordered sequence of instructions (defined in a Dockerfile), each of which creates a layer. Each image is made up of several layers, and the final artifact, an image, can be run as a container. Images are usually created from base images (such as Ubuntu, PostgreSQL, or OpenJDK).

The layered architecture enables caching and reusing the unchanged layers when building an image. Docker caches each layer separately, so if you build an image and subsequently make a change to an instruction in the Dockerfile followed by another build, Docker only needs to rebuild the modified layer and the layers built after it. This caching mechanism significantly speeds up the image build process.

Below is a Dockerfile example:


FROM eclipse-temurin:17-jre-alpine
VOLUME /tmp
COPY target/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

It follows a sequence of instructions:

  • First, the FROM instruction specifies eclipse-temurin:17-jre-alpine as the base image. By default, Docker pulls images from Docker Hub. In this case, it downloads an Eclipse Temurin 17 JRE image built on Alpine Linux, a lightweight distribution, to keep the image size small.
  • Second, VOLUME configures storage in Docker to persist data outside the container in /tmp. It creates a mount point at /tmp within the container.
  • Third, COPY copies the .jar file (from the target directory of the local build) into the image's root directory (/), naming the copied file app.jar.
  • Finally, ENTRYPOINT ["java","-jar","/app.jar"] defines the command used as the entry point when the container runs.
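
To take fuller advantage of the layer caching described above, one common option is Spring Boot's layered-jar support. Below is a minimal sketch, assuming the application is a Spring Boot app with layertools enabled (an assumption beyond the original Dockerfile; the launcher class name also varies across Spring Boot versions):

FROM eclipse-temurin:17-jre-alpine AS builder
WORKDIR /application
COPY target/*.jar app.jar
# Split the fat jar into layers: dependencies change rarely, application code changes often
RUN java -Djarmode=layertools -jar app.jar extract

FROM eclipse-temurin:17-jre-alpine
WORKDIR /application
# Copy rarely-changing layers first so they stay cached between builds
COPY --from=builder /application/dependencies/ ./
COPY --from=builder /application/spring-boot-loader/ ./
COPY --from=builder /application/snapshot-dependencies/ ./
COPY --from=builder /application/application/ ./
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]

With this layout, changing only your application code invalidates just the last COPY layer, so a rebuild reuses the cached dependency layers.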

Then, we can build an image using the command below:

docker build -t image-name:1.0.0 .
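
After the build, a couple of commands help verify the result (image-name:1.0.0 matches the tag used above):

docker images                     # confirm the image and its tag exist locally
docker history image-name:1.0.0   # inspect the layers created by each Dockerfile instruction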

Understanding Docker Containers

A container is a runnable instance of a container image. You can manage the container lifecycle from the Docker CLI or Docker Compose: you can start, stop, update, and delete containers.

Below is the command to run a container:

docker run image-name:1.0.0
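
A few more lifecycle commands, sketched with a hypothetical container name my-app and port 8080 (both are assumptions, not taken from the original example):

docker run -d --name my-app -p 8080:8080 image-name:1.0.0   # run detached and publish port 8080
docker ps                  # list running containers
docker logs -f my-app      # follow the container's logs
docker stop my-app         # stop the container
docker rm my-app           # delete the stopped container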

Managing multiple containers using Docker Compose

When you run a new container, the Docker CLI passes the request to the Docker daemon, which checks whether the specified image is already present on the local host. If not, it pulls the image from a registry and then uses it to run the container.
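
You can watch this flow by running an image that is not yet present locally; with the public hello-world image, the output looks roughly like this (trimmed):

$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
Status: Downloaded newer image for hello-world:latest

Hello from Docker!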

Docker has made it easy to build and run a container with a few commands. So far so good! However, when we have multiple containers to run, it becomes challenging to start and manage them individually. Docker Compose comes into play to solve this problem: it allows us to define and operate multiple containers with a single YAML file.

Let's take a look at the docker-compose.yaml file below:


version: '3.4' # docker compose version

services:
  auth_database:
    image: postgres
    container_name: auth_database
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=123456
      - POSTGRES_DB=auth_server
    restart: always
  authorization:
    build: ./
    container_name: authorization_service
    ports:
      - "9000:9000"
    environment:
      - AUTH_POSTGRES_USER=postgres
      - AUTH_POSTGRES_PASSWORD=123456
#      - POSTGRES_HOST=auth_database
      - POSTGRES_HOST=host.docker.internal
      - POSTGRES_DATABASE=auth_server
    depends_on:
      - auth_database

In this file, I defined two services: auth_database and authorization. For each property, you can check the Docker docs to get a better understanding of how it is used. However, there are a few notes here.

  • build: ./: Specifies that the Dockerfile for this service is located in the current directory (./), meaning the authorization service is built from that Dockerfile.
  • depends_on: auth_database: Specifies that the auth_database service should be started before the authorization service.
  • host.docker.internal: A special DNS name that resolves to the host machine's internal IP address from within a Docker container. It allows the authorization service to reach the PostgreSQL port published on the host machine (5432).
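
For context, the authorization service could consume these environment variables in its Spring configuration. Below is a hypothetical application.yml sketch (the property names and port are assumptions, not taken from the original post):

spring:
  datasource:
    url: jdbc:postgresql://${POSTGRES_HOST}:5432/${POSTGRES_DATABASE}
    username: ${AUTH_POSTGRES_USER}
    password: ${AUTH_POSTGRES_PASSWORD}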

Let's run the two containers:

docker-compose up -d

Docker has a built-in DNS server that containers in the same network can rely on to find each other by container name rather than by hostname or IP address. If you don't specify a network, Docker Compose places both containers on the same default network, spring-authorization-server-implementation_default.


[+] Running 3/3
 ✔ Network spring-authorization-server-implementation_default  Created  0.1s
 ✔ Container auth_database                                     Started  0.1s
 ✔ Container authorization_service                             Started  0.1s
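
A few commands to verify the setup and tear it down (the network name matches the Compose output above):

docker-compose ps                                                            # list the two running services
docker network inspect spring-authorization-server-implementation_default   # both containers appear as attached endpoints
docker-compose down                                                          # stop and remove the containers and the default network

Because both containers share this default network, the commented-out POSTGRES_HOST=auth_database setting in the compose file would also work: Docker's DNS resolves the container name auth_database to the database container's address.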

References

  • Book: Cloud Native Spring in Action – Thomas Vitale
  • Docker Essentials Docs
  • Virtual Machine (VM) vs Docker

I think that is all for Docker basics. The real fun begins when we get our hands dirty and start playing with Docker in the real world!
