Extending Go gRPC microservices, part 2: Docker images and containers

Juuso Hakala | Last updated 24 Oct 2024

Introduction

This is the second part of a blog series where I will be extending the example Go gRPC microservices project developed as part of my thesis. The thesis examines the benefits of using gRPC in synchronous microservice communication, such as improved performance. It can be accessed here. Note that it is written in Finnish. The project can be found on GitHub here. Full source code and documentation are available there.

In this part we will write Dockerfiles for the services and build Docker images using them. We can create containers from the images and run them. We will write a Docker Compose YAML file to run all the services in containers by executing a single Docker Compose command. Finally, we will tag the images and push them to an image registry. We will use Docker Hub as the image registry.

Writing Dockerfiles

A Dockerfile is a text file containing the instructions Docker follows to build an image.

Let’s create Dockerfiles for the services:

touch services/payment/Dockerfile
touch services/inventory/Dockerfile
touch services/order/Dockerfile

This will be the content for the payment service’s Dockerfile:

FROM golang:1.23-alpine AS builder
WORKDIR /usr/src/app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o payment ./cmd/main.go

FROM scratch
COPY --from=builder /usr/src/app/payment ./payment
CMD ["./payment"]

We use a multi-stage Dockerfile here. First we declare the base image and tag for the builder stage; we will use golang:1.23-alpine. We then change the working directory to /usr/src/app and copy the build context into it. Next we build the payment service binary. CGO_ENABLED=0 makes Go produce a statically linked binary that doesn’t depend on any system-specific C libraries. GOOS=linux makes sure the binary is always built for Linux. The -a and -installsuffix cgo flags force a full rebuild so that the build won’t reuse packages compiled with CGO enabled.

We use scratch as the final base image. This is basically an empty image with no system libraries or dependencies. We copy the built binary from the builder stage into the image and set it as the container’s command.
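One thing worth noting: because we copy the whole context before building, any source change invalidates the layer cache and re-downloads all modules. A common refinement is to copy go.mod and go.sum first and download modules in their own layer. This is a sketch, assuming the service has go.mod and go.sum at its root:

```dockerfile
FROM golang:1.23-alpine AS builder
WORKDIR /usr/src/app

# Download modules in a separate layer so it is cached
# until go.mod or go.sum actually changes.
COPY go.mod go.sum ./
RUN go mod download

COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o payment ./cmd/main.go

FROM scratch
COPY --from=builder /usr/src/app/payment ./payment
CMD ["./payment"]
```

With this layout, repeated builds after code-only changes skip the module download step entirely.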

After this we do the same for the inventory and order services, but change the binary name for each.
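For example, the inventory service’s Dockerfile would be identical apart from the binary name:

```dockerfile
FROM golang:1.23-alpine AS builder
WORKDIR /usr/src/app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o inventory ./cmd/main.go

FROM scratch
COPY --from=builder /usr/src/app/inventory ./inventory
CMD ["./inventory"]
```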

Building Docker images

A Docker image is a read-only template that contains everything needed to create and run containers.

We can build the payment service Docker image like this:

cd services/payment
docker build -t payment .

Docker looks for a Dockerfile in the current working directory and starts the build process. If the build succeeds, the image exists locally. The name of the image is payment with the automatic tag latest, because we didn’t specify a tag explicitly (we could have with e.g. -t payment:1.0.0).

We can list all our local Docker images with the following command:

docker image ls

After this we do the same for the other services, but use different image names.

Creating and starting containers

Now that we have Docker images of the services, we can create containers from them. We can create and run containers with the docker run command.

For example:

docker run -p 9000:9000 --rm -it payment

This creates a container from the payment image and runs it. The -p 9000:9000 option forwards host port 9000 to container port 9000 so we can send gRPC requests to the payment service running inside the container. The --rm option automatically removes the container after it stops. The -it options keep STDIN open and attach a pseudo-terminal so the container runs interactively. If we don’t specify a name, Docker assigns a random name to the container. Each container also gets a unique ID.

We can specify the name of the container with the --name flag:

docker run -p 9000:9000 --name payment_container -it payment

We can list all containers, including stopped ones, with the -a flag:

docker ps -a

If we want to stop the container, we can use the docker stop command:

docker stop payment_container

To remove the container, we can use the docker rm command:

docker rm payment_container

There are many more Docker commands and options to work with containers. This is just a simple demonstration.

Running all the containers at once with Docker Compose

Running the services in containers one by one and passing all the command line flags every time we want to run them quickly becomes tedious. This is where Docker Compose comes in handy. We can write a Docker Compose YAML file where we configure how the services will be run in containers. Using this file, we can start containers of all the services with their configurations by running only one command.

services:
  payment:
    build:
      context: ./services/payment/
      dockerfile: Dockerfile
    container_name: payment_container
    env_file:
      - .env.payment
    ports:
      - '${PAYMENT_GRPC_PORT:-9000}:${PAYMENT_GRPC_PORT:-9000}'

  inventory:
    build:
      context: ./services/inventory/
      dockerfile: Dockerfile
    container_name: inventory_container
    env_file:
      - .env.inventory
    environment:
      - INVENTORY_DB_HOST=postgres
    ports:
      - '${INVENTORY_GRPC_PORT:-9001}:${INVENTORY_GRPC_PORT:-9001}'
    depends_on:
      - postgres

  order:
    build:
      context: ./services/order/
      dockerfile: Dockerfile
    container_name: order_container
    env_file:
      - .env.order
    environment:
      - ORDER_INVENTORY_SERVICE_HOST=inventory
      - ORDER_PAYMENT_SERVICE_HOST=payment
    ports:
      - '${ORDER_GRPC_PORT:-9002}:${ORDER_GRPC_PORT:-9002}'

  postgres:
    image: postgres:17.0-alpine
    container_name: postgres_container
    env_file:
      - .env.db
    ports:
      - '${POSTGRES_HOST_PORT:-5432}:5432'
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./scripts/initdb.sql:/docker-entrypoint-initdb.d/initdb.sql

volumes:
  postgres_data:

Here we configure how each service’s container is created and run, and load environment variables into the containers. We also copied the PostgreSQL container config from the previous part so the database runs in the same setup. We went through some of these Docker Compose options in the previous part, so we won’t repeat them here; see that part and the official Docker Compose documentation for details.
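As a sketch of what the env files might contain (the exact variable names depend on each service’s configuration; only the POSTGRES_* names are the standard variables of the official postgres image):

```shell
# .env.db — standard variables read by the official postgres image
POSTGRES_USER=postgres
POSTGRES_PASSWORD=secret
POSTGRES_DB=inventory

# .env.payment — hypothetical example; PAYMENT_GRPC_PORT matches the
# default used in the Compose file's port mapping
PAYMENT_GRPC_PORT=9000
```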

We can start everything by running

docker compose up

To stop and remove the containers we can run

docker compose down

This is the simplest way to get started. We can create .env files and change the service configurations there by setting environment variables for each service independently. By using ${ENV_VAR:-default_value} syntax, we can specify default values if an environment variable is not set.
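The ${ENV_VAR:-default_value} form is standard shell parameter expansion, which Compose interprets the same way, so we can try it directly in a shell:

```shell
# When the variable is unset, the default after :- is used.
unset PAYMENT_GRPC_PORT
echo "${PAYMENT_GRPC_PORT:-9000}"   # prints 9000

# When the variable is set, its value wins over the default.
PAYMENT_GRPC_PORT=9100
echo "${PAYMENT_GRPC_PORT:-9000}"   # prints 9100
```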

Our Docker Compose file builds the service images from the Dockerfiles locally. Next we will push the images to an image registry so we can later pull them into our Kubernetes cluster.

Pushing Docker images to an image registry

When we build Docker images on our machine, they will be stored only locally. We can push them to a remote image registry so other machines can easily pull them and run containers from them. We will use Docker Hub as our image registry.

First we need to log in to Docker Hub in order to push images there. We can use an access token instead of a password, for example with:

docker login -u <username>

It will securely prompt us for the access token without echoing the input.

Next we will tag the Docker images that we built earlier. This is needed so we can push the images to the right repositories, and it also allows us to version the images. We can reference the image we want to tag by its ID (or by its name, e.g. payment).

docker tag <image-id> hakj/go-grpc-microservices-inventory:1.0.0
docker tag <image-id> hakj/go-grpc-microservices-payment:1.0.0
docker tag <image-id> hakj/go-grpc-microservices-order:1.0.0

Here hakj is my Docker Hub username and e.g. go-grpc-microservices-inventory is the name of the registry repository where the image will be pushed. 1.0.0 is the tag.

We can push the images with docker push command:

docker push hakj/go-grpc-microservices-inventory:1.0.0
docker push hakj/go-grpc-microservices-payment:1.0.0
docker push hakj/go-grpc-microservices-order:1.0.0

We can verify that the images were pushed by visiting our Docker Hub repositories page.

Summary

In this part we wrote Dockerfiles for the services, built Docker images and ran containers with both plain Docker and Docker Compose. We also tagged the images and pushed them to a remote image registry. By running the services in containers, we can better isolate them from each other. It is a common way to run modern services in production environments.

Docker Compose can be used easily for simple container orchestration. However, Kubernetes is a better container orchestration solution for microservices and it is widely used for this purpose. It is more advanced and not as easy to use, but it is more suitable for production environments. In the next part, we will deploy the services to a local Kubernetes cluster.
