Extending Go gRPC microservices, part 3: Local Kubernetes cluster
Juuso Hakala | Last updated 24 Oct 2024
Introduction
This is the third part of a blog series where I will be extending the example Go gRPC microservices project developed as part of my thesis. The thesis investigates the benefits of using gRPC in synchronous microservice communication, such as improved performance. It can be accessed here. Note that it is written in Finnish. The project can be found on GitHub here. Full source code and documentation are available there.
- Part 1: PostgreSQL database with GORM
- Part 2: Docker images and containers
- Part 3: Local Kubernetes cluster
In this part we will deploy the services to a local Kubernetes cluster using the Docker images that we published in the previous part. We will use minikube to run the local cluster, write YAML declaration files to apply resources to it, and set up the NGINX Ingress controller as a load balancer that routes traffic to the services based on configured ingress routes. After everything is up, we will send requests to the cluster from the outside. Finally, we will use Skaffold to deploy everything to the cluster with a single command.
Getting started
Kubernetes is an open source container orchestration platform widely used for deploying microservices. First we will install minikube and kubectl. minikube lets us run Kubernetes locally. It is great for trying out Kubernetes and running Kubernetes applications in development. kubectl is the Kubernetes command line tool that lets us operate the cluster, for example to deploy applications, by running commands against the cluster’s API server.
Once we have both installed, we can start a cluster.
minikube start
Minikube configures kubectl to use the cluster and the default namespace. We can try running a kubectl command against the single-node minikube cluster we just started. This local cluster is not going to be used in production, so a single node is fine. In a production cluster, it is recommended to have several worker nodes and deploy pods to those nodes, keeping them separate from the control plane nodes.
kubectl get pods
This lists all the pods, but we don’t have any yet.
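On a fresh cluster, the command simply reports that there is nothing there (the exact wording may vary between kubectl versions):
No resources found in default namespace.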
Helm and NGINX Ingress controller
We will use Helm, the Kubernetes package manager, to install the NGINX Ingress controller in our cluster. The Ingress controller gives us a load balancer so we can route traffic to the service pods based on configured ingress routes. Each service will have its own ingress host that is used for routing traffic. With this, we only need one load balancer instead of a separate load balancer for each service. We will later use this load balancer to send requests to the cluster.
On Linux, I downloaded the Helm tarball like so:
# Download
curl -LO https://get.helm.sh/helm-v3.16.2-linux-amd64.tar.gz
# Extract
tar -xvzf helm-v3.16.2-linux-amd64.tar.gz
I recommend verifying the SHA256 checksum found on the installation page before extracting. You can then move the binary to e.g. the /usr/local/bin directory, or add the binary’s directory path to your PATH environment variable so you can call the program from anywhere.
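To illustrate the checksum step: Helm publishes a .sha256sum file alongside each release tarball, so verification can look roughly like this (double-check the exact artifact names on the installation page):
# Fetch the published checksum and compare it against the tarball
curl -LO https://get.helm.sh/helm-v3.16.2-linux-amd64.tar.gz.sha256sum
sha256sum --check helm-v3.16.2-linux-amd64.tar.gz.sha256sum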
Let’s install the Ingress controller with its Helm chart:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx
This adds the chart repository, updates it to fetch the latest version, and installs the controller to our cluster.
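Before moving on, we can check that the controller pod reached the Running state; the chart applies the standard app.kubernetes.io/name=ingress-nginx label, so a label selector should find it:
# The ingress-nginx controller pod should show STATUS Running
kubectl get pods -l app.kubernetes.io/name=ingress-nginx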
Kubernetes resource files
Let’s create a directory and subdirectories for the Kubernetes resource files. We will organize the files by service.
mkdir k8s && cd k8s
mkdir payment
mkdir inventory
mkdir order
mkdir postgres
Let’s write some YAML resource files that we can use to deploy the services to the cluster. Kubernetes reads these declaration files and creates the appropriate resources with the configurations defined in them. I will use the payment service for demonstration.
k8s/payment/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: payment
  name: payment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: payment
  template:
    metadata:
      labels:
        app: payment
      name: payment
    spec:
      containers:
        - name: payment
          image: hakj/go-grpc-microservices-payment:1.0.0
          env:
            - name: PAYMENT_GRPC_PORT
              value: "9000"
This deploys the payment service and creates one pod for it. We use the payment Docker image that we pushed to Docker Hub in the previous part to create the container. We also set the payment service’s gRPC port environment variable explicitly for clarity.
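Before touching the cluster, a manifest like this can be sanity-checked with a client-side dry run, which parses and validates the file without creating anything:
# Validate the manifest locally; no resource is created
kubectl apply -f k8s/payment/deployment.yaml --dry-run=client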
k8s/payment/service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: payment
  labels:
    app: payment
spec:
  selector:
    app: payment
  ports:
    - name: grpc
      port: 9000
      protocol: TCP
      targetPort: 9000
This creates a Kubernetes Service for the payment service, giving its pods a stable name and virtual IP inside the cluster. It forwards traffic on port 9000 to the pods matched by the app: payment selector. Traffic from outside the cluster will reach these pods through the ingress we define next.
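Once the pods are running (we deploy them below), we can check which pod IPs the Service actually routes to:
# Lists the IP:port pairs currently backing the payment Service
kubectl get endpoints payment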
k8s/payment/ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
  name: payment
spec:
  ingressClassName: nginx
  rules:
    - host: ingress.payment.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: payment
                port:
                  number: 9000
This creates an ingress resource for the payment service that defines its routing rules. The Ingress controller routes traffic to the payment Service based on the configured host and path. With the annotation, we tell NGINX to use gRPC as the backend protocol when proxying requests to the service.
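After the resources are deployed, the effective routing rules can be inspected with:
# Shows the host rule, backend service, and the controller-assigned address
kubectl describe ingress payment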
Deploying the resources
We can deploy resources to the cluster with the kubectl apply command:
kubectl apply -f deployment.yaml
Option -f takes a file containing the Kubernetes resources we want to deploy. In our case, each service’s resources are split across separate deployment, service, and ingress files.
We can also deploy all resource files in a directory by specifying the directory path. For example:
kubectl apply -f k8s/payment
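kubectl can also descend into nested directories, which is convenient once every service has its own subdirectory under k8s/:
# Apply every manifest under k8s/, including subdirectories
kubectl apply -f k8s/ --recursive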
After this we can verify that everything worked. If we now check the pods with
kubectl get pods
we can see that there is one pod: the payment service pod, which contains the payment service container. In Kubernetes, pods are the smallest deployable units, holding the actual containers; a pod can have one or more containers. If we create more instances of the payment service by changing the replicas field in the deployment YAML file, Kubernetes creates more pods. The number we configure is the desired state, which Kubernetes tries to maintain at all times; in our case one pod.
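For example, asking for three payment pods imperatively would make Kubernetes spin up two more to match the new desired state:
# Imperative scaling; changing replicas in the YAML and re-applying is the declarative way
kubectl scale deployment payment --replicas=3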
We can also use different kubectl command flags to get more information. For example, this shows additional details about the pods, such as the node each one runs on.
kubectl get pods -o wide
We can list all the service resources with
kubectl get services
And deployments with
kubectl get deployments
Sending gRPC requests to the cluster
We are running the local Kubernetes cluster with minikube, so we need to expose the NGINX Ingress controller’s load balancer IP address. Minikube doesn’t give us a LoadBalancer Kubernetes service natively. We can create a tunnel in a separate terminal and keep it running with
minikube tunnel
This lets us access the Ingress load balancer from the local machine through the IP 127.0.0.1.
We need to map the payment service’s ingress host to this IP so we can call the payment service by its host name. Let’s add the following to the /etc/hosts file:
127.0.0.1 ingress.payment.local
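To confirm the tunnel works, the Ingress controller’s LoadBalancer Service should now have an external IP assigned (with the Docker driver this is 127.0.0.1; the exact Service name depends on the Helm release name):
# EXTERNAL-IP should be populated while the tunnel is running
kubectl get services | grep ingress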
Now we should be able to send gRPC requests to the payment service running in the cluster. We have a test data JSON file for the payment service that we can use as the request body. I will use grpcurl to test a request; -d @ tells grpcurl to read the request body from standard input.
cd services/payment/tests/testdata/CreatePayment
grpcurl -plaintext -d @ ingress.payment.local:80 paymentpb.PaymentService/CreatePayment < request_success.json
If everything works, we get a JSON response without errors, like so:
{
  "paymentId": "f43f0267-8d1f-4e6e-bec7-26d7eea72ddd"
}
Setting everything up with Skaffold
We successfully deployed the payment service and were able to access it. We can do the same for the other services and add all the needed ingress host names to /etc/hosts. However, that is a lot of manual work and slows down local cluster setup. With Skaffold we can set up the whole cluster with one command, and it deploys everything for us.
I installed Skaffold on Linux following its documentation:
curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && \
sudo install skaffold /usr/local/bin/
Now we can use skaffold from any directory.
We need to add configurations to a skaffold.yaml file.
apiVersion: skaffold/v2beta29
kind: Config
metadata:
  name: go-grpc-microservices
deploy:
  kubectl:
    defaultNamespace: default
    manifests:
      - k8s/postgres/**
      - k8s/payment/**
      - k8s/inventory/**
      - k8s/order/**
This will tell Skaffold to apply all the Kubernetes resource files in the specified directories to our Kubernetes cluster. We can deploy everything with:
skaffold run
Make sure to run it in the project root, as that is where the skaffold.yaml file is.
This deletes all the resources that skaffold run created:
skaffold delete
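For iterative development, Skaffold can also watch the manifests and keep the cluster in sync instead of doing one-shot runs; exiting cleans up the deployed resources:
# Deploys, then redeploys on changes; Ctrl-C tears everything down
skaffold dev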
We can now test the whole order operation by sending a gRPC request to the order service to see if everything works. The order service should be able to communicate with the inventory and payment services, and the PostgreSQL database should also be reachable. I will use grpcurl again, but with the order service’s ingress host.
cd services/order/tests/testdata/CreateOrder
grpcurl -plaintext -d @ ingress.order.local:80 orderpb.OrderService/CreateOrder < request_success.json
This should output a JSON object without errors if everything works:
{
  "orderId": "0192b961-c299-79a1-acc1-367f3ef8a63d"
}
Note that Skaffold doesn’t install the Ingress controller, so we need to install it manually (as we did earlier with Helm) before running Skaffold. Also make sure to run minikube tunnel if using minikube, and add the ingress hosts specified in the Kubernetes ingress resource files to the /etc/hosts file. Full documentation, Kubernetes resource files, and steps to run can be found in the project’s GitHub repository here.
Below is a high-level architecture diagram of what we deployed and how it works.
Summary
In this part we deployed the services to a local Kubernetes cluster and used Skaffold to make the deployment simpler. In the next part we will add monitoring and observability so we can monitor the system and see exactly what is going on when making requests in a distributed system.
Continue reading
- Previous part -> Part 2: Docker images and containers