Extending Go gRPC microservices, part 1: PostgreSQL database with GORM
Juuso Hakala | Last updated 24 Oct 2024
Introduction
This is the first part of a blog series where I will be extending the example Go gRPC microservices project developed as part of my thesis. The thesis researches the benefits of using gRPC in synchronous microservice communication, such as improved performance. It can be accessed here. Note that it is written in Finnish. The project can be found on GitHub here. The full source code and documentation are available there.
- Part 1: PostgreSQL database with GORM
- Part 2: Docker images and containers
- Part 3: Local Kubernetes cluster
In this part we will add a PostgreSQL database for the inventory service and replace the currently used in-memory database with it. We will use GORM, an ORM library for Go, to access and query the database in the inventory service. We will add configuration for changing the database connection settings such as host, user and password. Finally, we will use Docker Compose to run a local Postgres instance in a container. We will map the container's port to the host machine so we can access the database running inside the container. Let’s get started!
Postgres database adapter
First we need the GORM library and its PostgreSQL driver as dependencies of the inventory service. We can add them with
go get -u gorm.io/gorm
go get -u gorm.io/driver/postgres
Let’s start by making the product database model
type Product struct {
    ProductCode     string `gorm:"primaryKey"`
    Name            string
    Description     string
    UnitPriceCents  int32
    QuantityInStock int32
    CreatedAtMillis int64
    UpdatedAtMillis int64
}
This defines the product model with GORM, which will become the products table in the Postgres database. It makes the ProductCode field the primary key of the table. GORM transforms field names to snake case, so ProductCode becomes the product_code column in the database table. We use UUID v7 as the primary key in this project. UUID v7 is time-ordered, which makes it a lot better for indexing than the older UUID v4. I recommend reading this blog post. It covers the topic really well.
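As a side note, here is a minimal sketch of how a UUID v7 product code could be generated in Go. The github.com/google/uuid package is my own assumption for this example; the project may generate the IDs differently.

package main

import (
    "fmt"

    "github.com/google/uuid"
)

func main() {
    // uuid.NewV7 returns a time-ordered UUID, so newly inserted products
    // get roughly sequential primary keys, which keeps the index efficient.
    productCode, err := uuid.NewV7()
    if err != nil {
        panic(err)
    }
    fmt.Println(productCode.String())
}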
The service uses hexagonal architecture where we have ports that adapters implement. Let’s make a struct for the database adapter.
type PostgresAdapter struct {
    db *gorm.DB
}
This adapter should later implement the DBPort interface
type DBPort interface {
    GetProductsByCode(ctx context.Context, productCodes []string) ([]*domain.Product, error)
    UpdateProductStockQuantities(ctx context.Context, products []*domain.ProductQuantity) error
    SaveProducts(ctx context.Context, products []*domain.Product) error
}
Let’s add functionality to create a new instance of this adapter
func NewPostgresAdapter(cfg *config.Config) (*PostgresAdapter, error) {
    db, err := gorm.Open(postgres.New(postgres.Config{
        DSN:                  buildDSN(&cfg.DB),
        PreferSimpleProtocol: true,
    }), &gorm.Config{})
    if err != nil {
        return nil, fmt.Errorf("database connection error: %v", err)
    }

    // Auto migrate db schema only in development or testing environments.
    // In production it is recommended to manage db schema versioning explicitly
    // and to use dedicated migration tools.
    if cfg.IsDevelopmentMode() || cfg.IsTestingMode() {
        err = db.AutoMigrate(&Product{})
        if err != nil {
            return nil, fmt.Errorf("database migration error: %v", err)
        }
    }

    return &PostgresAdapter{db: db}, nil
}
func buildDSN(cfg *config.DBConfig) string {
    return fmt.Sprintf(
        "host=%s user=%s password=%s dbname=%s port=%d sslmode=%s TimeZone=UTC",
        cfg.Host,
        cfg.User,
        cfg.Password,
        cfg.DBName,
        cfg.Port,
        cfg.SSLMode,
    )
}
With this we can open a connection to the database and automatically create the tables in development and testing modes. We use the default connection pool settings, but they can be configured. We also build the DSN (Data Source Name) from the configured values, so we can easily change the database connection settings in different environments.
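The config package already exists in the project, so the snippet below is only a rough sketch of what the database part of it could look like. The field names follow the buildDSN usage above, but the environment variable names and the loading logic are my own assumptions, not the project's actual implementation.

// Sketch of the database configuration consumed by buildDSN above.
type DBConfig struct {
    Host     string
    Port     int
    User     string
    Password string
    DBName   string
    SSLMode  string
}

// Hypothetical loader using the standard library packages os and strconv;
// the environment variable names here are illustrative only.
func loadDBConfigFromEnv() DBConfig {
    port, err := strconv.Atoi(os.Getenv("DB_PORT"))
    if err != nil {
        port = 5432 // default Postgres port
    }
    return DBConfig{
        Host:     os.Getenv("DB_HOST"),
        Port:     port,
        User:     os.Getenv("DB_USER"),
        Password: os.Getenv("DB_PASSWORD"),
        DBName:   os.Getenv("DB_NAME"),
        SSLMode:  os.Getenv("DB_SSLMODE"),
    }
}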
Using GORM to build database queries
Next we will implement the methods of the DBPort interface.
func (a *PostgresAdapter) GetProductsByCode(ctx context.Context, productCodes []string) ([]*domain.Product, error) {
    var productModels []Product
    res := a.db.WithContext(ctx).Where("product_code IN ?", productCodes).Find(&productModels)
    if res.Error != nil {
        return nil, res.Error
    }

    var products []*domain.Product
    for _, pm := range productModels {
        products = append(products, &domain.Product{
            ProductCode:     pm.ProductCode,
            Name:            pm.Name,
            Description:     pm.Description,
            UnitPriceCents:  pm.UnitPriceCents,
            QuantityInStock: pm.QuantityInStock,
            CreatedAtMillis: pm.CreatedAtMillis,
            UpdatedAtMillis: pm.UpdatedAtMillis,
        })
    }
    return products, nil
}
This selects all the products from the database whose product_code value is among the provided product codes. GORM translates the Go method calls to SQL queries under the hood. The query would look something like this in SQL
SELECT * FROM products WHERE product_code IN (?, ...)
Here … stands for the remaining placeholders when the passed string slice contains multiple product codes.
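As a quick usage sketch, the business logic can now fetch products through the port without knowing anything about GORM or SQL. The variable names and product codes below are made up for illustration.

// Hypothetical call site in the business logic.
codes := []string{"example-code-1", "example-code-2"}
products, err := adapter.GetProductsByCode(ctx, codes)
if err != nil {
    return err
}
for _, p := range products {
    fmt.Printf("%s: %d in stock\n", p.ProductCode, p.QuantityInStock)
}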
This will bulk update product stock quantities
func (a *PostgresAdapter) UpdateProductStockQuantities(ctx context.Context, products []*domain.ProductQuantity) error {
    return a.db.Transaction(func(tx *gorm.DB) error {
        for _, product := range products {
            now := time.Now().UnixMilli()
            // Update with a map instead of a struct so that a zero quantity is
            // also written; GORM skips zero-value fields when updating with a struct.
            if err := tx.WithContext(ctx).Model(&Product{}).Where("product_code = ?", product.ProductCode).Updates(map[string]interface{}{
                "quantity_in_stock": product.Quantity,
                "updated_at_millis": now,
            }).Error; err != nil {
                return err
            }
        }
        return nil
    })
}
We use a transaction so if one update fails, we won’t end up with inconsistent data. If the operation returns an error, GORM will roll back the transaction automatically.
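For reference, the Transaction helper is roughly equivalent to managing the transaction by hand with Begin, Rollback and Commit. A simplified sketch of a single update done that way could look like this (the real helper also recovers from panics):

// Simplified manual-transaction sketch for a single product update.
tx := a.db.WithContext(ctx).Begin()
if tx.Error != nil {
    return tx.Error
}
if err := tx.Model(&Product{}).
    Where("product_code = ?", product.ProductCode).
    Update("quantity_in_stock", product.Quantity).Error; err != nil {
    tx.Rollback() // undo the partial work
    return err
}
return tx.Commit().Error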
This will bulk save products
func (a *PostgresAdapter) SaveProducts(ctx context.Context, products []*domain.Product) error {
    return a.db.Transaction(func(tx *gorm.DB) error {
        for _, product := range products {
            productModel := Product{
                ProductCode:     product.ProductCode,
                Name:            product.Name,
                Description:     product.Description,
                UnitPriceCents:  product.UnitPriceCents,
                QuantityInStock: product.QuantityInStock,
                CreatedAtMillis: product.CreatedAtMillis,
                UpdatedAtMillis: product.UpdatedAtMillis,
            }
            if err := tx.WithContext(ctx).Save(&productModel).Error; err != nil {
                return err
            }
        }
        return nil
    })
}
Here we use a transaction again. The gorm.DB.Save method inserts the record if the primary key is not set, and otherwise updates the existing record (creating it if it doesn't exist yet). This is called an upsert.
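If you prefer making the upsert explicit instead of relying on Save, GORM also supports ON CONFLICT clauses via the gorm.io/gorm/clause package. This is just an alternative sketch, not what the service does above:

// Alternative to Save: explicit upsert that overwrites all columns
// when a row with the same primary key already exists.
if err := tx.WithContext(ctx).
    Clauses(clause.OnConflict{UpdateAll: true}).
    Create(&productModel).Error; err != nil {
    return err
}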
In the main.go file we can replace the in-memory database adapter with the new Postgres adapter inside a function when starting the service
dbAdapter, err := db.NewPostgresAdapter(cfg)
if err != nil {
    log.Fatalf("failed to connect to database: %v", err)
}
No need to touch the business logic; we just change the implementation. This makes it really easy to write tests as well: we can mock the database port interface and create custom implementations specific to certain tests.
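For example, a hand-written test double for DBPort could be as small as the sketch below. The name stubDB and the in-memory map are illustrative; a mocking library would work just as well.

// A minimal in-memory stub of DBPort for tests.
type stubDB struct {
    products map[string]*domain.Product
}

func (s *stubDB) GetProductsByCode(ctx context.Context, productCodes []string) ([]*domain.Product, error) {
    var result []*domain.Product
    for _, code := range productCodes {
        if p, ok := s.products[code]; ok {
            result = append(result, p)
        }
    }
    return result, nil
}

func (s *stubDB) UpdateProductStockQuantities(ctx context.Context, products []*domain.ProductQuantity) error {
    for _, pq := range products {
        if p, ok := s.products[pq.ProductCode]; ok {
            p.QuantityInStock = pq.Quantity
        }
    }
    return nil
}

func (s *stubDB) SaveProducts(ctx context.Context, products []*domain.Product) error {
    for _, p := range products {
        s.products[p.ProductCode] = p
    }
    return nil
}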
Docker Compose file and setup
Now that we have coded the needed parts, we can set up the Postgres database. We will use Docker to create Postgres containers and run them in isolation so we don’t have to install Postgres on the host machine. We can make this easy by writing a Docker Compose file that sets everything up for us and lets us run everything with a single Docker Compose command.
First we will create the Docker Compose file in the project root
touch docker-compose-db.yaml
Here is the content
services:
  postgres:
    image: postgres:17.0-alpine
    container_name: postgres_container
    env_file:
      - .env.db
    ports:
      - '${POSTGRES_HOST_PORT}:5432'
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./scripts/initdb.sql:/docker-entrypoint-initdb.d/initdb.sql

volumes:
  postgres_data:
It defines a service called postgres that uses the official postgres Docker image with the tag 17.0-alpine. Containers created from it will have the name postgres_container. It loads the environment variable file .env.db and sets the variables defined in that file inside the container. It maps the host port specified by the POSTGRES_HOST_PORT environment variable to the container port 5432. It creates a postgres_data volume to persist database data between container restarts. Finally, it mounts the initdb.sql script, which runs on container initialization; if the data directory already contains a database, it won't run again. The script creates the needed databases in the database instance, in our case only the inventory database.
initdb.sql file
CREATE DATABASE inventory_db;
Environment variables in the .env.db file
POSTGRES_USER=service
POSTGRES_PASSWORD=inventory_psw
Running the Postgres container
I used Debian on WSL, so I needed to set the POSTGRES_HOST_PORT environment variable on my host, as Docker Compose didn't load it into its context from the .env.db file
export POSTGRES_HOST_PORT=5432
Now we can easily create a container and run the Postgres database instance
docker compose -f docker-compose-db.yaml up
After this we can try to run the inventory service and see if it starts without errors and can connect to the database.
cd services/inventory && go run cmd/main.go
We can verify that the products table was created and the test data was inserted in development mode. We can go inside the container and run psql there
docker exec -it postgres_container bash
psql -U $POSTGRES_USER -d inventory_db
\dt
SELECT * FROM products;
After this we can test that the RPCs and the underlying database queries still work. For now, we need to test manually by sending gRPC requests to the API with e.g. grpcurl or Postman. However, we will write some high quality unit and integration tests in a later part to automate this step.
This is just one way to run Postgres locally. This setup makes it possible to run multiple databases in the same Postgres container, so we don’t have to run a separate container for each database. In production we might use a managed database service like AWS RDS, so we don’t have to handle the complexities of managing database servers ourselves, which speeds up development.
Summary
In this part we added a PostgreSQL database for the inventory service and accessed it with GORM. We were able to easily create a new database implementation and use it without changing the service’s business logic, thanks to the hexagonal architecture. We were able to run and test the database with Docker Compose.
In the next part, we will write Dockerfiles for the services so we can build Docker images of them. With the images, we can run the services in containers. This will be needed in an upcoming part, where we will be deploying the project to a local Kubernetes cluster.
Continue reading
- Next part -> Part 2: Docker images and containers