How To Create Golang REST API: Project Layout Configuration [Part 1]

Dan Stenger
4 min read · Apr 22, 2020

During the past couple of years I have worked on a few projects written in Go. I noticed that the biggest challenge developers face is the lack of constraints or standards when it comes to project layout. I’d like to share some findings and patterns that have worked best for me and my team. For better understanding, I’ll go through the steps of creating a simple REST API.

I’ll start with a layout that I hope will one day become a standard. You can read more about it here. Let’s name the project boilerplate:

mkdir -p \
$GOPATH/src/ \
$GOPATH/src/ \
$GOPATH/src/

pkg/ will contain common/reusable packages, cmd/ the programs, db/scripts the database-related scripts, and scripts/ general-purpose scripts.
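The repository path segments in the commands above were lost, so here is a sketch of the intended layout, assuming a project named boilerplate (the name the article uses later):

```shell
# Recreate the layout described above; "boilerplate" is an assumed
# project name standing in for the lost path segments.
GOPATH="${GOPATH:-$HOME/go}"
mkdir -p \
  "$GOPATH/src/boilerplate/pkg" \
  "$GOPATH/src/boilerplate/cmd" \
  "$GOPATH/src/boilerplate/db/scripts" \
  "$GOPATH/src/boilerplate/scripts"
```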

No application is built without Docker these days. It makes everything much easier, so I’ll use it too. I’ll try not to over-complicate things for starters and only add what’s necessary: a persistence layer in the form of a PostgreSQL database, and a simple program that establishes a connection to the database, runs in a Docker environment, and recompiles on each source code change. Oh, I almost forgot: I’ll also be using Go modules! Let’s get started:

$ cd $GOPATH/src/ && \
go mod init

Let’s set up the local development environment.

I don’t want to install and set up a PostgreSQL database locally, nor do I want any other project contributor to have to. Let’s automate this step with docker-compose. The content of the docker-compose.yml file:
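The original embedded file was lost, so here is a hypothetical docker-compose.yml consistent with the text; service names, the image tags, and the entrypoint.sh filename are assumptions:

```yaml
# Hypothetical docker-compose.yml; image tags and entrypoint.sh are assumed.
version: "3.7"
services:
  pg:
    image: postgres:11.5-alpine
    env_file: .env
    ports:
      - "5432:5432"
    volumes:
      # host scripts run once on first container start
      - ./db/scripts:/docker-entrypoint-initdb.d/
  api:
    image: golang:1.14
    working_dir: /app
    env_file: .env
    volumes:
      - .:/app
    depends_on:
      - pg
    entrypoint: ["/bin/bash", "./scripts/entrypoint.sh"]
```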

I will not explain how docker-compose works here, but it should be pretty much self-explanatory. There are two interesting things to point out. The first is ./db/scripts:/docker-entrypoint-initdb.d/ in the pg service. When I run docker-compose, the pg service will take the bash scripts from the host’s ./db/scripts folder, place them in the pg container, and run them there. Currently there will be only one script, which ensures that the test database gets created. Let’s create that script file:

touch ./db/scripts/

Let’s see what that script looks like:
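The original script was lost; a minimal sketch follows, assuming the filename create-test-db.sh. The postgres image runs every script in /docker-entrypoint-initdb.d/ on first boot, and the psql call would execute inside the container (here it is only syntax-checked):

```shell
# Write a hypothetical init script; the real filename was lost.
mkdir -p ./db/scripts
cat > ./db/scripts/create-test-db.sh <<'EOF'
#!/bin/bash
set -e
# create the test database alongside the main one defined in .env
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" <<EOSQL
CREATE DATABASE ${POSTGRES_DB}test;
EOSQL
EOF
bash -n ./db/scripts/create-test-db.sh  # syntax check only; psql runs in the container
```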

The second interesting thing is entrypoint: ["/bin/bash", "./scripts/"]. It installs CompileDaemon in a way that leaves go.mod unaffected, so the same package is not picked up and installed in production later. It also builds our application, starts listening for any changes made to the source code, and re-compiles it. It looks like this:
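The entrypoint script itself was lost. A sketch under assumptions follows: the filename entrypoint.sh and the build paths are hypothetical, and installing from /tmp is one period-appropriate way to keep go.mod untouched (the script is only syntax-checked here; it runs for real in the api container):

```shell
# Write a hypothetical entrypoint script; filename and paths are assumed.
mkdir -p ./scripts
cat > ./scripts/entrypoint.sh <<'EOF'
#!/bin/bash
set -e
# install CompileDaemon from outside the module so go.mod stays untouched
(cd /tmp && GO111MODULE=on go get github.com/githubnemo/CompileDaemon)
# build the API, watch the source tree, and rebuild/restart on every change
CompileDaemon --build="go build -o api ./cmd/api" --command="./api"
EOF
bash -n ./scripts/entrypoint.sh  # syntax check only
```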

Next, I’ll create a .env file in the root of our project, which will hold all environment variables for local development:
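The file contents were lost; here is a plausible version. The article confirms the POSTGRES_ prefix, the password “password”, and the database name boilerplate, while the host and port entries are assumptions:

```
# Hypothetical .env; host/port entries are assumptions
POSTGRES_USER=postgres
POSTGRES_PASSWORD=password
POSTGRES_DB=boilerplate
POSTGRES_HOST=pg
POSTGRES_PORT=5432
```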


All variables with the POSTGRES_ prefix will be picked up by the pg service in docker-compose.yml, which will create the database with the relevant details. In the next step I’ll create a config package that will load, persist and operate with environment variables:

So what’s happening here? The config package has one public Get function. It creates a pointer to a Config instance, tries to read variables from command-line arguments, and uses env vars as default values. It’s the best of both worlds, as it makes our config very flexible. The Config instance has two methods for getting the dev and test DB connection strings. Next, let’s create a db package that will establish and persist the connection to the database:
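The config gist was lost, so here is a sketch of the described behavior. Flag names, field names, and the connection-string format are assumptions; only the flags-with-env-defaults pattern and the two connection-string methods come from the text:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// Config holds the application settings; field names are assumptions.
type Config struct {
	dbUser string
	dbPswd string
	dbName string
	dbHost string
	dbPort string
}

// getenv returns the env var value, or a fallback when it is unset.
func getenv(key, fallback string) string {
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	return fallback
}

// Get reads settings from command-line flags, using env vars as defaults,
// so either mechanism can configure the program.
func Get() *Config {
	conf := &Config{}
	flag.StringVar(&conf.dbUser, "dbuser", getenv("POSTGRES_USER", "postgres"), "DB user name")
	flag.StringVar(&conf.dbPswd, "dbpswd", getenv("POSTGRES_PASSWORD", "password"), "DB password")
	flag.StringVar(&conf.dbName, "dbname", getenv("POSTGRES_DB", "boilerplate"), "DB name")
	flag.StringVar(&conf.dbHost, "dbhost", getenv("POSTGRES_HOST", "pg"), "DB host")
	flag.StringVar(&conf.dbPort, "dbport", getenv("POSTGRES_PORT", "5432"), "DB port")
	flag.Parse()
	return conf
}

// GetDBConnStr returns the dev database connection string.
func (c *Config) GetDBConnStr() string {
	return c.getDBConnStr(c.dbName)
}

// GetTestDBConnStr returns the test database connection string.
func (c *Config) GetTestDBConnStr() string {
	return c.getDBConnStr(c.dbName + "test")
}

func (c *Config) getDBConnStr(dbname string) string {
	return fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable",
		c.dbUser, c.dbPswd, c.dbHost, c.dbPort, dbname)
}

func main() {
	conf := Get()
	fmt.Println(conf.GetDBConnStr())
	fmt.Println(conf.GetTestDBConnStr())
}
```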

Here I introduce another third-party package, which you can read more about here. Again, there’s a public Get function that accepts a connection string, establishes a connection to the database, and returns a pointer to a DB instance. Access to the database and program configuration is needed all the time, across the whole application. For easy dependency injection I’ll create another package that will assemble all our mandatory building blocks.
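Since the db gist and the driver package name were lost, here is a stdlib-based sketch of the described Get function. The "postgres" driver name is an assumption; the real db package would import the (unnamed) third-party driver for side effects so that sql.Open can find it:

```go
package main

import (
	"database/sql"
	"fmt"
)

// DB wraps *sql.DB so the rest of the application depends on our own type.
type DB struct {
	Client *sql.DB
}

// Get opens a connection pool for the given connection string and verifies
// it with a ping. A postgres driver is assumed to be registered via a
// blank import in the real package.
func Get(connStr string) (*DB, error) {
	client, err := sql.Open("postgres", connStr)
	if err != nil {
		return nil, err
	}
	if err := client.Ping(); err != nil {
		return nil, err
	}
	return &DB{Client: client}, nil
}

// Close releases the underlying connection pool.
func (d *DB) Close() error {
	return d.Client.Close()
}

func main() {
	// No driver is registered in this self-contained sketch, so Get fails fast:
	if _, err := Get("postgres://pg:5432/boilerplate"); err != nil {
		fmt.Println("connection failed:", err)
	}
}
```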

There’s a public Get function again (remember, consistency is key!). It returns a pointer to our Application instance, which holds our configuration and access to the database.
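A sketch of that assembly package follows; every name here is an assumption standing in for the config and db packages built earlier:

```go
package main

import "fmt"

// Config and DB are placeholders for the real config.Config and db.DB types.
type Config struct{}
type DB struct{}

// Application bundles every mandatory dependency for easy injection.
type Application struct {
	Cfg *Config
	DB  *DB
}

// Get assembles the building blocks; in the real package it would call
// config.Get() and db.Get(cfg.GetDBConnStr()) and propagate any error.
func Get() (*Application, error) {
	cfg := &Config{}
	db := &DB{}
	return &Application{Cfg: cfg, DB: db}, nil
}

func main() {
	app, err := Get()
	if err != nil {
		panic(err)
	}
	fmt.Printf("application assembled: %+v\n", app)
}
```

Handlers can then receive a single *Application instead of separate config and db arguments.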

I’d like to add another service that will guard the application: it listens for program termination signals and performs cleanup, such as closing the database connection:

So exithandler has a public Init function that accepts a callback function, which will be invoked when the program exits unexpectedly or is terminated by the user. Now that all the basic building blocks are in place, I can finally put them to work:

There’s one more third-party package introduced here, which loads env vars from the .env file created earlier. The program gets a pointer to the application that holds the config and db connection, and listens for any interruptions in order to perform a graceful shutdown. Time for action:
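The main.go gist was also lost; the wiring described above could be sketched like this, with the package calls in the comments (the .env loader, application.Get, exithandler.Init) being assumptions standing in for the lost code:

```go
package main

import (
	"fmt"
	"log"
)

// run wires the building blocks together in the order the article describes.
func run() error {
	// 1. load .env into the process environment (third-party loader)
	// 2. app, err := application.Get()            — config + db connection
	// 3. exithandler.Init(func() { app.DB.Close() }) — graceful shutdown
	fmt.Println("boilerplate API starting")
	return nil
}

func main() {
	if err := run(); err != nil {
		log.Fatal(err)
	}
}
```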

$ docker-compose up --build

OK, now that the app is running, I want to ensure I have two databases at my disposal. I’ll list all running docker containers by typing:

$ docker container ls

I can locate the pg service name in the NAMES column. In my case docker has named it boilerplate_pg_1. I’ll connect to it by typing:

$ docker exec -it boilerplate_pg_1 /bin/bash

Now that I’m inside the pg container, I’ll run the psql client:

$ psql -U postgres -W

The password, as per the .env file, is just “password”. The .env file is also used by the pg service to create the boilerplate database, and the custom script from the /db/scripts folder was responsible for creating the boilerplatetest database. Let’s make sure it’s all according to plan. Type \l to list all databases.

And sure enough, I have the boilerplate and boilerplatetest databases ready to work with.

I hope you have learned something useful. In the next post I’ll go through creating the actual server, with routes, middleware and handlers in place. You can also see the whole project here.



Dan Stenger

Software engineer focusing on simplicity and reliability. Go and functional programming enthusiast.