Setting up a CI workflow with GitLab takes some time up front, but you'll be happy you did it the next time you need to push updates. I run everything on a personal server in my home. I found a lot of examples, but none of them worked for me; most involved cloud solutions, and I'm trying to move off of Digital Ocean (save me that $5). So here is how I set it up.
The Dockerfile
FROM python:3.8-slim-buster
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /code
COPY requirements.txt /code/
RUN buildDeps='gcc libc6-dev' \
&& set -x \
&& apt-get update && apt-get install -y $buildDeps --no-install-recommends \
&& rm -rf /var/lib/apt/lists/* \
&& pip install --upgrade pip setuptools wheel \
&& pip install -r requirements.txt \
&& apt-get purge -y --auto-remove $buildDeps
COPY . /code/
EXPOSE 8000
# Tell uWSGI where to find your wsgi file (change this):
ENV UWSGI_WSGI_FILE=existence_undefined/wsgi.py
# Base uWSGI configuration (you shouldn't need to change these):
ENV UWSGI_HTTP=:8000 UWSGI_MASTER=1 UWSGI_HTTP_AUTO_CHUNKED=1 UWSGI_HTTP_KEEPALIVE=1 UWSGI_LAZY_APPS=1 UWSGI_WSGI_ENV_BEHAVIOR=holy
# Number of uWSGI workers and threads per worker (customize as needed):
ENV UWSGI_WORKERS=2 UWSGI_THREADS=4
# uWSGI static file serving configuration (customize or comment out if not needed):
ENV UWSGI_STATIC_MAP="/static/=/www/some/dir/static/" UWSGI_STATIC_EXPIRES_URI="/static/.*\.[a-f0-9]{12,}\.(css|js|png|jpg|jpeg|gif|ico|woff|ttf|otf|svg|scss|map|txt) 315360000"
ENV UWSGI_STATIC_MAP="/media/=/www/some/dir/media/" UWSGI_STATIC_EXPIRES_URI="/media/.*\.[a-f0-9]{12,}\.(css|js|png|jpg|jpeg|gif|ico|woff|ttf|otf|svg|scss|map|txt) 315360000"
# Deny invalid hosts before they get to Django (uncomment and change to your hostname(s)):
# ENV UWSGI_ROUTE_HOST="^(?!localhost:8000$) break:400"
# Change to a non-root user (the user has to exist first, so create it here)
ARG APP_USER=appuser
RUN useradd --system ${APP_USER}
USER ${APP_USER}:${APP_USER}
# Uncomment after creating your docker-entrypoint.sh
# ENTRYPOINT ["/code/docker-entrypoint.sh"]
# Start uWSGI
CMD ["uwsgi", "--show-config"]
I started with 3.8-slim-buster because it's easier to get up and running with Django and Postgres than alpine (and it often ends up being a smaller image). The next two lines do the following:
PYTHONDONTWRITEBYTECODE:
If this is set to a non-empty string, Python won’t try to write .pyc files on the import of source modules.
PYTHONUNBUFFERED:
Force the stdout and stderr streams to be unbuffered. This option has no effect on the stdin stream.
Next, I create the working directory code and copy in the requirements.txt file.
WORKDIR /code
COPY requirements.txt /code/
The next section updates the package lists, installs the build dependencies (gcc and libc6-dev), and installs everything in requirements.txt, then purges the buildDeps (gcc and libc6-dev) to keep the image size small. It's all one RUN command because each RUN line adds a layer to the image.
WORKDIR /code
COPY requirements.txt /code/
RUN buildDeps='gcc libc6-dev' \
&& set -x \
&& apt-get update && apt-get install -y $buildDeps --no-install-recommends \
&& rm -rf /var/lib/apt/lists/* \
&& pip install --upgrade pip setuptools wheel \
&& pip install -r requirements.txt \
&& apt-get purge -y --auto-remove $buildDeps
Now it's time to copy the code and expose your port:
COPY . /code/
EXPOSE 8000
Warning: Use a docker network to avoid exposing ports.
Next up is uWSGI. That section of the Dockerfile is commented, so I won't explain it line by line here; see the comments and ask if you have any questions.
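One nice side effect of driving uWSGI entirely through UWSGI_* environment variables: uWSGI picks any of them up as config options, so you can tweak things at run time without rebuilding. A quick sketch (the image name mysite is a placeholder):
docker run --rm -p 8000:8000 -e UWSGI_WORKERS=4 -e UWSGI_THREADS=8 mysite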
The production docker-compose file
version: '3'
services:
web:
image: "${WEB_IMAGE}"
volumes:
- /www/some/dir/static/:/www/some/dir/static/
- /www/some/dir/media/:/www/some/dir/media/
depends_on:
- db
environment:
- DATABASE_HOST=db
env_file:
- ../../../some/dir/myenv.env
- .env
db:
image: postgres:12.0-alpine
restart: always
volumes:
- /www/some/dir/postgres/eu/:/var/lib/postgresql/data/
- /www/some/dir/postgres/backup/eu:/home/bak/
env_file:
- ../../../some/dir/myenv.env
nginx:
build: ./nginx
ports:
- 8XXX:80
depends_on:
- web
volumes:
- /www/some/dir/static/:/www/some/dir/static/
- /www/some/dir/media/:/www/some/dir/media/
Note: instead of exposing ports, I should be using a docker network. That's on the to-do list.
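For what it's worth, Compose already puts these services on a shared default network, and only nginx publishes a host port below, so web and db aren't directly reachable from outside. An explicit network would just make that intent visible. A minimal sketch (the name backend is made up):
services:
  web:
    networks:
      - backend
  db:
    networks:
      - backend
  nginx:
    networks:
      - backend
    ports:
      - 8XXX:80
networks:
  backend: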
web
The web service gets its image from GitLab, so set it to the environment variable ${WEB_IMAGE}:
image: "${WEB_IMAGE}"
The static and media folders are mapped to host volumes so they aren't lost when the container is spun down:
volumes:
- /www/some/dir/static/:/www/some/dir/static/
- /www/some/dir/media/:/www/some/dir/media/
The web container depends on the db (postgres) container:
depends_on:
- db
I suggest using environment variables or env files for security reasons. I'm using env files here: myenv.env has my database info, secret key, etc., and .env is used for GitLab deployment:
env_file:
- ../../../some/dir/myenv.env
- .env
For example, my settings.py:
SECRET_KEY = os.environ.get("SECRET_KEY", '7fe84r6@14!o1111111111111111111111111111^bq)c^')
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': os.getenv('DATABASE_NAME', 'some_name'),
'USER': os.getenv('DATABASE_USER', 'some_user'),
'PASSWORD': os.getenv('DATABASE_PASSWORD', None),
'HOST': os.getenv('DATABASE_HOST', 'db'),
'PORT': os.getenv('DATABASE_PORT', 5432),
}
}
and my myenv.env file:
SECRET_KEY=^1111111111111111111111@Y=P$-CU&Rnjerg9
DJANGO_SETTINGS_MODULE=existence_undefined.settings.default
POSTGRES_USER=some_special_user
POSTGRES_PASSWORD=some_super_password
POSTGRES_DB=some_super_db
DATABASE_NAME=some_super_db
DATABASE_USER=some_super_user
DATABASE_PASSWORD=some_super_password
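If you need to generate a fresh SECRET_KEY for that file, Django ships a helper you can call from the shell:
python3 -c 'from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())'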
db
Here I will use the alpine Postgres 12 image:
image: postgres:12.0-alpine
restart: always
With the following volumes mapped (one for the Postgres data and one for database backups):
volumes:
- /www/some/dir/postgres/eu/:/var/lib/postgresql/data/
- /www/some/dir/postgres/backup/eu:/home/bak/
And the same env file as the web service:
env_file:
- ../../../some/dir/myenv.env
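Nothing writes to that backup directory on its own, so here's a sketch of a manual dump into the mapped /home/bak/ folder (run it next to docker-compose.prod.yml; the filename pattern is my own invention, and you'd probably cron something like this):
docker-compose -f docker-compose.prod.yml exec db sh -c 'pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB" > /home/bak/backup_$(date +%F).sql'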
nginx
Nginx needs to be built as well (since it needs custom nginx.conf and uwsgi_params files), so it has its own Dockerfile:
Nginx Dockerfile
FROM nginx:1.17.4-alpine
EXPOSE 80
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d/
RUN mkdir -p /etc/nginx/uwsgi/
COPY uwsgi_params /etc/nginx/uwsgi/
nginx.conf
Customize the site name, the Django port (if not the 8000 exposed above), and the static/media directories.
# mysite_nginx.conf
# the upstream component nginx needs to connect to
upstream my_random_site_name {
# server unix:///path/to/your/mysite/mysite.sock; # for a file socket
server web:8000; # for a web port socket (we'll use this first)
}
# configuration of the server
server {
# the port your site will be served on
listen 80;
# the domain name it will serve for
server_name localhost; # substitute your machine's IP address or FQDN
charset utf-8;
# max upload size
client_max_body_size 75M; # adjust to taste
# Django media
location /media {
alias /www/some/dir/media; # your Django project's media files - amend as required
}
location /static {
alias /www/some/dir/static; # your Django project's static files - amend as required
}
# Finally, send all non-media requests to the Django server.
location / {
proxy_pass http://my_random_site_name;
include /etc/nginx/uwsgi/uwsgi_params; # only takes effect if you switch to uwsgi_pass (see below)
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
uwsgi_params file
No customization required here. Strictly speaking, nginx only consults these uwsgi_param directives when it talks to uWSGI over the native uwsgi protocol (uwsgi_pass); with the proxy_pass HTTP setup above they sit idle, but they're harmless to keep and become necessary if you ever switch (see the sketch after the file).
uwsgi_param QUERY_STRING $query_string;
uwsgi_param REQUEST_METHOD $request_method;
uwsgi_param CONTENT_TYPE $content_type;
uwsgi_param CONTENT_LENGTH $content_length;
uwsgi_param REQUEST_URI $request_uri;
uwsgi_param PATH_INFO $document_uri;
uwsgi_param DOCUMENT_ROOT $document_root;
uwsgi_param SERVER_PROTOCOL $server_protocol;
uwsgi_param REQUEST_SCHEME $scheme;
uwsgi_param HTTPS $https if_not_empty;
uwsgi_param REMOTE_ADDR $remote_addr;
uwsgi_param REMOTE_PORT $remote_port;
uwsgi_param SERVER_PORT $server_port;
uwsgi_param SERVER_NAME $server_name;
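And for reference, a sketch of what that switch to the native protocol could look like: change the container's UWSGI_HTTP=:8000 to UWSGI_SOCKET=:8000 (uWSGI reads that env var the same way) and swap the proxy_pass location for:
location / {
    include /etc/nginx/uwsgi/uwsgi_params;
    uwsgi_pass my_random_site_name;
}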
Nginx docker-compose section
Make sure the 3 files above are in a folder in the root directory, i.e. ./nginx.
The docker-compose file builds the nginx image from that folder:
build: ./nginx
Map whatever port you want to access the site on to nginx's port 80 (if this is your only site, you probably want 80:80, unless you are using a separate proxy).
ports:
- 8XXX:80
It depends on the web service, which it forwards traffic to:
depends_on:
- web
Serve the static files with nginx:
volumes:
- /www/some/dir/static/:/www/some/dir/static/
- /www/some/dir/media/:/www/some/dir/media/
Now, you can test it out with:
docker build -t $CONTAINER_NAME . # build web
docker-compose build # build nginx
docker-compose run --rm web python3 manage.py collectstatic --no-input # collect static files
Then
docker-compose up # start the service
You should be able to go to 127.0.0.1:80 (or whatever port you mapped in nginx) and see the site.
Note the period at the end of the docker build command.
Make sure this works before continuing.
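A quick smoke test from the host (swap in whatever port you mapped):
curl -I http://127.0.0.1:8XXX/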
GitLab CI
To set up the GitLab CI, first create a .gitlab-ci.yml file in your root directory. The boilerplate is as follows:
image:
name: docker/compose:1.25.4
entrypoint: [""]
services:
- docker:dind
variables:
DOCKER_HOST: tcp://docker:2375
DOCKER_DRIVER: overlay2
This tells GitLab which of their container images to run your jobs in (that way each build runs in its own environment); the docker:dind service provides the Docker daemon those jobs talk to.
Next, set what stages you plan to run:
stages:
- build
- test
- deploy
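A test stage is declared here, but I haven't shown a job for it. A minimal sketch of one could look like this; it only runs Django's system checks against the image pushed by the build stage, and a real test run would also need a postgres service wired in:
test:
  stage: test
  script:
    - docker pull $IMAGE:latest
    - docker run --rm $IMAGE:latest python3 manage.py check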
Now we need a before_script to set up the variables we will be using:
before_script:
- export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
- export WEB_IMAGE=$IMAGE:latest
- apk add --no-cache openssh-client bash
- chmod +x ./setup_env.sh
- bash ./setup_env.sh
- docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
The setup_env.sh script creates a .env file that holds your GitLab container registry login information. I'm not a big fan of passing this over a network and would like to find a different solution. The file looks like:
#!/bin/sh
echo CI_REGISTRY_USER=$CI_REGISTRY_USER >> .env
echo CI_JOB_TOKEN=$CI_JOB_TOKEN >> .env
echo CI_REGISTRY=$CI_REGISTRY >> .env
echo IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME >> .env
echo WEB_IMAGE=$IMAGE:latest >> .env
Next, we will build our image and push it to GitLab's container registry:
build:
stage: build
script:
- docker pull $IMAGE:latest || true
- docker build --cache-from $IMAGE:latest -t $IMAGE .
- docker push $IMAGE:latest
Now, the fun part, deploying...
deploy:
stage: deploy
script:
- 'which ssh-agent || apk add --no-cache openssh-client'
- eval $(ssh-agent -s)
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
- echo "$SSH_PRIVATE_KEY" | ssh-add -
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
- ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
- chmod +x ./deploy.sh
- scp -o StrictHostKeyChecking=no -r ./.env ./nginx ./docker-compose.prod.yml <username>@<ip>:$WEB_DIR/<sub_dir>
- ./deploy.sh
only:
- prod
Where deploy.sh looks like:
#!/bin/sh
ssh -o StrictHostKeyChecking=no username@ipaddress << 'ENDSSH'
cd web/eu
export $(cat .env | xargs)
docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
docker pull $IMAGE:latest
docker-compose -f docker-compose.prod.yml run --rm web python3 manage.py migrate
docker-compose -f docker-compose.prod.yml build
docker-compose -f docker-compose.prod.yml run --rm web python3 manage.py collectstatic --no-input
docker-compose -f docker-compose.prod.yml down --remove-orphans
docker-compose -f docker-compose.prod.yml up -d
ENDSSH
Now when you push code to GitLab, you should see the CI/CD pipeline kick off and deploy automatically!
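Once the pipeline is green, you can sanity-check the deployment on the server with:
docker-compose -f docker-compose.prod.yml ps
docker-compose -f docker-compose.prod.yml logs -f web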