How to orchestrate your Django application with Kubernetes

Do you have an application built with Django and PostgreSQL that you’d like to run on Kubernetes?

If so, you’re in luck! In this tutorial, you’ll learn how to orchestrate your Django application with Kubernetes. Because the application is split across multiple containers (Django and PostgreSQL), it can be difficult to ensure all the parts work together. This tutorial will demystify all that.

To get started, you’ll Dockerize your application, push it to Docker Hub, then pull it into Kubernetes for orchestration and management. For the Django project, to keep things simple, you’ll build a lead management application. If you already have an application you’re working with, you can skip that step.

Kubernetes is the most prominent container orchestration tool: it automates deploying, scaling, and managing containerized applications, which is why so many developers invest in learning it.

Prerequisites

  • Basic understanding of Docker and Kubernetes
  • minikube and kubectl installed
  • VirtualBox 5.2 or higher, or a recent version of Docker

Build a Django application

As mentioned earlier, you’ll build a lead management application. That said, we’ll only work with the models and use the Django admin for all the testing. First, we’ll create the Lead model with its required fields, then update the admin.py file so that we can access the model we just created from the admin panel.

To get started, create a folder for your project and cd into it. Run the following command to start a project, then start a Django app:

django-admin startproject project . && python3 manage.py startapp app
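
If the commands succeed, your project layout should look roughly like this (__init__.py files omitted; details may vary slightly between Django versions):

manage.py
project/
    settings.py
    urls.py
    asgi.py
    wsgi.py
app/
    admin.py
    apps.py
    migrations/
    models.py
    tests.py
    views.py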

Add the app you just created to the INSTALLED_APPS in your settings.py file.

INSTALLED_APPS = [
    ...
    'app',
]

Update ALLOWED_HOSTS to accept all hosts, because Kubernetes assigns pod IPs and hostnames dynamically, and requests can reach the Django application through any of them.

...
ALLOWED_HOSTS = ["*"]
...

Now, go to app/models.py and paste the following code to create the Lead model and initialize its fields. We’ll use the default Django auth system for authentication.

from django.db import models
from django.contrib.auth.models import User

class Lead(models.Model):
    name = models.CharField(max_length=100)
    email = models.EmailField(max_length=100, unique=True)
    details = models.CharField(max_length=500, blank=True)
    owner = models.ForeignKey(
        User, related_name="leads", on_delete=models.CASCADE, null=True)
    created_at = models.DateTimeField(auto_now_add=True)

For your model to show up in your admin panel, paste the following code in app/admin.py:

from django.contrib import admin
from .models import Lead

admin.site.register(Lead)

Use PostgreSQL with Django

Django makes it easy to switch databases; you just need to make some adjustments in the settings.py file.

First, install django-environ, a tool that will help you conceal your PostgreSQL database details. You will then update the DATABASES dictionary with 'default': env.db() and put a URL of the form psql://<user>:<password>@<host>:<port>/<database-name> in a .env file. This gives your Django application a database URL it can use to connect to your PostgreSQL database.

You can do this by adding the code below to the settings.py file:

import environ
import os

env = environ.Env(
    # set casting, default value
    DEBUG=(bool, False)
)

# read the .env file at the project root
environ.Env.read_env(os.path.join(BASE_DIR, '.env'))

...
DATABASES = {
    # read os.environ['DATABASE_URL'] and raises
    # ImproperlyConfigured exception if not found
    'default': env.db(),

}

Next, create a new file named .env and paste the following text. Update the values there to your preferred values.

# Database Settings
# DATABASE_URL = psql://<user>:<password>@<host>:<port>/<database-name>
DATABASE_URL = psql://khabdrick:secure-password@postgres:5432/leads
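
As an optional sanity check, you can confirm that settings.py loads and that the .env file is being read. Assuming you’re in the project root and have the requirements installed locally, run Django’s system checks:

python3 manage.py check

If DATABASE_URL were missing or malformed, env.db() would raise an ImproperlyConfigured error here.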

Dockerize the application and push it to Docker Hub

Docker Hub is a platform where anyone can host their Docker images for sharing or collaboration. In this tutorial, we’re using it to store our Django image so that it can be pulled by Kubernetes. To start using Docker Hub, you have to create an account. Once that’s done, you can follow along with this tutorial.

To Dockerize your Django application, create a file at the root of your project with the name Dockerfile and paste the code below:

FROM python:3.7
# Send stdout and stderr straight to the terminal instead of buffering them
ENV PYTHONUNBUFFERED=1
WORKDIR /django-postgres-kube
COPY . .

RUN pip install -r requirements.txt

EXPOSE 8000

CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

In the Dockerfile above, you can see that we are installing dependencies from the requirements.txt file, so let’s go ahead and create that.

Create a new file named requirements.txt at the root of your application and paste the text below. psycopg2-binary is the PostgreSQL adapter; it bundles the binaries needed to connect your Django application to PostgreSQL.

Django==3.2
psycopg2-binary==2.9.3
django-environ==0.9.0
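
Optionally, you can also add a .dockerignore file at the project root so local artifacts don’t get copied into the image. This is a minimal sketch; leaving out .env assumes the container will receive DATABASE_URL from its environment (which the Kubernetes manifest later in this tutorial provides) rather than from a baked-in file:

.env
__pycache__/
*.pyc
.git/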

Next, authenticate with Docker Hub locally by running the following command, substituting your own username and password:

docker login -u <username> -p <password>

Now you can build the application image by running the following command. The command follows this pattern: docker build -t <dockerhub_username>/<app-name>:<tag> <path/to/docker-file>, so update the command below with your own details.

docker build -t khabdrick/leads:ver1 .
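
Before pushing, you can sanity-check the image locally. The one-liner below (using this tutorial’s image tag) only verifies that the Python dependencies import cleanly inside the container; it doesn’t need a running database:

docker run --rm khabdrick/leads:ver1 python -c "import django, psycopg2, environ; print(django.get_version())"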

Now, push your app to Docker Hub by running the following command:

docker push khabdrick/leads:ver1

Set up minikube

minikube is a tool that runs a single-node Kubernetes cluster locally, which lets you set up a cluster quickly. To use minikube, you need a driver. In this tutorial, you’ll use either VirtualBox or Docker.

You can create a cluster with minikube using VirtualBox by running the following command:

minikube start --driver=virtualbox

If you want to use Docker, run the following command:

minikube start --driver=docker
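
Whichever driver you choose, you can confirm the cluster is up before moving on; the node should report a Ready status:

minikube status
kubectl get nodes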

Set up PostgreSQL on Kubernetes

In this section, you’ll learn how to create the deployment, service, and storage for PostgreSQL on Kubernetes. For this, we’ll pull an official PostgreSQL image from Docker Hub, configure it with the same credentials that were used in the Django application, and then create a pod. You’ll also create a PersistentVolume (PV) and a PersistentVolumeClaim (PVC): the PV sets aside storage resources in the cluster for PostgreSQL, and the PVC requests a portion of that storage from the PV.

To get started, create a folder where all the Kubernetes manifest files will live. We’ll create the manifest file for storage first. Create a file named pg-storage.yaml (the name is arbitrary) and paste the following YAML data for PV and PVC:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100M # total capacity for this persistent volume
  accessModes:
    - ReadWriteOnce
  hostPath: # where the persistent volume is created on the Kubernetes node (needs to be /data for minikube)
    path: "/data"
---
apiVersion: v1
kind: PersistentVolumeClaim #claim a portion of persistent volume
metadata:
  labels:
    app: postgres
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce # mount as read-write by a single node
  resources:
    requests:
      storage: 100M # storage capacity consumed from the persistent volume

To make Kubernetes aware of this file, you need to apply it and indicate that you’re applying a file (-f). You can apply this by navigating to the directory where you stored your manifest file and running the following command:

kubectl apply -f pg-storage.yaml
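
You can verify that the claim bound to the volume; both postgres-pv and postgres-pv-claim should show a Bound status:

kubectl get pv,pvc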

Now, we’ll create the deployment and the service for PostgreSQL. Kubernetes will pull the postgres:10.3 Docker image from Docker Hub and use it as the base for the deployment. POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB are values required by the Postgres image, and they must match the ones you used in your Django application. If you want to go live, you can take extra precautions to secure these credentials, for example by moving them into a Kubernetes Secret.

To start, create a file with the name pg-deployment.yaml and paste the data below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      name: postgres
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.3
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              value: "khabdrick"
            - name: POSTGRES_PASSWORD
              value: "secure-password"
            - name: POSTGRES_DB
              value: "leads"
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-volume-mount
      volumes:
        - name: postgres-volume-mount
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
# START Service
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: ClusterIP
  ports:
    - port: 5432
  selector:
    name: postgres
# END SERVICE

Apply this manifest as you did previously:

kubectl apply -f pg-deployment.yaml

After a few minutes, you can run kubectl get pods, and you’ll see that the Postgres pod is running successfully.

If yours isn’t running, run the command below and see what’s going on so that you can debug from there:

kubectl describe pod pod-name
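
You can also inspect the container logs; a healthy Postgres pod should eventually log a line like "database system is ready to accept connections":

kubectl logs pod-name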

Create deployment and services for Django

Here, you’ll create a deployment for the Django application using the Docker image you pushed to Docker Hub, then create a service to direct traffic to the pod.

To start, create a file named django-app.yaml then paste the YAML data below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: leads
  labels:
    app: leads
spec:
  replicas: 1
  selector:
    matchLabels:
      name: leads
  template:
    metadata:
      labels:
        name: leads
    spec:
      containers:
        - name: leads
          image: khabdrick/leads:ver1
          imagePullPolicy: Always
          ports:
            - containerPort: 8000
          env:
            - name: POSTGRES_USER
              value: "khabdrick"
            - name: POSTGRES_PASSWORD
              value: "secure-password"
            - name: POSTGRES_DB
              value: "leads"
            - name: DATABASE_URL
              value: psql://$(POSTGRES_USER):$(POSTGRES_PASSWORD)@postgres:5432/$(POSTGRES_DB) #postgres here must match the PostgreSQL service name you created earlier

---
# START Service
apiVersion: v1
kind: Service
metadata:
  name: leads
  labels:
    app: leads
spec:
  type: LoadBalancer
  ports:
    - port: 80 #port that the service exposes
      targetPort: 8000 # port on the pod where the app listens
  selector:
    name: leads
# END SERVICE

Now, apply this by running the command below:

kubectl apply -f django-app.yaml
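
You can watch the rollout finish and confirm the service exists:

kubectl rollout status deployment/leads
kubectl get svc leads

Note that on minikube, the EXTERNAL-IP of a LoadBalancer service typically stays pending, which is why the next step uses port forwarding.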

You can now test the application using the services. However, the services are internal to minikube, so you won’t be able to reach them from your browser. You can work around this by using the port-forward command to map the service’s internal port to a local one. To do that, run the command below:

kubectl port-forward svc/leads 8000:80

Running the command above gives you a local host and port through which the service can be reached.

When you open 127.0.0.1:8000 in your browser, you’ll see that your application is running.

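As an alternative to port forwarding, minikube can expose the LoadBalancer service itself and print a locally reachable URL:

minikube service leads --url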

Run commands in pods

Your application is running, but you haven’t run the database migrations or created a superuser yet. To do this, you need to open an interactive Bash shell so that you can run commands inside the Django pod. The command requires the pod’s name, which you can get by running kubectl get pods.

Open an interactive Bash shell by running the command below:

kubectl exec -it pod-name -- bash

Now, run the database migrations with the following command:

python manage.py makemigrations && python manage.py migrate

Create a superuser by running the command below and filling in the prompt:

python manage.py createsuperuser

Now, you can use the credentials you entered above to access the admin panel of your application (http://127.0.0.1:8000/admin/). Once all that’s done, you’ll be able to access the application and use it however you like.

Scale your Django application

When your site starts getting a lot of visitors, you may need to scale your application so that two or more pods serve it. In Kubernetes, you can scale up by editing the number of replicas in the applied deployment to the number of pods you want handling your application.

You can do this by running the command below, editing the replicas value (for example, to 3), and saving by pressing CTRL+X, then Y, then Enter.

KUBE_EDITOR=nano kubectl edit deployment leads
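
If you’d rather not open an editor, you can make the same change imperatively; this sets the replica count on the deployment directly:

kubectl scale deployment leads --replicas=3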

Run the following command, and you’ll see that you now have three pods running your Django application. This means that when one pod gets a lot of hits, Kubernetes will spread the workload across the other pods.

kubectl get pods

Learn more about deploying apps with Kubernetes

In this tutorial, you went from Dockerizing your application and pushing it to Docker Hub to pulling it into Kubernetes for orchestration alongside PostgreSQL. You also learned how to scale an application to run on several pods.

Take the knowledge you’ve gained here a step further by adding other tools like Redis or Celery. By doing so, you’ll come across new complexities, and you can learn how to manage them effectively.

This blog post was created as part of the Mattermost Community Writing Program and is published under the CC BY-NC-SA 4.0 license. To learn more about the Mattermost Community Writing Program, check this out.

I am a software developer with a passion for technical writing and open source contributions. My area of expertise is full-stack web development and DevOps.