FROM python:3.10-bullseye
RUN apt-get update \
&& apt-get install -y
WORKDIR /web
COPY requirements.txt .
RUN pip install --upgrade pip
RUN pip install --upgrade wheel
RUN pip install -r requirements.txt
COPY . .
Below is a concise, step-by-step explanation of your Dockerfile with the additional details we discussed:
FROM python:3.10-bullseye
- Base Image:
  - Uses the official Python 3.10 Docker image based on Debian Bullseye.
  - Provides Python 3.10 and a minimal Debian system.
RUN apt-get update \
&& apt-get install -y
- Update & Install System Packages:
  - Updates the package lists (apt-get update).
  - Installs packages specified after -y.
  - Currently, nothing is listed after -y, so this step is effectively a no-op (it does nothing beyond refreshing the package lists).
  - You’d typically add system packages here if needed, e.g., apt-get install -y curl.
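For illustration, if your app did need system packages (curl and build-essential here are just examples, not requirements of your project), the step might become something like:

RUN apt-get update \
    && apt-get install -y --no-install-recommends curl build-essential \
    && rm -rf /var/lib/apt/lists/*

Cleaning up /var/lib/apt/lists afterwards keeps the package index out of the final image layer, which keeps the image smaller.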
WORKDIR /web
- Set Working Directory:
  - All subsequent instructions (RUN, COPY, etc.) run relative to /web.
  - When you later start a container from this image, the default directory is /web.
COPY requirements.txt .
- Copy Requirements File:
  - Copies requirements.txt from the build context (the directory you ran docker build from) into the container’s /web folder.
  - Copying it before installing Python dependencies lets pip access the file in the next step.
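For reference, “build context” simply means the directory you hand to docker build; building this image might look like the following (the -t tag name is illustrative):

docker build -t myweb .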
RUN pip install --upgrade pip
RUN pip install --upgrade wheel
RUN pip install -r requirements.txt
- Install Python Dependencies:
  - Upgrades pip and wheel to the latest versions.
  - Installs all packages listed in requirements.txt.
  - Having requirements.txt copied first allows Docker to cache these layers: if requirements.txt doesn’t change, Docker reuses that cache on later builds.
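As an optional refinement (equivalent in effect, just fewer image layers), the three pip commands could be collapsed into a single RUN instruction:

RUN pip install --upgrade pip wheel \
    && pip install -r requirements.txt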
COPY . .
- Copy All Project Files:
  - Copies everything from your local project directory (the build context) into /web in the container.
  - This includes your app’s Python code, any configuration files, etc.
  - By doing this after installing requirements, you only re-run the pip install step if requirements.txt changes.
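One caveat: COPY . . also brings in anything else sitting in the build context. A .dockerignore file (not part of your project as shown; this is a suggested sketch) keeps unneeded files out of the image:

# .dockerignore (suggested)
.git
__pycache__/
*.pyc
.env

Excluding .env keeps secrets out of the image itself; Compose still reads it from the host via env_file, as shown in the docker-compose.yml below.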
Summary
- FROM python:3.10-bullseye starts with Python 3.10 on Debian.
- RUN apt-get update && apt-get install -y is currently a placeholder for installing system packages.
- WORKDIR /web sets /web as the default container working directory.
- COPY requirements.txt . and the subsequent pip install commands install your Python dependencies first, an efficient layering practice.
- COPY . . brings in the rest of your code.

These steps together produce a Docker image ready to run your Python application using Python 3.10 and all the dependencies in requirements.txt.
Below is the docker-compose.yml file followed by a line-by-line explanation of what each section does. This file orchestrates two services—web (your Django app) and postgres (the PostgreSQL database)—along with a named volume for database data persistence.
version: '3'

services:
  web:
    build:
      context: .
    working_dir: '/web'
    command: >
      bash -c "python manage.py migrate &&
      python manage.py runserver 0.0.0.0:8055"
    ports:
      - '8055:8055'
    volumes:
      - .:/web/fetch_c1
    env_file: .env
    depends_on:
      - postgres
    links:
      - postgres
    extra_hosts:
      - "host.docker.internal:host-gateway"

  postgres:
    image: postgres:15.1-bullseye
    restart: unless-stopped
    env_file: .env
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
    driver: local
Top-Level Directives
- version: '3' specifies the Docker Compose file format version.
- services: defines the individual services (containers) you want to run together (in this case, web and postgres).
- volumes: declares named volumes that can be shared or persisted among containers (below we have pgdata).
web Service

web:
  build:
    context: .
  working_dir: '/web'
  command: >
    bash -c "python manage.py migrate &&
    python manage.py runserver 0.0.0.0:8055"
  ports:
    - '8055:8055'
  volumes:
    - .:/web/fetch_c1
  env_file: .env
  depends_on:
    - postgres
  links:
    - postgres
  extra_hosts:
    - "host.docker.internal:host-gateway"
- build: context: . tells Compose to build the image from the current directory, using the Dockerfile found there. This creates a custom image for the web service.
- working_dir: '/web' sets /web as the working directory for commands run in the container (such as the command below), matching the WORKDIR already set in the Dockerfile; relative paths resolve against /web.
- command: > runs a bash command sequence: python manage.py migrate (applies Django migrations to set up/update the database), then python manage.py runserver 0.0.0.0:8055 (starts Django’s development server on port 8055). The > is YAML folded-scalar syntax: the indented lines below it fold into a single string, so the two commands end up joined by &&.
- ports: '8055:8055' maps container port 8055 to the host’s port 8055, making the Django app accessible at http://localhost:8055.
- volumes: .:/web/fetch_c1 mounts the current directory (on the host machine) into /web/fetch_c1 inside the container. Useful for live development: changes you make locally are reflected inside the container without rebuilding the image.
- env_file: .env loads environment variables (e.g., Django settings, secrets, database credentials) from the .env file on your host machine into the container (see the sample sketch after this list).
- depends_on: postgres ensures the postgres container starts before web. Note that this only orders container startup; it does not wait for PostgreSQL to be ready to accept connections, so Django’s first connection may still need a retry.
- links: postgres is an older method for connecting containers by hostname. Compose automatically provides a shared network, so this is largely redundant, but links can still enforce a known hostname for legacy compatibility.
- extra_hosts: "host.docker.internal:host-gateway" allows the container to resolve the host machine’s network interface as host.docker.internal. This is especially handy on Linux, where Docker Desktop’s default DNS mapping might not be present.
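For reference, a minimal .env for this setup might look like the sketch below. The POSTGRES_* names are the actual variables the official postgres image reads; the Django-side names are hypothetical placeholders, since your settings module isn’t shown here:

# Read by the official postgres image
POSTGRES_DB=app_db
POSTGRES_USER=app_user
POSTGRES_PASSWORD=change-me
# Hypothetical names your Django settings might read
DJANGO_SECRET_KEY=replace-with-a-real-secret
DATABASE_HOST=postgres

Note that DATABASE_HOST can simply be the service name postgres, because Compose’s shared network resolves service names to container addresses.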
postgres Service

postgres:
  image: postgres:15.1-bullseye
  restart: unless-stopped
  env_file: .env
  volumes:
    - pgdata:/var/lib/postgresql/data
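In short: this service runs the official postgres:15.1 image, restarts automatically unless you stop it, reads its credentials from the same .env file, and persists its data in the pgdata named volume. If you wanted web to wait until PostgreSQL actually accepts connections (not just until the container starts), one option with recent versions of Docker Compose is a healthcheck plus the long form of depends_on; this is a sketch, not part of your current file:

postgres:
  image: postgres:15.1-bullseye
  healthcheck:
    # pg_isready ships with the postgres image; $$ passes a literal $ through Compose
    test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER"]
    interval: 5s
    timeout: 5s
    retries: 5

# ...and under the web service:
depends_on:
  postgres:
    condition: service_healthy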
Volumes

volumes:
  pgdata:
    driver: local

- pgdata is a named volume using the local driver, which stores the database files on the host system.
- This means that even if the postgres container is removed, the data remains in the pgdata volume.
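You can see this persistence from the host (by default, Compose prefixes the volume name with your project directory’s name, shown as a placeholder here):

docker volume ls
docker volume inspect <project>_pgdata

Also note that docker-compose down leaves named volumes in place, while docker-compose down -v removes them, and with them the database data.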
How It All Works Together
- Two Services: web (Django) and postgres (database).
- Automatic Networking: Compose creates a network so web can talk to postgres by service name.
- Persistence: Postgres data is stored in the pgdata volume, so data remains intact across container restarts and removals.
- Environment Variables: Both containers read from .env, centralizing sensitive credentials/config.
- Local Development: The local directory is mounted into the web container, so you can edit code on the host and see updates instantly.
This setup makes it easy to run your Django app and Postgres database together in a reproducible environment—no manual config needed for linking containers, exposing ports, or managing data.
Docker Compose is a tool for defining and running multi-container Docker applications.
It lets you manage the configuration of several containers (services) in one file, rather than remembering long docker run or docker build commands for each container.
What If We Didn’t Use Docker Compose?
Without Compose, you would build and run each container yourself with plain docker build and docker run commands.
Why This Is Less Convenient
- Multiple Commands: You have to run (and remember) separate, often lengthy commands for each container.
- No Shared Configuration: Each container’s ports, volumes, env variables, and links are scattered across separate commands.
- Orchestration: If you need to bring everything up or down, you must do so manually for each container.
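To make this concrete, here is roughly what running the same two containers by hand might look like (a sketch: the network and container names are illustrative):

# Create a shared network so the containers can reach each other by name
docker network create appnet

# Start Postgres with the env file and a named volume
docker run -d --name postgres --network appnet \
  --env-file .env \
  -v pgdata:/var/lib/postgresql/data \
  --restart unless-stopped \
  postgres:15.1-bullseye

# Build the Django image, then run it with the same options Compose encodes
docker build -t web .
docker run -d --name web --network appnet \
  --env-file .env \
  -p 8055:8055 \
  -v "$(pwd)":/web/fetch_c1 \
  --add-host host.docker.internal:host-gateway \
  web bash -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8055"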
Benefits of Docker Compose
- Single File: All your service configurations (ports, volumes, environment variables) are in one docker-compose.yml.
- One Command: You can run docker-compose up -d to start everything, or docker-compose down to stop and remove the containers (see the quick reference after this list).
- Synchronization: Compose handles container dependency ordering (e.g., depends_on: postgres) and sets up a shared network automatically.
- Scalability: You can scale services (e.g., multiple web containers) with a single command.
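A few everyday commands with this setup (the file dates from the docker-compose binary era; newer Docker also accepts docker compose):

docker-compose up -d          # build if needed, then start both services in the background
docker-compose logs -f web    # follow the Django container's logs
docker-compose down           # stop and remove the containers (named volumes are kept)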
So, while you could manually build and run each container with plain Docker commands, it becomes cumbersome—especially as the application grows or you have more containers. Docker Compose makes multi-container setups simpler and more maintainable.