# Deployment

## Standalone Worker
For traditional deployments, run one or more standalone worker processes. This is the most robust and scalable option.
```bash
# Run with default settings
python manage.py runworker

# Run with specific concurrency and queues
python manage.py runworker --concurrency 100 --queue high_priority --queue default

# Run with scheduler enabled
python manage.py runworker --scheduler

# Run a worker for a batch queue
python manage.py runworker --queue=batch_queue
```
### Scaling
It's safe to run multiple standalone worker instances. Both backends use atomic operations to prevent multiple workers from picking up the same task:
- Postgres: `SELECT ... FOR UPDATE SKIP LOCKED`
- Valkey: `BLMOVE`
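The guarantee these atomic claims provide can be illustrated with a toy simulation: several threads drain one shared queue, and the mutual exclusion around the pop (standing in for the row lock or atomic list move) ensures each task is claimed exactly once. This is a sketch of the property, not library code:

```python
import threading
from collections import deque

queue = deque(range(100))         # shared task queue
claimed = [[] for _ in range(3)]  # tasks each worker ends up with
lock = threading.Lock()           # stands in for the backend's atomic claim

def worker(worker_id):
    while True:
        with lock:                # only one worker can claim at a time
            if not queue:
                return
            task = queue.popleft()
        claimed[worker_id].append(task)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every task is processed exactly once, with no duplicates across workers.
assert sorted(t for tasks in claimed for t in tasks) == list(range(100))
```

Without the atomic claim, two workers could read the same head-of-queue task and process it twice; with it, the 100 tasks are partitioned cleanly across the three workers.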
Each worker has a unique ID. If a worker process is terminated uncleanly, its in-process tasks will be abandoned. On startup, a worker will attempt to rescue any tasks that were previously abandoned by a worker with the same ID.
### Health Checks (Kubernetes)
The worker can report its status by updating a file's modification time every 5 seconds. This is useful for Liveness Probes.
```bash
python manage.py runworker --health-check-file /tmp/worker_health
```
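The worker-side mechanism is straightforward: touch the file on a fixed interval so its mtime stays fresh, and let the probe reject any file older than a threshold. A minimal sketch of such a heartbeat loop (not django-vtasks's actual implementation; the `beats` parameter is added here only to bound the loop):

```python
import os
import time

def heartbeat(path, interval=5.0, beats=None):
    """Touch `path` on every beat so its mtime stays within `interval` of now.

    `beats` limits the loop for demonstration; the real worker runs until shutdown.
    """
    open(path, "a").close()       # ensure the file exists
    n = 0
    while beats is None or n < beats:
        os.utime(path, None)      # update atime/mtime to the current time
        n += 1
        if beats is None or n < beats:
            time.sleep(interval)
```

A liveness probe then only needs to compare the file's mtime against the current time, which is exactly what the shell one-liner below does with `stat -c %Y` and `date +%s`.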
Kubernetes Liveness Probe:
```yaml
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - 'test -f /tmp/worker_health && [ $(($(date +%s) - $(stat -c %Y /tmp/worker_health))) -lt 15 ]'
  initialDelaySeconds: 10
  periodSeconds: 10
```
## Embedded Worker (All-in-One)
For simpler deployments, run the worker inside your ASGI web server's event loop. This reduces the number of processes you need to manage.
### 1. Create an Embedded ASGI Entrypoint
```python
# myproject/asgi_embedded.py
import os

from django.core.asgi import get_asgi_application
from django_vtasks.asgi import get_worker_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

# Get the standard Django ASGI application
django_asgi_app = get_asgi_application()

# Wrap it with the worker application
application = get_worker_application(django_asgi_app)
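This style of wrapping relies on the ASGI lifespan protocol: the worker starts as a background task on `lifespan.startup` and is cancelled on `lifespan.shutdown`, while all other scopes pass through to Django untouched. The actual `get_worker_application` may differ; the sketch below (with the hypothetical names `make_worker_wrapper` and `run_worker`) only illustrates the pattern:

```python
import asyncio

def make_worker_wrapper(inner_app, run_worker):
    """Wrap an ASGI app so `run_worker` (a long-running coroutine function)
    starts on 'lifespan.startup' and is cancelled on 'lifespan.shutdown'."""
    state = {"task": None}

    async def app(scope, receive, send):
        if scope["type"] != "lifespan":
            # HTTP/WebSocket traffic goes straight to the wrapped app.
            await inner_app(scope, receive, send)
            return
        while True:
            message = await receive()
            if message["type"] == "lifespan.startup":
                state["task"] = asyncio.ensure_future(run_worker())
                await send({"type": "lifespan.startup.complete"})
            elif message["type"] == "lifespan.shutdown":
                if state["task"] is not None:
                    state["task"].cancel()
                await send({"type": "lifespan.shutdown.complete"})
                return

    return app
```

This is why the server must support the lifespan protocol: without it, the startup event never fires and the embedded worker never runs.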
### 2. Run with an ASGI Server
Use any ASGI-compliant server that supports the lifespan protocol:
```bash
# Example with Granian
granian --interface asgi myproject.asgi_embedded:application --host 0.0.0.0 --port 8000
```
### Scaling Embedded Workers
You can run multiple instances of the embedded configuration. Each web server process has its own worker, and they coordinate through the shared backend (Postgres or Valkey).
## Metrics

### Standalone Worker
Enable the metrics server:
```bash
python manage.py runworker --metrics-port 9100
```

Metrics are available at `http://localhost:9100/`.
### Embedded Worker
When running in embedded mode, the worker shares the same process as your web server, so all metrics are exposed via your application's standard metrics endpoint (e.g. `/metrics` as provided by `django-prometheus`).

Do not use `--metrics-port` in embedded mode.
## Memory Optimization
Reduce worker memory by removing unneeded `INSTALLED_APPS`:

```python
# settings.py
import os

if os.environ.get("VTASKS_IS_WORKER") == "true":
    INSTALLED_APPS = prune_installed_apps(INSTALLED_APPS)
    ROOT_URLCONF = "django_vtasks.empty_urls"  # Omit this line if your tasks call reverse()
```
Set `VTASKS_IS_WORKER=true` in your worker's environment variables.
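As a rough mental model of what such pruning does, a helper like this could drop web-only apps from the list. Both the function body and the app set below are illustrative assumptions, not django-vtasks's actual `prune_installed_apps`:

```python
# Hypothetical sketch: the real prune_installed_apps may use different rules.
WEB_ONLY_APPS = {
    "django.contrib.admin",
    "django.contrib.staticfiles",
    "django.contrib.sessions",
    "django.contrib.messages",
}

def prune_installed_apps(installed_apps):
    """Return INSTALLED_APPS without apps a headless worker never needs."""
    return [app for app in installed_apps if app not in WEB_ONLY_APPS]
```

Fewer installed apps means fewer modules imported at startup and less memory per worker process.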
## Reliability

### Valkey Reliable Queue Pattern
When using Valkey, django-vtasks implements the Reliable Queue Pattern:

1. The worker waits for a task.
2. The task is atomically moved from `q:default` to `processing:<worker_id>` via `BLMOVE`.
3. The task is processed and acknowledged (removed from the processing list).
If a worker crashes (OOM kill, power failure), the task remains in its `processing:` list. On the next startup, the same worker (or a new one with the same ID) rescues the task and moves it back to the main queue.
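The claim/ack/rescue cycle can be sketched with in-memory deques standing in for the Valkey lists. This is a simulation of the pattern only, assuming nothing about the library's actual Valkey calls:

```python
from collections import deque

def claim(queue, processing):
    """Atomically move one task from the queue to a worker's processing list,
    mimicking BLMOVE (returns None if the queue is empty)."""
    if not queue:
        return None
    task = queue.popleft()
    processing.append(task)
    return task

def ack(processing, task):
    """Acknowledge a finished task by removing it from the processing list."""
    processing.remove(task)

def rescue(processing, queue):
    """On startup, push any abandoned tasks back onto the main queue."""
    while processing:
        queue.append(processing.popleft())

q_default = deque(["task-1", "task-2"])
processing = deque()  # stands in for processing:<worker_id>

claimed = claim(q_default, processing)  # worker picks up task-1
# ... worker crashes before ack(): task-1 stays in its processing list ...
rescue(processing, q_default)           # next startup: task-1 is re-queued
```

The key property is that a task is never only "in flight": at every moment it lives in either the main queue or a processing list, so a crash can delay it but never lose it.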
### Database Backend

The Database backend uses `SELECT ... FOR UPDATE SKIP LOCKED` for safe concurrent processing. Failed tasks are moved to a Dead Letter Queue for inspection.
## Example Docker Compose
```yaml
version: '3.8'

services:
  web:
    build: .
    command: granian --interface asgi myproject.asgi:application --host 0.0.0.0 --port 8000
    ports:
      - "8000:8000"
    depends_on:
      - db
      - valkey

  worker:
    build: .
    command: python manage.py runworker --scheduler --health-check-file /tmp/health
    environment:
      - VTASKS_CONCURRENCY=50
    depends_on:
      - db
      - valkey
    healthcheck:
      test: ["CMD", "sh", "-c", "test -f /tmp/health && [ $(($(date +%s) - $(stat -c %Y /tmp/health))) -lt 15 ]"]
      interval: 10s
      timeout: 5s
      retries: 3

  db:
    image: postgres:16
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: secret

  valkey:
    image: valkey/valkey:7.2
```
## Example Kubernetes Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vtasks-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: vtasks-worker
  template:
    metadata:
      labels:
        app: vtasks-worker
    spec:
      containers:
        - name: worker
          image: myapp:latest
          command: ["python", "manage.py", "runworker", "--scheduler", "--health-check-file", "/tmp/health"]
          env:
            - name: VTASKS_CONCURRENCY
              value: "50"
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - 'test -f /tmp/health && [ $(($(date +%s) - $(stat -c %Y /tmp/health))) -lt 15 ]'
            initialDelaySeconds: 10
            periodSeconds: 10
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
```