#Django Deployment Best Practices — Production Deployment and Operations
📂 Part: Part 3 — Advanced Topics
🎯 Difficulty: Advanced
⏰ Estimated study time: 6–8 hours
🎒 Prerequisites: Performance Optimization, Security Best Practices
#Table of Contents
#Deployment Overview
#Deployment Architecture
"""
A typical Django production architecture:
Internet
↓
Load Balancer (e.g., AWS ELB, Nginx)
↓
Reverse Proxy (Nginx)
↓
Application Servers (Gunicorn workers)
├── Database cluster (PostgreSQL/MySQL)
├── Cache service (Redis/Memcached)
├── Message queue (RabbitMQ/Celery)
└── File storage (S3/MinIO)
Note: the database, cache, queue, and storage are peers that the application
servers talk to directly, not a chain.
Component responsibilities:
- Nginx: static file serving, load balancing, SSL termination
- Gunicorn: WSGI application server
- PostgreSQL: data storage
- Redis: caching, session storage, message broker
- Celery: asynchronous task processing
"""
#Deployment Principles
"""
Core principles for Django deployments:
1. Security First
- Enforce HTTPS
- Principle of least privilege
- Validate security settings (python manage.py check --deploy)
2. High Availability
- Multi-instance deployment
- Failover
- Load balancing
3. Scalability
- Horizontal scaling
- Resource isolation
- Microservice-friendly design
4. Comprehensive Monitoring
- Performance monitoring
- Error tracking
- Log aggregation
5. Automation
- CI/CD pipelines
- Containerized deployment
- Configuration management
"""
#Environment Separation
"""
Environment types and their characteristics:
Development
├── DEBUG = True
├── Local database
├── No production data
└── Verbose error pages
Staging
├── Copy of the production configuration
├── Simulated production data
├── Comparable hardware
└── Used to rehearse deployments
Production
├── DEBUG = False
├── Performance-tuned configuration
├── Real user data
└── Strict security controls
Configuration that differs per environment:
- Database connections
- Cache settings
- Email settings
- Third-party service credentials
- Performance-tuning parameters
"""
#Environment Preparation
#Server Environment Setup
#!/bin/bash
# Server bootstrap script
# Update system packages
sudo apt update && sudo apt upgrade -y
# Install required system dependencies
sudo apt install -y \
python3 \
python3-pip \
python3-dev \
python3-venv \
build-essential \
libssl-dev \
libffi-dev \
libpq-dev \
postgresql-client \
redis-tools \
nginx \
supervisor \
git \
curl \
wget \
vim \
htop
# Create the application user
sudo useradd --system --home /app --shell /bin/bash app
# Create application directories
sudo mkdir -p /app /var/log/myapp /var/run/myapp
sudo chown -R app:app /app /var/log/myapp /var/run/myapp
# Configure the firewall
sudo ufw allow ssh
sudo ufw allow 'Nginx Full'
sudo ufw --force enable
echo "服务器环境配置完成"#Python环境配置
#!/bin/bash
# Python environment setup script
# Create a virtual environment
cd /app
python3 -m venv venv
source venv/bin/activate
# Upgrade packaging tools
pip install --upgrade pip setuptools wheel
# Install production dependencies (including Django itself, which the verify step imports)
pip install "Django>=4.2,<5.0" gunicorn psycopg2-binary redis celery pillow
# Verify the installation
python -c "import django; print(f'Django version: {django.get_version()}')"
echo "Python environment setup complete"
#Environment Variable Management
# Environment variable configuration (settings.py)
import os
from decouple import config  # pip install python-decouple
from pathlib import Path
BASE_DIR = Path(__file__).resolve().parent.parent
# Core settings
DEBUG = config('DEBUG', default=False, cast=bool)
SECRET_KEY = config('SECRET_KEY')  # no default: fail fast if the secret is missing
ALLOWED_HOSTS = config('ALLOWED_HOSTS', default='localhost').split(',')
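# Hedged addition (not in the original settings): HTTPS hardening flags that
# back the "Security First" principle above. Enable them only behind TLS, and
# verify the result with `python manage.py check --deploy`.
if not DEBUG:
    SECURE_SSL_REDIRECT = True
    SESSION_COOKIE_SECURE = True
    CSRF_COOKIE_SECURE = True
    SECURE_HSTS_SECONDS = 31536000  # matches the HSTS max-age set in Nginx below
    SECURE_HSTS_INCLUDE_SUBDOMAINS = True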
# Database configuration
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': config('DB_NAME', default='myapp'),
        'USER': config('DB_USER', default='myappuser'),
        'PASSWORD': config('DB_PASSWORD'),
        'HOST': config('DB_HOST', default='localhost'),
        'PORT': config('DB_PORT', default='5432'),
        'CONN_MAX_AGE': config('DB_CONN_MAX_AGE', default=60, cast=int),
    }
}
# Cache configuration
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': config('REDIS_URL', default='redis://127.0.0.1:6379/1'),
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
        },
        'KEY_PREFIX': 'myapp',
        'TIMEOUT': 300,
    }
}
# Email configuration
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = config('EMAIL_HOST', default='localhost')
EMAIL_PORT = config('EMAIL_PORT', default=587, cast=int)
EMAIL_USE_TLS = config('EMAIL_USE_TLS', default=True, cast=bool)
EMAIL_HOST_USER = config('EMAIL_HOST_USER')
EMAIL_HOST_PASSWORD = config('EMAIL_HOST_PASSWORD')
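# Hedged addition: requirements.txt pins WhiteNoise, but the original settings
# never wire it up. A minimal sketch (the MIDDLEWARE list here is abbreviated):
STATIC_URL = '/static/'
STATIC_ROOT = BASE_DIR / 'static'  # resolves to /app/static when the project root is /app
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',  # directly after SecurityMiddleware
    # ... the rest of the middleware stack ...
]
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'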
# Optional AWS S3 storage
if config('USE_S3', default=False, cast=bool):
    DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
    AWS_ACCESS_KEY_ID = config('AWS_ACCESS_KEY_ID')
    AWS_SECRET_ACCESS_KEY = config('AWS_SECRET_ACCESS_KEY')
    AWS_STORAGE_BUCKET_NAME = config('AWS_STORAGE_BUCKET_NAME')
    AWS_S3_REGION_NAME = config('AWS_S3_REGION_NAME', default='us-east-1')
    AWS_S3_CUSTOM_DOMAIN = f'{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com'
#Docker Containerized Deployment
#Basic Docker Configuration
# Dockerfile
FROM python:3.11-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV DJANGO_SETTINGS_MODULE=myproject.settings.production
# Install system dependencies
RUN apt-get update && apt-get install -y \
gcc \
postgresql-client \
libpq-dev \
libjpeg-dev \
zlib1g-dev \
&& rm -rf /var/lib/apt/lists/*
# Set the application directory
WORKDIR /app
# Copy the dependency manifest first for better layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir -r requirements.txt
# Copy the application code
COPY . .
# Create a non-root user
RUN useradd --create-home --shell /bin/bash app && \
chown -R app:app /app
USER app
# Collect static files (this runs at build time, so the settings module must
# not require runtime-only secrets; pass a dummy SECRET_KEY via ENV if needed)
RUN python manage.py collectstatic --noinput
# Expose the application port
EXPOSE 8000
# Start command
CMD ["gunicorn", "--config", "gunicorn.conf.py", "myproject.wsgi:application"]
# requirements.txt
Django>=4.2,<5.0
gunicorn==21.2.0
psycopg2-binary==2.9.7
redis==5.0.1
celery==5.3.4
django-redis==5.4.0
Pillow==10.0.1
python-decouple==3.8
whitenoise==6.6.0
#Docker Compose Configuration
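Both Celery services in the Compose file below start with `celery -A myproject`, which assumes a `myproject/celery.py` app module that this chapter never shows. A minimal sketch of that module — standard Celery-with-Django boilerplate, with names adjusted to this project:

# myproject/celery.py
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings.production')

app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()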
# docker-compose.yml
version: '3.8'
services:
db:
image: postgres:15
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- POSTGRES_DB=${DB_NAME}
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASSWORD}
networks:
- app-network
restart: unless-stopped
redis:
image: redis:7-alpine
command: redis-server --appendonly yes
volumes:
- redis_data:/data
networks:
- app-network
restart: unless-stopped
web:
build: .
ports:
- "8000:8000"
environment:
- DB_HOST=db
- REDIS_URL=redis://redis:6379/0
depends_on:
- db
- redis
volumes:
- static_volume:/app/static
- media_volume:/app/media
networks:
- app-network
restart: unless-stopped
nginx:
image: nginx:alpine
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
- ./nginx/conf.d:/etc/nginx/conf.d
- static_volume:/app/static
- media_volume:/app/media
- ./ssl:/etc/ssl
depends_on:
- web
networks:
- app-network
restart: unless-stopped
celery:
build: .
command: celery -A myproject worker --loglevel=info
environment:
- DB_HOST=db
- REDIS_URL=redis://redis:6379/0
depends_on:
- db
- redis
volumes:
- static_volume:/app/static
- media_volume:/app/media
networks:
- app-network
restart: unless-stopped
celery-beat:
build: .
command: celery -A myproject beat --loglevel=info --scheduler django_celery_beat.schedulers:DatabaseScheduler
environment:
- DB_HOST=db
- REDIS_URL=redis://redis:6379/0
depends_on:
- db
- redis
volumes:
- static_volume:/app/static
- media_volume:/app/media
networks:
- app-network
restart: unless-stopped
volumes:
postgres_data:
redis_data:
static_volume:
media_volume:
networks:
app-network:
driver: bridge
#Gunicorn Configuration
# gunicorn.conf.py
import multiprocessing
# Server socket
bind = "0.0.0.0:8000"
backlog = 2048
# Worker processes
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = "sync"
worker_connections = 1000
max_requests = 1000
max_requests_jitter = 100
timeout = 30
keepalive = 2
# Security
limit_request_line = 4094
limit_request_fields = 100
limit_request_field_size = 8190
# Application loading
preload_app = True
daemon = False
raw_env = [
'DJANGO_SETTINGS_MODULE=myproject.settings.production',
]
# Logging
accesslog = "/var/log/myapp/gunicorn_access.log"
errorlog = "/var/log/myapp/gunicorn_error.log"
loglevel = "info"
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s" %(D)s'
# Process naming
proc_name = 'myapp_gunicorn'
# Server mechanics
pidfile = '/var/run/myapp/gunicorn.pid'
user = 'app'
group = 'app'
tmp_upload_dir = None
# SSL
keyfile = None
certfile = None
#Docker Security Best Practices
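Before hardening the image itself, keep secrets and development artifacts out of the build context with a `.dockerignore`. An illustrative sketch, not an exhaustive list:

# .dockerignore
.git
.env
venv/
__pycache__/
*.pyc
media/
*.log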
# Hardened Dockerfile
FROM python:3.11-slim
# Create a non-root user (before installing dependencies)
RUN groupadd -r appgroup && useradd -r -g appgroup app
# Install system dependencies
RUN apt-get update && apt-get install -y \
gcc \
postgresql-client \
libpq-dev \
libjpeg-dev \
zlib1g-dev \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
# Set the working directory
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir -r requirements.txt
# Copy the application code
COPY . .
# Hand ownership to the app user and its group (the user was created with group appgroup)
RUN chown -R app:appgroup /app
# Switch to the non-root user
USER app
# Collect static files
RUN python manage.py collectstatic --noinput
# Use an unprivileged port
EXPOSE 8000
# Use the exec form of CMD (no intermediate shell process)
CMD ["gunicorn", "--config", "gunicorn.conf.py", "myproject.wsgi:application"]
#Nginx Configuration
#Basic Nginx Configuration
# nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
use epoll;
multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Log format
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" "$request_time"';
access_log /var/log/nginx/access.log main;
# Basic settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 16M;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_comp_level 6;
gzip_types
text/plain
text/css
text/xml
text/javascript
application/json
application/javascript
application/xml+rss
application/atom+xml
image/svg+xml;
# Include site configurations
include /etc/nginx/conf.d/*.conf;
}
#Django Application Nginx Configuration
# nginx/conf.d/myapp.conf
upstream django {
server web:8000; # the Django container from docker-compose
}
server {
listen 80;
server_name yourdomain.com www.yourdomain.com;
# Redirect all HTTP traffic to HTTPS ($host preserves whichever name was requested)
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
server_name yourdomain.com www.yourdomain.com;
# SSL certificates
ssl_certificate /etc/ssl/certs/your-cert.pem;
ssl_certificate_key /etc/ssl/private/your-key.pem;
# SSL security settings
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
# Static files
location /static/ {
alias /app/static/;
expires 1y;
add_header Cache-Control "public, immutable";
# Serve pre-compressed .gz files when present
gzip_static on;
gzip_vary on;
}
# Media files
location /media/ {
alias /app/media/;
expires 30d;
add_header Cache-Control "public";
# Never execute script files uploaded to the media directory
location ~* \.(php|pl|py|jsp|asp|sh|cgi)$ {
deny all;
return 404;
}
}
# API requests
location /api/ {
proxy_pass http://django;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $server_name;
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# Buffering
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
}
# Main application
location / {
proxy_pass http://django;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $server_name;
# Timeouts and buffering
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering on;
# Upload size limit
client_max_body_size 16M;
}
# Health-check endpoint (answered by Nginx without reaching Django)
location /health/ {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
# Hide the Nginx version
server_tokens off;
# Deny access to hidden files
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
}
#Nginx Security Configuration
# nginx/conf.d/security.conf
# Note: add_header and limit_req_zone are valid at the http level (where
# conf.d files are included), but the location blocks below are only valid
# inside a server block and must be moved there.
# Hide version information
server_tokens off;
# Clickjacking protection
add_header X-Frame-Options "SAMEORIGIN" always;
# Prevent MIME type sniffing
add_header X-Content-Type-Options "nosniff" always;
# Legacy XSS filter header
add_header X-XSS-Protection "1; mode=block" always;
# HSTS (HTTP Strict Transport Security)
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# Content Security Policy
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self' data:;" always;
# Referrer policy
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Block requests for sensitive file types
location ~* \.(htaccess|htpasswd|ini|log|sh|sql|conf)$ {
deny all;
return 404;
}
# Rate-limiting zones
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
location /api/ {
limit_req zone=api burst=20 nodelay;
# ... remaining proxy configuration ...
}
location /login/ {
limit_req zone=login burst=5 nodelay;
# ... remaining proxy configuration ...
}
#Gunicorn Deployment
#Advanced Gunicorn Configuration
# advanced_gunicorn.conf.py
import multiprocessing
import os
# Server socket
bind = ["0.0.0.0:8000"]
backlog = 2048
# Worker processes
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = "gevent" # 或 "eventlet" 用于异步支持
worker_connections = 1000
max_requests = 1000
max_requests_jitter = 100
timeout = 30
graceful_timeout = 30
keepalive = 2
# Resource limits
limit_request_line = 4094
limit_request_fields = 100
limit_request_field_size = 8190
# Preload the application (faster worker startup, but breaks hot reload)
preload_app = True
# Daemon and process settings
daemon = False
pidfile = '/var/run/myapp/gunicorn.pid'
user = 'app'
group = 'app'
# Logging
accesslog = '/var/log/myapp/gunicorn_access.log'
errorlog = '/var/log/myapp/gunicorn_error.log'
loglevel = 'info'
access_log_format = (
'%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s '
'"%(f)s" "%(a)s" %(D)s %(p)s'
)
# Application environment
raw_env = [
'DJANGO_SETTINGS_MODULE=myproject.settings.production',
]
# Worker lifecycle (max_requests and max_requests_jitter are set above)
reload = False
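# Hedged addition: Gunicorn config files may also define lifecycle hooks.
# These two use Gunicorn's documented hook signatures; the log messages are
# purely illustrative.
def when_ready(server):
    server.log.info("Gunicorn master ready, forking workers")

def post_fork(server, worker):
    server.log.info("Worker spawned (pid: %s)", worker.pid)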
# Performance: keep worker heartbeat files in memory
worker_tmp_dir = '/dev/shm'
#Supervisor Configuration
# /etc/supervisor/conf.d/myapp.conf
[program:myapp_web]
command=/app/venv/bin/gunicorn --config gunicorn.conf.py myproject.wsgi:application
directory=/app
user=app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/myapp/web.log
environment=DJANGO_SETTINGS_MODULE="myproject.settings.production"
[program:myapp_celery_worker]
command=/app/venv/bin/celery -A myproject worker --loglevel=info
directory=/app
user=app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/myapp/celery_worker.log
environment=DJANGO_SETTINGS_MODULE="myproject.settings.production"
[program:myapp_celery_beat]
command=/app/venv/bin/celery -A myproject beat --loglevel=info --scheduler django_celery_beat.schedulers:DatabaseScheduler
directory=/app
user=app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/myapp/celery_beat.log
environment=DJANGO_SETTINGS_MODULE="myproject.settings.production"
[group:myapp]
programs=myapp_web,myapp_celery_worker,myapp_celery_beat
priority=999
#Gunicorn Performance Tuning
# gunicorn_tuning.py
"""
Gunicorn性能调优指南:
1. 工作进程数 (workers)
- 公式: (2 × CPU核心数) + 1
- 过多会导致上下文切换开销
- 过少无法充分利用CPU
2. 工作进程类型 (worker_class)
- sync: 同步阻塞,默认类型
- gevent: 协程,适合I/O密集型
- eventlet: 协程,类似gevent
3. 连接数 (worker_connections)
- gevent/eventlet: 设置为1000+
- sync: 通常不需要调整
4. 请求限制 (max_requests, max_requests_jitter)
- 防止内存泄漏
- 随机抖动避免同时重启
5. 超时设置 (timeout, graceful_timeout)
- timeout: 请求处理超时
- graceful_timeout: 优雅关闭超时
"""#数据库部署
#PostgreSQL Production Configuration
# postgresql.conf (production settings)
# Connection settings
listen_addresses = '*'
port = 5432
max_connections = 100
superuser_reserved_connections = 3
# Memory settings
shared_buffers = 256MB
effective_cache_size = 1GB
work_mem = 4MB
maintenance_work_mem = 64MB
# WAL settings
wal_level = replica
fsync = on
synchronous_commit = on
wal_sync_method = fsync
full_page_writes = on
# Checkpoint settings (checkpoint_segments was removed in PostgreSQL 9.5;
# size the WAL with max_wal_size / min_wal_size instead)
max_wal_size = 1GB
min_wal_size = 256MB
checkpoint_completion_target = 0.9
checkpoint_warning = 30s
# Statistics settings (stats_temp_directory was removed in PostgreSQL 15;
# cumulative statistics now live in shared memory)
track_counts = on
track_functions = all
# Query planner settings
random_page_cost = 1.1
effective_io_concurrency = 200
# Logging settings
log_destination = 'stderr'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_rotation_size = 100MB
log_min_duration_statement = 1000
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
#Database Backup Strategy
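The backup manager below exposes a Celery task, `scheduled_backup`. One way to run it nightly is a `beat_schedule` entry — a hedged sketch with an assumed task module path (with `django_celery_beat`'s DatabaseScheduler, used earlier in this chapter, the periodic task can instead be created through the admin):

# settings/production.py — schedule the scheduled_backup task defined below
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    'nightly-db-backup': {
        'task': 'myapp.tasks.scheduled_backup',  # assumed module path
        'schedule': crontab(hour=3, minute=0),   # 03:00 every day
    },
}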
# backup_manager.py
import subprocess
import os
import datetime
from django.conf import settings
import logging

logger = logging.getLogger(__name__)

class DatabaseBackupManager:
    """Database backup manager"""

    def __init__(self):
        self.db_name = settings.DATABASES['default']['NAME']
        self.db_user = settings.DATABASES['default']['USER']
        self.db_host = settings.DATABASES['default']['HOST']
        self.backup_dir = '/backups'

    def create_backup(self):
        """Create a database backup"""
        timestamp = datetime.datetime.now().strftime('%Y%m%d_%H%M%S')
        backup_file = f"{self.backup_dir}/{self.db_name}_backup_{timestamp}.sql"
        try:
            # Make sure the backup directory exists
            os.makedirs(self.backup_dir, exist_ok=True)
            # Run pg_dump
            cmd = [
                'pg_dump',
                '-h', self.db_host,
                '-U', self.db_user,
                '-d', self.db_name,
                '-f', backup_file
            ]
            env = os.environ.copy()
            env['PGPASSWORD'] = settings.DATABASES['default']['PASSWORD']
            result = subprocess.run(cmd, env=env, capture_output=True, text=True)
            if result.returncode == 0:
                logger.info(f"Database backup created: {backup_file}")
                # Compress the dump (the uncompressed file is removed,
                # so return the compressed path)
                backup_file = self.compress_backup(backup_file)
                # Remove stale backups
                self.cleanup_old_backups()
                return backup_file
            else:
                logger.error(f"Database backup failed: {result.stderr}")
                return None
        except Exception as e:
            logger.error(f"Backup error: {str(e)}")
            return None

    def compress_backup(self, backup_file):
        """Compress a backup file and return the compressed path"""
        import gzip
        import shutil
        compressed_file = f"{backup_file}.gz"
        with open(backup_file, 'rb') as f_in:
            with gzip.open(compressed_file, 'wb') as f_out:
                shutil.copyfileobj(f_in, f_out)
        # Remove the uncompressed dump
        os.remove(backup_file)
        logger.info(f"Backup compressed: {compressed_file}")
        return compressed_file

    def cleanup_old_backups(self, days_to_keep=7):
        """Delete backups older than days_to_keep"""
        import glob
        from datetime import datetime, timedelta
        cutoff_date = datetime.now() - timedelta(days=days_to_keep)
        backup_pattern = f"{self.backup_dir}/{self.db_name}_backup_*.sql.gz"
        backup_files = glob.glob(backup_pattern)
        for backup_file in backup_files:
            file_time = datetime.fromtimestamp(os.path.getmtime(backup_file))
            if file_time < cutoff_date:
                os.remove(backup_file)
                logger.info(f"Deleted old backup: {backup_file}")

    def restore_backup(self, backup_file):
        """Restore from an uncompressed .sql dump (gunzip .gz backups first)"""
        try:
            cmd = [
                'psql',
                '-h', self.db_host,
                '-U', self.db_user,
                '-d', self.db_name,
                '-f', backup_file
            ]
            env = os.environ.copy()
            env['PGPASSWORD'] = settings.DATABASES['default']['PASSWORD']
            result = subprocess.run(cmd, env=env, capture_output=True, text=True)
            if result.returncode == 0:
                logger.info(f"Database restored from: {backup_file}")
                return True
            else:
                logger.error(f"Database restore failed: {result.stderr}")
                return False
        except Exception as e:
            logger.error(f"Restore error: {str(e)}")
            return False

# Scheduled backup task
from celery import shared_task

@shared_task
def scheduled_backup():
    """Periodic backup task"""
    manager = DatabaseBackupManager()
    backup_file = manager.create_backup()
    if backup_file:
        # Send a success notification
        from django.core.mail import send_mail
        send_mail(
            'Database backup succeeded',
            f'A database backup was created: {backup_file}',
            'admin@yourdomain.com',
            ['admin@yourdomain.com'],
        )
    return backup_file
#Cache Service Deployment
#Redis Production Configuration
# redis.conf (production settings)
# Network
bind 127.0.0.1
port 6379
timeout 0
tcp-keepalive 300
# General
daemonize no
supervised systemd
pidfile /var/run/redis/redis-server.pid
loglevel notice
logfile /var/log/redis/redis-server.log
databases 16
# Client limits
maxclients 10000
# Memory management
maxmemory 2gb
maxmemory-policy allkeys-lru
maxmemory-samples 5
# Persistence
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis
# Replication (Redis 5 renamed the slave-* directives to replica-*)
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
replica-priority 100
# Security
# requirepass yourpassword  # enable password authentication
# Disable dangerous commands
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command EVAL ""
rename-command EVALSHA ""
#Cache Monitoring
# cache_monitor.py
import redis
import time
import logging
from django.conf import settings

logger = logging.getLogger(__name__)

class CacheMonitor:
    """Cache monitor"""

    def __init__(self):
        # Parse the configured Redis URL rather than slicing the string on ':'
        self.redis_client = redis.Redis.from_url(
            settings.CACHES['default']['LOCATION']
        )

    def get_cache_stats(self):
        """Return cache statistics from Redis INFO"""
        info = self.redis_client.info()
        stats = {
            'used_memory': info['used_memory_human'],
            'used_memory_peak': info['used_memory_peak_human'],
            'connected_clients': info['connected_clients'],
            'total_commands_processed': info['total_commands_processed'],
            'keyspace_hits': info['keyspace_hits'],
            'keyspace_misses': info['keyspace_misses'],
            'hit_rate': self.calculate_hit_rate(info),
            'expired_keys': info['expired_keys'],
            'evicted_keys': info['evicted_keys'],
        }
        return stats

    def calculate_hit_rate(self, info):
        """Compute the cache hit rate"""
        hits = info['keyspace_hits']
        misses = info['keyspace_misses']
        total = hits + misses
        if total == 0:
            return 0.0
        return round((hits / total) * 100, 2)

    def monitor_cache_health(self):
        """Check cache health and log warnings"""
        stats = self.get_cache_stats()
        info = self.redis_client.info()
        # Memory usage relative to maxmemory, when a limit is configured
        maxmemory = info.get('maxmemory', 0)
        if maxmemory:
            memory_percent = round(info['used_memory'] / maxmemory * 100, 2)
            if memory_percent > 80:
                logger.warning(f"Redis memory usage is high: {memory_percent}%")
        # Hit rate
        if stats['hit_rate'] < 70:
            logger.warning(f"Redis cache hit rate is low: {stats['hit_rate']}%")
        # Evictions
        if stats['evicted_keys'] > 0:
            logger.warning(f"Redis has evicted keys: {stats['evicted_keys']}")
        return stats

    def get_top_keys(self, count=10):
        """Return the most frequently used keys"""
        # Requires keyspace notifications in redis.conf:
        # notify-keyspace-events Ex
        pass

    def cleanup_expired_keys(self):
        """Hook for extra cleanup; Redis expires keys automatically"""
        pass

# Cache performance benchmark
def benchmark_cache_performance():
    """Benchmark basic cache operations"""
    cache_monitor = CacheMonitor()
    redis_client = cache_monitor.redis_client
    # Measure writes
    start_time = time.time()
    for i in range(1000):
        redis_client.set(f'test_key_{i}', f'test_value_{i}')
    write_time = time.time() - start_time
    # Measure reads
    start_time = time.time()
    for i in range(1000):
        redis_client.get(f'test_key_{i}')
    read_time = time.time() - start_time
    # Measure deletes
    start_time = time.time()
    for i in range(1000):
        redis_client.delete(f'test_key_{i}')
    delete_time = time.time() - start_time
    print(f"Wrote 1000 keys in {write_time:.4f}s")
    print(f"Read 1000 keys in {read_time:.4f}s")
    print(f"Deleted 1000 keys in {delete_time:.4f}s")
#Monitoring and Logging
#Application Logging Configuration
# logging_config.py
"""
# LOGGING configuration for settings.py
# (the 'json' formatter needs: pip install python-json-logger)
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {process:d} {thread:d} {message}',
            'style': '{',
        },
        'simple': {
            'format': '{levelname} {message}',
            'style': '{',
        },
        'json': {
            '()': 'pythonjsonlogger.jsonlogger.JsonFormatter',
            'format': '%(asctime)s %(name)s %(levelname)s %(filename)s %(lineno)d %(message)s'
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': '/var/log/myapp/app.log',
            'maxBytes': 1024*1024*10,  # 10MB
            'backupCount': 5,
            'formatter': 'verbose',
        },
        'error_file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': '/var/log/myapp/error.log',
            'maxBytes': 1024*1024*10,  # 10MB
            'backupCount': 5,
            'level': 'ERROR',
            'formatter': 'verbose',
        },
        'performance_file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': '/var/log/myapp/performance.log',
            'maxBytes': 1024*1024*10,  # 10MB
            'backupCount': 5,
            'level': 'INFO',
            'formatter': 'verbose',
        },
    },
    'root': {
        'handlers': ['console', 'file'],
        'level': 'INFO',
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'INFO',
            'propagate': False,
        },
        'django.request': {
            'handlers': ['error_file'],
            'level': 'ERROR',
            'propagate': False,
        },
        'performance': {
            'handlers': ['performance_file'],
            'level': 'INFO',
            'propagate': False,
        },
        'myapp': {
            'handlers': ['file', 'error_file'],
            'level': 'INFO',
            'propagate': False,
        },
    },
}
"""
#System Monitoring
# system_monitor.py
import psutil
import time
import logging
from threading import Thread
import smtplib
from email.mime.text import MIMEText
from django.conf import settings

logger = logging.getLogger(__name__)

class SystemMonitor:
    """System monitor"""

    def __init__(self):
        self.alert_thresholds = {
            'cpu_percent': 80,
            'memory_percent': 85,
            'disk_percent': 90,
            'process_count': 500,
        }
        self.monitoring = False

    def get_system_stats(self):
        """Collect system statistics"""
        stats = {
            'timestamp': time.time(),
            'cpu_percent': psutil.cpu_percent(interval=1),
            'memory_percent': psutil.virtual_memory().percent,
            'disk_percent': psutil.disk_usage('/').percent,
            'process_count': len(psutil.pids()),
            'load_average': psutil.getloadavg(),
            'network_io': psutil.net_io_counters(),
            'boot_time': psutil.boot_time(),
        }
        return stats

    def check_alerts(self, stats):
        """Evaluate stats against the alert thresholds"""
        alerts = []
        for metric, threshold in self.alert_thresholds.items():
            if stats[metric] > threshold:
                # Note: process_count is a plain count, not a percentage
                alert_msg = f"ALERT: {metric} is {stats[metric]}, threshold is {threshold}"
                alerts.append(alert_msg)
                logger.warning(alert_msg)
        # Check for Django processes
        django_processes = [p for p in psutil.process_iter()
                            if 'gunicorn' in p.name() or 'python' in p.name()]
        if len(django_processes) == 0:
            alert_msg = "ALERT: No Django processes running"
            alerts.append(alert_msg)
            logger.error(alert_msg)
        return alerts

    def send_alert_email(self, alerts):
        """Email the collected alerts"""
        if not alerts:
            return
        subject = f"System alert - {len(alerts)} issue(s)"
        body = "\n".join(alerts)
        try:
            msg = MIMEText(body)
            msg['Subject'] = subject
            msg['From'] = settings.SERVER_EMAIL
            msg['To'] = settings.ADMINS[0][1] if settings.ADMINS else settings.SERVER_EMAIL
            server = smtplib.SMTP(settings.EMAIL_HOST, settings.EMAIL_PORT)
            server.starttls()
            server.login(settings.EMAIL_HOST_USER, settings.EMAIL_HOST_PASSWORD)
            server.send_message(msg)
            server.quit()
            logger.info("Alert email sent")
        except Exception as e:
            logger.error(f"Failed to send alert email: {str(e)}")

    def start_monitoring(self):
        """Start the monitoring thread"""
        self.monitoring = True
        def monitor_loop():
            while self.monitoring:
                try:
                    stats = self.get_system_stats()
                    alerts = self.check_alerts(stats)
                    if alerts:
                        self.send_alert_email(alerts)
                    time.sleep(60)  # check once per minute
                except Exception as e:
                    logger.error(f"Monitor loop error: {str(e)}")
                    time.sleep(60)
        monitor_thread = Thread(target=monitor_loop, daemon=True)
        monitor_thread.start()

    def stop_monitoring(self):
        """Stop monitoring"""
        self.monitoring = False
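# Hedged usage sketch: start the monitor from a dedicated management command
# (module paths assumed) so the loop runs in exactly one process.
# myapp/management/commands/run_monitor.py
import time

from django.core.management.base import BaseCommand

from myapp.system_monitor import SystemMonitor  # assumed import path

class Command(BaseCommand):
    help = 'Run the system monitor loop'

    def handle(self, *args, **options):
        monitor = SystemMonitor()
        monitor.start_monitoring()
        try:
            while True:          # keep the main thread alive
                time.sleep(3600)
        except KeyboardInterrupt:
            monitor.stop_monitoring()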
# Performance-monitoring middleware
class PerformanceMonitoringMiddleware:
    """Performance-monitoring middleware.

    Note: connection.queries is only populated when DEBUG=True, so the query
    count reads 0 in production unless query logging is enabled another way.
    """

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        import time
        from django.db import connection
        start_time = time.time()
        start_queries = len(connection.queries)
        response = self.get_response(request)
        duration = time.time() - start_time
        query_count = len(connection.queries) - start_queries
        # Record performance data
        perf_logger = logging.getLogger('performance')
        perf_logger.info(
            f"REQUEST: {request.method} {request.path} "
            f"DURATION: {duration:.3f}s "
            f"QUERIES: {query_count} "
            f"STATUS: {response.status_code}"
        )
        # Performance alerts
        if duration > 2.0:  # requests slower than 2 seconds
            perf_logger.warning(
                f"SLOW REQUEST: {request.path} took {duration:.3f}s"
            )
        if query_count > 50:  # more than 50 queries
            perf_logger.warning(
                f"HIGH QUERY COUNT: {request.path} had {query_count} queries"
            )
        return response
#CI/CD Integration
#GitHub Actions CI/CD
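The workflow below points `DJANGO_SETTINGS_MODULE` at `myproject.settings.test` and hands the service containers over via `DATABASE_URL`/`REDIS_URL`. A hedged sketch of that settings module — it assumes `dj-database-url` is installed and that shared settings live in a `base` module:

# myproject/settings/test.py
import os

import dj_database_url  # assumption: listed in the project's dev requirements

from .base import *  # noqa: F401,F403 — assumed shared base settings

DEBUG = False
SECRET_KEY = 'test-only-not-a-real-secret'
DATABASES = {'default': dj_database_url.parse(os.environ['DATABASE_URL'])}
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': os.environ.get('REDIS_URL', 'redis://localhost:6379/0'),
    }
}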
# .github/workflows/deploy.yml
name: Deploy to Production
on:
push:
branches: [ main, master ]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
test:
runs-on: ubuntu-latest
services:
postgres:
image: postgres:15
env:
POSTGRES_PASSWORD: postgres
POSTGRES_DB: test_db
ports:
- 5432:5432
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
redis:
image: redis:7-alpine
ports:
- 6379:6379
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install pytest coverage
- name: Run migrations
run: |
python manage.py migrate
env:
DJANGO_SETTINGS_MODULE: myproject.settings.test
DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test_db
REDIS_URL: redis://localhost:6379/0
- name: Run tests
run: |
python -m pytest --cov=myapp --cov-report=xml
env:
DJANGO_SETTINGS_MODULE: myproject.settings.test
DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test_db
REDIS_URL: redis://localhost:6379/0
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v3
with:
file: ./coverage.xml
build-and-push:
needs: test
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Log in to Container Registry
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
- name: Build and push Docker image
uses: docker/build-push-action@v4
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
deploy:
needs: build-and-push
runs-on: [self-hosted, linux, x64]
steps:
- name: Pull latest image
run: |
docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
- name: Stop old containers
run: |
docker-compose down
- name: Start new containers
run: |
docker-compose up -d
sleep 30 # wait for the services to come up
- name: Run migrations
run: |
docker-compose exec -T web python manage.py migrate
- name: Collect static files
run: |
docker-compose exec -T web python manage.py collectstatic --noinput
- name: Health check
run: |
# Wait until the application responds
timeout 60 bash -c 'until curl -f http://localhost:8000/health/; do sleep 2; done'
#Deployment Scripts
#!/bin/bash
# deploy.sh - automated deployment script
set -e  # exit immediately on error
# Configuration
APP_NAME="myapp"
APP_DIR="/app"
BACKUP_DIR="/backups"
LOG_DIR="/var/log/myapp"
# Colored output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
log() {
echo -e "${GREEN}[INFO]$(date '+%Y-%m-%d %H:%M:%S')${NC} $1"
}
warn() {
echo -e "${YELLOW}[WARN]$(date '+%Y-%m-%d %H:%M:%S')${NC} $1"
}
error() {
echo -e "${RED}[ERROR]$(date '+%Y-%m-%d %H:%M:%S')${NC} $1"
}
# Back up the current version
backup_current_version() {
log "Creating backup of current version..."
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_NAME="${BACKUP_DIR}/${APP_NAME}_${TIMESTAMP}"
mkdir -p $BACKUP_NAME
# Back up the code
cp -r $APP_DIR $BACKUP_NAME/code
# Back up the database
if [ -f "/usr/bin/pg_dump" ]; then
pg_dump myapp_db > $BACKUP_NAME/database.sql
fi
log "Backup created: $BACKUP_NAME"
}
# Check system resources
check_resources() {
log "Checking system resources..."
# Check disk space
DISK_USAGE=$(df $APP_DIR | awk 'NR==2 {print $5}' | sed 's/%//')
if [ $DISK_USAGE -gt 80 ]; then
warn "Disk usage is high: ${DISK_USAGE}%"
fi
# Check memory
MEMORY_USAGE=$(free | awk 'NR==2{printf "%.2f", $3*100/$2}')
if (( $(echo "$MEMORY_USAGE > 80" | bc -l) )); then
warn "Memory usage is high: ${MEMORY_USAGE}%"
fi
}
# Deploy the new version
deploy_new_version() {
log "Deploying new version..."
cd $APP_DIR
# Pull the latest code
git fetch origin
git reset --hard origin/main
# Activate the virtual environment
source venv/bin/activate
# Install/upgrade dependencies
pip install --upgrade pip
pip install -r requirements.txt
# Run database migrations
python manage.py migrate
# Collect static files
python manage.py collectstatic --noinput
# Restart the services
supervisorctl restart myapp:*
log "Deployment completed successfully!"
}
# Health check
health_check() {
log "Performing health check..."
# Check service status (program names as defined in the Supervisor config)
for service in myapp_web myapp_celery_worker myapp_celery_beat; do
STATUS=$(supervisorctl status myapp:$service | awk '{print $2}')
if [ "$STATUS" != "RUNNING" ]; then
error "Service $service is not running: $STATUS"
return 1
fi
done
# Check the API response
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8000/health/)
if [ "$HTTP_CODE" != "200" ]; then
error "Health check failed: HTTP $HTTP_CODE"
return 1
fi
log "Health check passed"
return 0
}
# Main entry point
main() {
log "Starting deployment process..."
# Check privileges
if [ "$EUID" -ne 0 ]; then
error "Please run as root"
exit 1
fi
# Run the deployment steps
check_resources
backup_current_version
deploy_new_version
# Wait for the services to start
sleep 10
# Run the health check
if health_check; then
log "Deployment successful!"
exit 0
else
error "Health check failed, rolling back..."
# 这里可以实现回滚逻辑
exit 1
fi
}
# Invoke the main function
main "$@"
#Chapter Summary
In this chapter we worked through Django deployment best practices:
- Deployment overview: architecture and core principles
- Environment preparation: server and Python environment setup
- Docker containerized deployment: Dockerfiles and Compose orchestration
- Nginx configuration: reverse proxying and security hardening
- Gunicorn deployment: WSGI server configuration and performance tuning
- Database deployment: production PostgreSQL settings and backup strategy
- Cache service deployment: Redis configuration and monitoring
- Monitoring and logging: system monitoring and log management
- CI/CD integration: automated deployment pipelines
#Key Takeaways
"""
Key takeaways from this chapter:
1. Production deployments must balance security, availability, and performance
2. Docker containerization simplifies deployment and scaling
3. Nginx as a reverse proxy provides load balancing and SSL termination
4. Gunicorn configuration directly affects application performance
5. Databases and caches need dedicated production configurations
6. Monitoring and logging are core parts of operations
7. CI/CD automation improves deployment speed and reliability
8. Regular backups and health checks keep the system stable
"""
💡 Bottom line: deployment is a critical phase of the application lifecycle that has to balance security, performance, maintainability, and scalability. Solid monitoring and automation pipelines are what keep a production environment running reliably.