FastAPI Docker Containerization: A Complete Deployment Guide

📂 Stage: Stage 5 — Engineering & Deployment (Hands-On)
🔗 Related chapters: Production Deployment with Nginx and Gunicorn · Multi-Environment Configuration with Pydantic Settings

Why Docker Containerization Matters

Why choose Docker for deployment?

In modern software development, Docker has become the standard way to ship applications. It solves the classic "but it works on my machine!" problem:

The traditional deployment dilemma:
┌─────────────────────────────────────────────────────┐
│  Dev:   Python 3.11, PostgreSQL 13, Redis 7         │
│  Test:  Python 3.10, PostgreSQL 14, Redis 6         │
│  Prod:  Python 3.11, PostgreSQL 15, Redis 7         │
│                        ↓                            │
│            Inconsistent environments                │
└─────────────────────────────────────────────────────┘

The Docker solution:
┌─────────────────────────────────────────────────────┐
│  Code + Dockerfile → Docker Image → Container       │
│         Identical everywhere it runs                │
└─────────────────────────────────────────────────────┘

Core advantages of Docker deployment

  1. Environment consistency: identical dev, test, and production environments
  2. Fast deployment: containers start and stop in seconds
  3. Resource isolation: no dependency conflicts between services
  4. Portability: build once, run anywhere
  5. Elastic scaling: scale out or in based on load

Dockerfile Best Practices

A basic Dockerfile structure

Let's start with a production-grade Dockerfile:

# Official Python base image
FROM python:3.11-slim AS base

# Environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

# Working directory
WORKDIR /app

# Copy the dependency manifest
COPY requirements.txt .

# Install system build dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    g++ \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt

# Final image: copy only what we need from the build stage
FROM python:3.11-slim

# Environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

# Working directory
WORKDIR /app

# curl is required by the HEALTHCHECK below
RUN apt-get update && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*

# Copy the installed dependencies from the build stage
COPY --from=base /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY --from=base /usr/local/bin /usr/local/bin

# Create a non-root user (security best practice)
RUN groupadd -r appgroup && useradd -r -g appgroup appuser

# Copy the application code
COPY . .

# Hand ownership to the app user
RUN chown -R appuser:appgroup /app

# Switch to the non-root user
USER appuser

# Expose the application port
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

# Start command
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]

Multi-stage build optimization

A multi-stage build can shrink the final image dramatically:

# Build stage
FROM python:3.11-slim AS builder

# Environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    g++ \
    && rm -rf /var/lib/apt/lists/*

# Install Poetry
RUN pip install --no-cache-dir poetry

# Copy pyproject.toml and poetry.lock
COPY pyproject.toml poetry.lock* ./

# Install dependencies into the system environment
# (--only main replaces the deprecated --no-dev; --no-root skips
# installing the project itself, since the code isn't copied yet)
RUN poetry config virtualenvs.create false && \
    poetry install --only main --no-root --no-interaction --no-ansi

# Runtime stage
FROM python:3.11-slim AS runtime

# Environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

WORKDIR /app

# Install runtime dependencies (curl is needed for the health check)
RUN apt-get update && apt-get install -y \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy the installed packages from the build stage
COPY --from=builder /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin

# Create a non-root user
RUN groupadd -r appuser && useradd -r -g appuser appuser

# Copy the application code
COPY --chown=appuser:appuser . .

# Switch to the non-root user
USER appuser

# Expose the application port
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

# Start command
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]

Optimizing requirements.txt

# requirements.txt - FastAPI production dependencies
fastapi>=0.109.0
uvicorn[standard]>=0.27.0
gunicorn>=21.2.0
uvloop>=0.19.0
httptools>=0.6.1
sqlalchemy[asyncio]>=2.0.0
asyncpg>=0.29.0
redis[hiredis]>=5.0.0
python-jose[cryptography]>=3.3.0
passlib[bcrypt]>=1.7.4
pydantic-settings>=2.0.0
python-multipart>=0.0.6
httpx>=0.25.0
pydantic[email]>=2.0.0
alembic>=1.13.0
celery>=5.3.0
flower>=2.0.0
prometheus-client>=0.19.0

Orchestration with Docker Compose

Local development environment

# docker-compose.yml - local development environment
version: "3.9"

services:
  # FastAPI application service
  api:
    build:
      context: .
      dockerfile: Dockerfile.dev
    container_name: daoman_fastapi_dev
    ports:
      - "8000:8000"
    environment:
      - ENV=development
      - DEBUG=true
      - DATABASE_URL=postgresql+asyncpg://postgres:postgres@db:5432/daoman_dev
      - REDIS_URL=redis://redis:6379/0
      - JWT_SECRET=dev-secret-change-in-production
      - LOG_LEVEL=debug
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    volumes:
      - .:/app  # bind mount for hot reload in development
    command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload --log-level debug

  # PostgreSQL database
  db:
    image: postgres:16-alpine
    container_name: daoman_postgres_dev
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: daoman_dev
    ports:
      - "5432:5432"
    volumes:
      - postgres_dev_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
      start_period: 10s

  # Redis cache
  redis:
    image: redis:7-alpine
    container_name: daoman_redis_dev
    ports:
      - "6379:6379"
    volumes:
      - redis_dev_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 3

  # Adminer database admin UI
  adminer:
    image: adminer
    container_name: daoman_adminer_dev
    ports:
      - "8080:8080"
    depends_on:
      - db

volumes:
  postgres_dev_data:
  redis_dev_data:

Production environment

# docker-compose.prod.yml - production environment
version: "3.9"

services:
  # Nginx reverse proxy
  nginx:
    image: nginx:alpine
    container_name: daoman_nginx_prod
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - api
    networks:
      - app-network

  # FastAPI application service
  api:
    build:
      context: .
      dockerfile: Dockerfile.prod
    image: daoman_fastapi:latest
    container_name: daoman_fastapi_prod
    restart: always
    expose:
      - "8000"
    environment:
      - ENV=production
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
      - JWT_SECRET=${JWT_SECRET}
      - LOG_LEVEL=info
      - WORKERS=4
      - MAX_WORKERS=8
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
    networks:
      - app-network

  # PostgreSQL database
  db:
    image: postgres:16-alpine
    container_name: daoman_postgres_prod
    restart: always
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_prod_data:/var/lib/postgresql/data
      - ./backup:/backup
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - app-network

  # Redis cache
  redis:
    image: redis:7-alpine
    container_name: daoman_redis_prod
    restart: always
    command: redis-server --appendonly yes --maxmemory 256mb --maxmemory-policy allkeys-lru
    volumes:
      - redis_prod_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 3
    networks:
      - app-network

  # Celery task queue
  celery:
    build:
      context: .
      dockerfile: Dockerfile.prod
    image: daoman_fastapi:latest
    container_name: daoman_celery_prod
    restart: always
    command: celery -A tasks worker --loglevel=info
    environment:
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
      - ENV=production
    depends_on:
      - redis
      - db
    networks:
      - app-network

  # Flower monitoring dashboard
  flower:
    image: mher/flower:latest
    container_name: daoman_flower_prod
    restart: always
    ports:
      - "5555:5555"
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - FLOWER_PORT=5555
    depends_on:
      - celery
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

volumes:
  postgres_prod_data:
  redis_prod_data:

Production Configuration

Nginx configuration

# nginx/conf.d/fastapi.conf - Nginx configuration
upstream fastapi_app {
    server api:8000;
    keepalive 32;
}

server {
    listen 80;
    server_name your-domain.com www.your-domain.com;

    # Logging
    access_log /var/log/nginx/fastapi.access.log;
    error_log /var/log/nginx/fastapi.error.log;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;

    # Client request limits
    client_max_body_size 100M;
    client_body_timeout 120s;
    client_header_timeout 120s;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_proxied expired no-cache no-store private must-revalidate auth;
    gzip_types
        application/atom+xml
        application/javascript
        application/json
        application/rss+xml
        application/vnd.ms-fontobject
        application/x-font-ttf
        application/x-web-app-manifest+json
        application/xhtml+xml
        application/xml
        font/opentype
        image/svg+xml
        text/css
        text/plain
        text/xml;

    # Proxy API routes to the app
    location / {
        proxy_pass http://fastapi_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        proxy_buffering off;
        proxy_cache off;

        # 超时配置
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }

    # Static files
    location /static {
        alias /app/static;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # Health check endpoint (access log disabled)
    location /health {
        access_log off;
        proxy_pass http://fastapi_app/health;
    }
}

Environment variables

# .env.production - production environment variables
ENV=production
DEBUG=false
LOG_LEVEL=info

# Database configuration
DATABASE_URL=postgresql+asyncpg://user:password@host:5432/dbname
POSTGRES_DB=daoman_prod
POSTGRES_USER=daoman_user
POSTGRES_PASSWORD=secure_password_here

# Redis configuration
REDIS_URL=redis://redis:6379/0
CELERY_BROKER_URL=redis://redis:6379/0
CELERY_RESULT_BACKEND=redis://redis:6379/0

# JWT configuration
JWT_SECRET=your_super_secret_jwt_key_here_change_this
JWT_ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=30
REFRESH_TOKEN_EXPIRE_DAYS=7

# Application configuration
WORKERS=4
MAX_WORKERS=8
TIMEOUT=300
KEEP_ALIVE=5

# Monitoring configuration
PROMETHEUS_MULTIPROC_DIR=/tmp
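Inside the application, these variables are typically loaded with pydantic-settings (see the related chapter). As a minimal stdlib-only sketch of the same idea — the variable names match the .env file above, while the `AppSettings` class and its default values are illustrative assumptions:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class AppSettings:
    """Typed view over the environment variables defined above."""
    env: str
    debug: bool
    database_url: str
    redis_url: str
    jwt_secret: str
    workers: int

def load_settings() -> AppSettings:
    """Read settings from the process environment, with safe defaults."""
    return AppSettings(
        env=os.environ.get("ENV", "development"),
        debug=os.environ.get("DEBUG", "false").lower() == "true",
        database_url=os.environ.get("DATABASE_URL", ""),
        redis_url=os.environ.get("REDIS_URL", "redis://localhost:6379/0"),
        jwt_secret=os.environ.get("JWT_SECRET", ""),
        workers=int(os.environ.get("WORKERS", "4")),
    )

settings = load_settings()
```

Because the container receives its configuration through `environment:` in docker-compose, the same code works unchanged in dev and prod.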

Security Best Practices

Hardening the container

# A security-hardened Dockerfile
FROM python:3.11-slim AS base

# Environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

WORKDIR /app

# Install system build dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    g++ \
    curl \
    ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt

FROM python:3.11-slim

# Environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

WORKDIR /app

# The runtime stage needs curl for the health check
RUN apt-get update && apt-get install -y curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Copy dependencies from the build stage
COPY --from=base /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY --from=base /usr/local/bin /usr/local/bin

# Create a dedicated group and user with fixed IDs
# (must be created in this stage -- users do not carry over from the build stage)
RUN groupadd -r appgroup --gid 1001 && \
    useradd -r -g appgroup --uid 1001 appuser

# Copy the application code
COPY --chown=appuser:appgroup . .

# Switch to the non-root user
USER appuser

# Expose the application port
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

# Start command
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]

Docker Compose security configuration

# docker-compose.security.yml
version: "3.9"

services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.secure
    container_name: daoman_fastapi_secure
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    read_only: true
    tmpfs:
      - /tmp
      - /var/tmp
    volumes:
      - ./logs:/app/logs:rw
      - ./uploads:/app/uploads:rw
    environment:
      - ENV=production
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
    sysctls:
      - net.core.somaxconn=1024
    ulimits:
      nproc: 65535
      nofile:
        soft: 20000
        hard: 40000
    networks:
      - app-network

networks:
  app-network:
    driver: bridge
    internal: false  # set to true to block external access

Image Optimization Tips

Reducing image size

# Lightweight production Dockerfile
FROM python:3.11-alpine AS builder

WORKDIR /app

# Install build dependencies
RUN apk add --no-cache \
    gcc \
    musl-dev \
    libffi-dev \
    openssl-dev \
    cargo \
    rust

# Copy and install Python dependencies into the user site
COPY requirements.txt .
RUN pip install --upgrade pip && \
    pip install --user --no-cache-dir -r requirements.txt

FROM python:3.11-alpine

# Install runtime dependencies
RUN apk add --no-cache \
    curl \
    bash

WORKDIR /app

# Create a non-root user
RUN addgroup -g 1001 -S appgroup && \
    adduser -u 1001 -S appuser -G appgroup

# Copy the user-site packages from the build stage
COPY --from=builder --chown=appuser:appgroup /root/.local /home/appuser/.local

# Copy the application code
COPY --chown=appuser:appgroup . .

# Put the user-site scripts on PATH
ENV PATH=/home/appuser/.local/bin:$PATH

# Switch to the non-root user
USER appuser

EXPOSE 8000

HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]

Build optimization tips

#!/bin/bash
# Build optimization script

echo "🚀 Building the optimized FastAPI Docker image..."

# Use BuildKit to speed up the build
export DOCKER_BUILDKIT=1

# Multi-platform build
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag daoman_fastapi:latest \
  --tag daoman_fastapi:v1.0.0 \
  --file Dockerfile.optimized \
  --cache-from type=registry,ref=daoman_fastapi:latest \
  --cache-to type=inline \
  --push .

echo "✅ Image build complete!"

# Image size report
echo "📊 Image sizes:"
docker images daoman_fastapi

# Clean up the build cache
docker builder prune -f

Health Checks and Monitoring

An application health check endpoint

# health_check.py - health check endpoint
from fastapi import APIRouter
from pydantic import BaseModel
from datetime import datetime
import asyncio
import logging

router = APIRouter()

class HealthStatus(BaseModel):
    status: str
    timestamp: str
    services: dict
    version: str

@router.get("/health", response_model=HealthStatus)
async def health_check():
    """Health check endpoint."""
    # Probe each dependency
    services_status = {
        "database": await check_database_connection(),
        "redis": await check_redis_connection(),
        "external_api": await check_external_api(),
    }

    # Aggregate into an overall status
    overall_status = "healthy" if all(services_status.values()) else "degraded"

    return HealthStatus(
        status=overall_status,
        timestamp=datetime.now().isoformat(),
        services=services_status,
        version="1.0.0"
    )

async def check_database_connection():
    """Check the database connection."""
    try:
        # Put the real database probe here;
        # this sleep only simulates the call
        await asyncio.sleep(0.1)
        return True
    except Exception as e:
        logging.error(f"Database connection check failed: {e}")
        return False

async def check_redis_connection():
    """Check the Redis connection."""
    try:
        # Put the real Redis probe here
        await asyncio.sleep(0.05)
        return True
    except Exception as e:
        logging.error(f"Redis connection check failed: {e}")
        return False

async def check_external_api():
    """Check an external API dependency."""
    try:
        # Put the real external API probe here
        await asyncio.sleep(0.1)
        return True
    except Exception as e:
        logging.error(f"External API check failed: {e}")
        return False

# Register on the main app:
# app.include_router(router)

Monitoring configuration

# prometheus.yml - Prometheus configuration
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'fastapi-app'
    static_configs:
      - targets: ['api:8000']
    metrics_path: '/metrics'
    scheme: 'http'

  - job_name: 'nginx'
    static_configs:
      - targets: ['nginx:9113']

  - job_name: 'postgres-exporter'
    static_configs:
      - targets: ['postgres-exporter:9187']

  - job_name: 'redis-exporter'
    static_configs:
      - targets: ['redis-exporter:9121']

CI/CD Integration

A GitHub Actions CI/CD pipeline

# .github/workflows/docker.yml
name: Docker Build and Push

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    
    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.11'
    
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt
        pip install pytest pytest-cov
    
    - name: Run tests
      run: |
        pytest tests/ -v --cov=app --cov-report=xml
    
    - name: Upload coverage
      uses: codecov/codecov-action@v3

  build-and-push:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    
    steps:
    - name: Checkout
      uses: actions/checkout@v4
    
    - name: Docker meta
      id: meta
      uses: docker/metadata-action@v5
      with:
        images: your-registry/daoman-fastapi
        tags: |
          type=ref,event=branch
          type=ref,event=pr
          type=sha,prefix={{branch}}-
          type=raw,value=latest,enable={{is_default_branch}}
    
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3
    
    - name: Login to Registry
      uses: docker/login-action@v3
      with:
        registry: your-registry.com
        username: ${{ secrets.DOCKER_USERNAME }}
        password: ${{ secrets.DOCKER_PASSWORD }}
    
    - name: Build and push
      uses: docker/build-push-action@v5
      with:
        context: .
        platforms: linux/amd64,linux/arm64
        push: true
        tags: ${{ steps.meta.outputs.tags }}
        labels: ${{ steps.meta.outputs.labels }}
        cache-from: type=gha
        cache-to: type=gha,mode=max

  deploy:
    needs: build-and-push
    runs-on: ubuntu-latest
    environment: production
    
    steps:
    - name: Deploy to Production
      run: |
        echo "Deploying to production..."
        # Put the actual deployment commands here

A Docker Compose deployment script

#!/bin/bash
# deploy.sh - production deployment script

set -e  # exit immediately on any error

echo "🚀 Deploying the FastAPI application..."

# Pull the latest images
echo "Pulling latest images..."
docker-compose -f docker-compose.prod.yml pull

# Stop the running services
# (note: down followed by up causes a brief outage; for true
# zero-downtime deploys use rolling updates, e.g. Swarm or Kubernetes)
echo "Stopping existing services..."
docker-compose -f docker-compose.prod.yml down

# Start the new services
echo "Starting new services..."
docker-compose -f docker-compose.prod.yml up -d --no-deps --build

# Wait for the services to come up
echo "Waiting for services to start..."
sleep 30

# Check service health
echo "Checking service health..."
docker-compose -f docker-compose.prod.yml ps

# Run database migrations (if needed)
echo "Running database migrations..."
docker-compose -f docker-compose.prod.yml exec api alembic upgrade head

echo "✅ Deployment complete!"

# Tail recent logs
echo "📋 Recent logs:"
docker-compose -f docker-compose.prod.yml logs --tail=20 api

Troubleshooting and Debugging

Diagnosing common problems

# 1. View container logs
docker logs daoman_fastapi_prod

# 2. Open a shell inside the container
docker exec -it daoman_fastapi_prod /bin/sh

# 3. Check network connectivity
docker exec -it daoman_fastapi_prod ping db
docker exec -it daoman_fastapi_prod ping redis

# 4. Inspect environment variables
docker exec -it daoman_fastapi_prod env

# 5. Check disk usage
docker system df

# 6. Clean up unused Docker resources
docker system prune -f
docker volume prune -f

A performance monitoring script

#!/bin/bash
# monitor.sh - performance monitoring script

echo "📊 Docker container performance:"

# Resource usage for all containers
docker stats --no-stream

# Details for the FastAPI container
echo -e "\n📋 FastAPI container details:"
docker inspect daoman_fastapi_prod | grep -E "(State|Mounts|NetworkSettings)"

# Measure application response time
echo -e "\n⏱️  Response time test:"
curl -w "connect: %{time_connect}s\nDNS: %{time_namelookup}s\ntotal: %{time_total}s\n" -o /dev/null -s http://localhost:8000/health

echo -e "\n✅ Monitoring done!"

Performance Tuning

Performance tuning configuration

# performance_config.py - performance tuning configuration
import multiprocessing

class PerformanceConfig:
    """Performance tuning settings."""

    @staticmethod
    def get_worker_count() -> int:
        """Pick a sensible worker process count."""
        cpu_count = multiprocessing.cpu_count()
        return min(cpu_count, 8)  # cap the number of worker processes

    @staticmethod
    def get_max_workers() -> int:
        """Pick a maximum worker thread count."""
        return min(multiprocessing.cpu_count() * 2, 16)

    @staticmethod
    def get_gunicorn_config():
        """Gunicorn settings for running Uvicorn workers."""
        return {
            "workers": PerformanceConfig.get_worker_count(),
            "worker_class": "uvicorn.workers.UvicornWorker",
            "worker_connections": 1000,
            "max_requests": 1000,
            "max_requests_jitter": 100,
            "timeout": 300,
            "keepalive": 5,
            "preload_app": True,
            "worker_tmp_dir": "/dev/shm",  # keep worker heartbeat files in memory
        }

# In the Dockerfile (shell form, so ${WORKERS:-4} is expanded by the shell;
# the JSON exec form does not perform variable substitution):
"""
CMD gunicorn main:app \
    --bind 0.0.0.0:8000 \
    --workers ${WORKERS:-4} \
    --worker-class uvicorn.workers.UvicornWorker \
    --worker-connections 1000 \
    --max-requests 1000 \
    --max-requests-jitter 100 \
    --timeout 300 \
    --keep-alive 5 \
    --preload
"""

Cache optimization

# cache_config.py - cache configuration
from redis import asyncio as aioredis

class CacheManager:
    """A small Redis cache manager."""

    def __init__(self, redis_url: str):
        self.redis_url = redis_url
        self.redis_client = None

    def init_cache(self):
        """Initialize the Redis client (from_url is synchronous)."""
        self.redis_client = aioredis.from_url(
            self.redis_url,
            encoding="utf-8",
            decode_responses=True,
            max_connections=20,
            retry_on_timeout=True
        )

    async def get_cache(self, key: str):
        """Read a cached value."""
        return await self.redis_client.get(key)

    async def set_cache(self, key: str, value: str, expire: int = 300):
        """Write a cached value with a TTL."""
        await self.redis_client.set(key, value, ex=expire)

# Wiring it into a FastAPI app:
"""
from contextlib import asynccontextmanager
from fastapi import FastAPI

@asynccontextmanager
async def lifespan(app: FastAPI):
    cache_manager = CacheManager(settings.REDIS_URL)
    cache_manager.init_cache()
    app.state.cache = cache_manager
    yield
    await cache_manager.redis_client.aclose()
"""

Summary

Docker containerization is the standard practice for deploying modern FastAPI applications. It provides:

  1. Environment consistency: identical dev, test, and production environments
  2. Fast deployment: containers start and stop in seconds
  3. Resource isolation: no dependency conflicts
  4. Elastic scaling: scale out or in based on load
  5. Stronger security: non-root users and hardened configuration
  6. Built-in observability: health checks and performance monitoring

💡 Key takeaways: use multi-stage builds to shrink images, run as a non-root user for security, add health checks to guarantee availability, and automate deployment with CI/CD.



Frequently Asked Questions (FAQ)

Q1: What are the advantages of Docker deployment over traditional deployment?

A: Docker provides environment consistency, fast deployment, resource isolation, portability, and elastic scaling, eliminating the classic environment-mismatch problems of traditional deployment.

Q2: How do I reduce Docker image size?

A: Use multi-stage builds, pick a lightweight base image, drop unneeded dependencies, and exclude irrelevant files from the build context with a .dockerignore file.
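As a starting point for the .dockerignore mentioned above, a FastAPI project might exclude entries like the following (the exact list depends on your repository layout):

```
# .dockerignore - keep the build context small
.git/
__pycache__/
*.pyc
.venv/
.env
.env.*
tests/
docs/
.pytest_cache/
.mypy_cache/
```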

Q3: What is a multi-stage build, and why use it?

A: A multi-stage build lets a single Dockerfile contain multiple FROM instructions, each starting from its own base image; only the files that are actually needed are copied into the final image, which keeps it small.

Q4: How do I keep Docker containers secure?

A: Run as a non-root user, minimize the base image, keep dependencies up to date, restrict container privileges, and scan images with security tooling.

Q5: How is Docker Compose used in production?

A: Use a dedicated docker-compose.prod.yml with production-grade settings, manage secrets through environment variables, and configure health checks and restart policies.

