mbart-translation/docker-compose.yml
jungwoo choi c8802cfc65 Initial commit: mBART Translation API with Docker support
- FastAPI-based multilingual translation REST API service
- Supports 18 languages via the mBART-50 model
- Includes Docker and Docker Compose configuration
- GPU/CPU support
- Health check and automatic API documentation generation
- External access support (172.30.1.2:8000)

Key files:
- main.py: FastAPI application
- translator.py: mBART translation service
- models.py: Pydantic data models
- config.py: environment configuration
- Dockerfile: optimized Docker image
- docker-compose.yml: simple deployment setup

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-10 09:57:19 +09:00

56 lines · 1.4 KiB · YAML

version: '3.8'

services:
  mbart-api:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: mbart-translation-api
    ports:
      - "8000:8000"
    environment:
      - HOST=0.0.0.0
      - PORT=8000
      - MODEL_NAME=facebook/mbart-large-50-many-to-many-mmt
      - MAX_LENGTH=512
      - DEVICE=cpu  # Change to 'cuda' for GPU support
    volumes:
      # Cache HuggingFace models to avoid re-downloading
      - huggingface-cache:/home/appuser/.cache/huggingface
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s

  # GPU support (uncomment if you have an NVIDIA GPU)
  # mbart-api-gpu:
  #   build:
  #     context: .
  #     dockerfile: Dockerfile
  #   container_name: mbart-translation-api-gpu
  #   ports:
  #     - "8000:8000"
  #   environment:
  #     - HOST=0.0.0.0
  #     - PORT=8000
  #     - MODEL_NAME=facebook/mbart-large-50-many-to-many-mmt
  #     - MAX_LENGTH=512
  #     - DEVICE=cuda
  #   volumes:
  #     - huggingface-cache:/home/appuser/.cache/huggingface
  #   deploy:
  #     resources:
  #       reservations:
  #         devices:
  #           - driver: nvidia
  #             count: 1
  #             capabilities: [gpu]
  #   restart: unless-stopped

volumes:
  huggingface-cache:
    driver: local
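The healthcheck parameters above determine how long a persistently failing container can run before Docker marks it unhealthy. A minimal sketch of that arithmetic, assuming standard Docker semantics (failures during `start_period` don't count toward `retries`; probes then run every `interval`, each taking up to `timeout`):

```python
def worst_case_unhealthy_seconds(start_period, interval, timeout, retries):
    """Approximate upper bound (seconds) before a container that always
    fails its probe is marked unhealthy: the grace period, then `retries`
    probes spaced `interval` apart, the last taking up to `timeout`."""
    return start_period + retries * interval + timeout

# With the values from the compose file above:
print(worst_case_unhealthy_seconds(start_period=60, interval=30, timeout=10, retries=3))  # → 160
```

The generous 60-second `start_period` matters here because mBART-50 is a large model and the first request after startup must wait for it to load.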
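Once the service is up (`docker compose up -d`), a client can reach it on the mapped port. A hedged sketch using only the standard library; the `/translate` path and the payload field names are assumptions, since main.py and models.py are not shown here (only `/health` is confirmed by the healthcheck above):

```python
import json
import urllib.request

API_URL = "http://localhost:8000"  # matches the "8000:8000" ports mapping above


def build_translate_request(text, source_lang, target_lang):
    """Build the JSON payload. Field names are assumptions, not taken
    from models.py (which is not shown in this file)."""
    return {"text": text, "source_lang": source_lang, "target_lang": target_lang}


def translate(text, source_lang, target_lang):
    """POST the payload to a hypothetical /translate endpoint and
    return the decoded JSON response."""
    payload = json.dumps(build_translate_request(text, source_lang, target_lang)).encode()
    req = urllib.request.Request(
        f"{API_URL}/translate",  # assumption: actual route is defined in main.py
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

The `/health` endpoint used by the healthcheck can be probed the same way with a plain GET, which is a quick way to confirm the model has finished loading.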