feat: Setup KIND cluster for local Kubernetes development

- Created 5-node KIND cluster (1 control-plane + 4 workers)
- Configured NodePort mappings for console access (30080, 30081)
- Created namespace separation (site11-console, site11-pipeline)
- Deployed MongoDB and Redis in KIND cluster
- Deployed Console backend and frontend with NodePort services
- All 4 pods running successfully and verified with browser test

Infrastructure:
- k8s/kind-dev-cluster.yaml: 5-node cluster configuration
- k8s/kind/console-mongodb-redis.yaml: Database deployments
- k8s/kind/console-backend.yaml: Backend with NodePort
- k8s/kind/console-frontend.yaml: Frontend with NodePort

Management Tools:
- scripts/kind-setup.sh: Comprehensive cluster management script
- docker-compose.kubernetes.yml: Monitoring helper services

Documentation:
- KUBERNETES.md: Complete usage guide for developers
- docs/KIND_SETUP.md: Detailed KIND setup documentation
- docs/PROGRESS.md: Updated with KIND cluster completion

Console Services Access:
- Frontend: http://localhost:3000 (NodePort 30080)
- Backend: http://localhost:8000 (NodePort 30081)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: jungwoo choi
Date: 2025-10-28 18:28:36 +09:00
Commit: e008f17457 (parent: e60e531cdc)
9 changed files with 1448 additions and 14 deletions

docs/KIND_SETUP.md (new file, 393 lines)

@@ -0,0 +1,393 @@
# KIND (Kubernetes IN Docker) Development Environment Setup
## Overview
This project uses KIND instead of Docker Desktop's built-in Kubernetes for the local development environment.
### Why KIND
1. **Independence**: Managed separately from Docker Desktop's Kubernetes
2. **Reproducibility**: The cluster topology is captured in a configuration file
3. **Multi-node**: A multi-node layout that mirrors production more closely
4. **Fast resets**: Easy to delete and recreate the cluster when needed
5. **Resource control**: Resources can be allocated per node
## Prerequisites
### 1. Install KIND
```bash
# macOS (Homebrew)
brew install kind

# Or download the binary directly (use kind-darwin-arm64 on Apple Silicon)
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-darwin-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Verify the installation
kind version
```
### 2. Install kubectl
```bash
# macOS (Homebrew)
brew install kubectl

# Verify the installation
kubectl version --client
```
### 3. Confirm Docker Is Running
```bash
docker ps
# Docker must be running before creating a KIND cluster
```
## Cluster Configuration
### 5-Node Cluster Configuration File
File location: `k8s/kind-dev-cluster.yaml`
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: site11-dev
# Node layout
nodes:
  # Control plane (master node)
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "node-type=control-plane"
    extraPortMappings:
      # Console Frontend
      - containerPort: 30080
        hostPort: 3000
        protocol: TCP
      # Console Backend
      - containerPort: 30081
        hostPort: 8000
        protocol: TCP
  # Worker node 1 (Console services)
  - role: worker
    labels:
      workload: console
      node-type: worker
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "workload=console"
  # Worker node 2 (Pipeline services - collection)
  - role: worker
    labels:
      workload: pipeline-collector
      node-type: worker
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "workload=pipeline-collector"
  # Worker node 3 (Pipeline services - processing)
  - role: worker
    labels:
      workload: pipeline-processor
      node-type: worker
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "workload=pipeline-processor"
  # Worker node 4 (Pipeline services - generation)
  - role: worker
    labels:
      workload: pipeline-generator
      node-type: worker
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "workload=pipeline-generator"
```
### Node Role Assignment
- **Control Plane**: Cluster management, API server
- **Worker 1 (console)**: Console Backend, Console Frontend
- **Worker 2 (pipeline-collector)**: RSS Collector, Google Search
- **Worker 3 (pipeline-processor)**: Translator
- **Worker 4 (pipeline-generator)**: AI Article Generator, Image Generator
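The workload labels above only take effect if each Deployment selects on them via `nodeSelector`. A hypothetical excerpt (the actual manifests live under `k8s/`; the Deployment shape here is illustrative, not copied from the repo):

```yaml
# Hypothetical excerpt: schedule console-backend only on the console worker
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console-backend
  namespace: site11-console
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console-backend
  template:
    metadata:
      labels:
        app: console-backend
    spec:
      nodeSelector:
        workload: console   # matches the label set in kind-dev-cluster.yaml
      containers:
        - name: console-backend
          image: yakenator/site11-console-backend:latest
```

Without a `nodeSelector` (or affinity rule), the scheduler is free to place pods on any worker regardless of the labels.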
## Cluster Management Commands
### Creating the Cluster
```bash
# Create the KIND cluster
kind create cluster --config k8s/kind-dev-cluster.yaml

# Verify creation
kubectl cluster-info --context kind-site11-dev
kubectl get nodes
```
### Deleting the Cluster
```bash
# Delete the cluster
kind delete cluster --name site11-dev

# List all KIND clusters
kind get clusters
```
### Switching Contexts
```bash
# Switch to the KIND cluster
kubectl config use-context kind-site11-dev

# Show the current context
kubectl config current-context

# Show all contexts
kubectl config get-contexts
```
## Deploying Services
### 1. Create Namespaces
```bash
# Console namespace
kubectl create namespace site11-console

# Pipeline namespace
kubectl create namespace site11-pipeline
```
### 2. Apply ConfigMaps and Secrets
```bash
# Pipeline configuration
kubectl apply -f k8s/pipeline/configmap-dockerhub.yaml
```
### 3. Deploy the Services
```bash
# Console services
kubectl apply -f k8s/console/console-backend.yaml
kubectl apply -f k8s/console/console-frontend.yaml

# Pipeline services
kubectl apply -f k8s/pipeline/rss-collector-dockerhub.yaml
kubectl apply -f k8s/pipeline/google-search-dockerhub.yaml
kubectl apply -f k8s/pipeline/translator-dockerhub.yaml
kubectl apply -f k8s/pipeline/ai-article-generator-dockerhub.yaml
kubectl apply -f k8s/pipeline/image-generator-dockerhub.yaml
```
### 4. Verify the Deployment
```bash
# Check pod status
kubectl -n site11-console get pods -o wide
kubectl -n site11-pipeline get pods -o wide

# Check services
kubectl -n site11-console get svc
kubectl -n site11-pipeline get svc

# Check pod distribution across nodes
kubectl get pods -A -o wide
```
## Accessing the Services
### NodePort (Recommended)
The KIND cluster exposes services through NodePorts.
```yaml
# Example: Console Frontend Service
apiVersion: v1
kind: Service
metadata:
  name: console-frontend
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080  # http://localhost:3000
  selector:
    app: console-frontend
```
Access:
- Console Frontend: http://localhost:3000
- Console Backend: http://localhost:8000
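A quick sanity check from the host (a hedged sketch: it only reports whether anything answers on the two mapped ports; the `/health` path is the backend health endpoint mentioned later in this doc):

```shell
#!/bin/sh
# Probe the two NodePort-mapped host ports and report reachability.
for url in http://localhost:3000 http://localhost:8000/health; do
  if curl -sf --max-time 3 "$url" >/dev/null 2>&1; then
    echo "$url: reachable"
  else
    echo "$url: not reachable"
  fi
done
```

If either URL is not reachable, confirm the cluster was created with the `extraPortMappings` from `k8s/kind-dev-cluster.yaml`; those mappings are fixed at cluster creation time.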
### Port Forward (Alternative)
```bash
# Console Backend
kubectl -n site11-console port-forward svc/console-backend 8000:8000 &

# Console Frontend
kubectl -n site11-console port-forward svc/console-frontend 3000:80 &
```
## Monitoring
### Cluster Status
```bash
# Node status
kubectl get nodes

# All resources
kubectl get all -A

# Pods on a specific node
kubectl get pods -A -o wide | grep <node-name>
```
### Viewing Logs
```bash
# Pod logs
kubectl -n site11-console logs <pod-name>

# Follow logs in real time
kubectl -n site11-console logs -f <pod-name>

# Logs from the previous container instance
kubectl -n site11-console logs <pod-name> --previous
```
### Resource Usage
```bash
# Node resources
kubectl top nodes

# Pod resources
kubectl top pods -A
```
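Note that `kubectl top` depends on metrics-server, which KIND does not install by default; without it the commands above fail with a "Metrics API not available" style error. If you install metrics-server from its upstream `components.yaml`, on KIND it typically also needs insecure-TLS kubelet flags added to its container args. A hedged sketch of the flags (commonly used for KIND, not taken from this repo):

```yaml
# Flags usually added to the metrics-server container args
# (Deployment "metrics-server" in namespace kube-system) when running on KIND
spec:
  template:
    spec:
      containers:
        - name: metrics-server
          args:
            - --kubelet-insecure-tls
            - --kubelet-preferred-address-types=InternalIP
```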
## Troubleshooting
### Image Load Problems
KIND does not automatically pick up local Docker images.
```bash
# Load local images into the KIND nodes
kind load docker-image yakenator/site11-console-backend:latest --name site11-dev
kind load docker-image yakenator/site11-console-frontend:latest --name site11-dev

# Check which images are present on a node (KIND nodes are plain Docker containers)
docker exec site11-dev-control-plane crictl images | grep site11

# Or use imagePullPolicy: Always (pulls automatically from Docker Hub)
```
### Pods That Fail to Start
```bash
# Inspect the pod
kubectl -n site11-console describe pod <pod-name>

# Check events
kubectl -n site11-console get events --sort-by='.lastTimestamp'
```
### Network Problems
```bash
# Check service endpoints
kubectl -n site11-console get endpoints

# DNS test
kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup console-backend.site11-console.svc.cluster.local

# In-cluster HTTP check (assumes the backend serves /health on port 8000)
kubectl run -it --rm debug --image=busybox --restart=Never -- wget -qO- http://console-backend.site11-console.svc.cluster.local:8000/health
```
## Development Workflow
### 1. Redeploying After Code Changes
```bash
# Build the Docker image
docker build -t yakenator/site11-console-backend:latest -f services/console/backend/Dockerfile services/console/backend

# Push to Docker Hub
docker push yakenator/site11-console-backend:latest

# Restart the pods (pulls the new image)
kubectl -n site11-console rollout restart deployment console-backend

# Or delete the pods (they are recreated automatically)
kubectl -n site11-console delete pod -l app=console-backend
```
### 2. Local Development (Fast Iteration)
```bash
# Run the service locally
cd services/console/backend
uvicorn app.main:app --reload --port 8000

# Connect to MongoDB inside the KIND cluster
kubectl -n site11-console port-forward svc/mongodb 27017:27017
```
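With the port-forward above running, the locally run backend can be pointed at the in-cluster MongoDB. The variable name below is an assumption for illustration; use whatever setting the backend actually reads:

```shell
#!/bin/sh
# Hypothetical: point the local backend at the forwarded MongoDB port
# (MONGODB_URI is an assumed variable name, not confirmed by this repo).
export MONGODB_URI="mongodb://localhost:27017"
echo "$MONGODB_URI"  # → mongodb://localhost:27017
```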
### 3. Resetting the Cluster
```bash
# Recreate from scratch
kind delete cluster --name site11-dev
kind create cluster --config k8s/kind-dev-cluster.yaml

# Redeploy the services
kubectl apply -f k8s/console/
kubectl apply -f k8s/pipeline/
```
## Performance Tuning
### Per-Node Resource Limits (Optional)
```yaml
nodes:
  - role: worker
    extraMounts:
      - hostPath: /path/to/data
        containerPath: /data
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            max-pods: "50"
            cpu-manager-policy: "static"
```
### Image Pull Policy
```yaml
# Set in the Deployment
spec:
  template:
    spec:
      containers:
        - name: console-backend
          image: yakenator/site11-console-backend:latest
          imagePullPolicy: Always  # always pull the latest image
```
## Backup and Restore
### Backing Up Cluster Resources
```bash
# Back up current resources
# Note: "kubectl get all" does not cover ConfigMaps, Secrets, or PVCs; add them explicitly if needed
kubectl get all -A -o yaml > backup-$(date +%Y%m%d).yaml
```
### Restoring
```bash
# Restore from a backup
kubectl apply -f backup-20251028.yaml
```
## References
- KIND documentation: https://kind.sigs.k8s.io/
- Kubernetes documentation: https://kubernetes.io/docs/
- KIND on GitHub: https://github.com/kubernetes-sigs/kind

docs/PROGRESS.md

@@ -6,7 +6,7 @@
## Current Status
- **Date Started**: 2025-09-09
- **Last Updated**: 2025-10-28
-- **Current Phase**: Phase 2 In Progress 🔄 (Service Management CRUD - Backend Complete)
+- **Current Phase**: KIND Cluster Setup Complete
- **Next Action**: Phase 2 - Frontend UI Implementation
## Completed Checkpoints
@@ -130,6 +130,40 @@ All authentication endpoints tested and working:
- `/services/console/frontend/src/types/service.ts` - TypeScript types
- `/services/console/frontend/src/api/service.ts` - API client
### KIND Cluster Setup (Local Development Environment) ✅
**Completed Date**: 2025-10-28
#### Infrastructure Setup
✅ KIND (Kubernetes IN Docker) 5-node cluster
✅ Cluster configuration with role-based workers
✅ NodePort mappings for console access (30080, 30081)
✅ Namespace separation (site11-console, site11-pipeline)
✅ MongoDB and Redis deployed in cluster
✅ Console backend and frontend deployed with NodePort services
✅ All 4 pods running successfully
#### Management Tools
✅ `kind-setup.sh` script for cluster management
✅ `docker-compose.kubernetes.yml` for monitoring
✅ Comprehensive documentation (KUBERNETES.md, KIND_SETUP.md)
#### Kubernetes Resources Created
- **Cluster Config**: `/k8s/kind-dev-cluster.yaml`
- **Console MongoDB/Redis**: `/k8s/kind/console-mongodb-redis.yaml`
- **Console Backend**: `/k8s/kind/console-backend.yaml`
- **Console Frontend**: `/k8s/kind/console-frontend.yaml`
- **Management Script**: `/scripts/kind-setup.sh`
- **Docker Compose**: `/docker-compose.kubernetes.yml`
- **Documentation**: `/KUBERNETES.md`
#### Verification Results
✅ Cluster created with 5 nodes (all Ready)
✅ Console namespace with 4 running pods
✅ NodePort services accessible (3000, 8000)
✅ Frontend login/register tested successfully
✅ Backend API health check passed
✅ Authentication system working in KIND cluster
### Earlier Checkpoints
✅ Project structure planning (CLAUDE.md)
✅ Implementation plan created (docs/PLAN.md)
@@ -151,27 +185,35 @@ All authentication endpoints tested and working:
## Deployment Status
-### Kubernetes Cluster: site11-pipeline
+### KIND Cluster: site11-dev ✅
**Cluster Created**: 2025-10-28
**Nodes**: 5 (1 control-plane + 4 workers)
```bash
-# Backend
-kubectl -n site11-pipeline get pods -l app=console-backend
-# Status: 2/2 Running
+# Console Namespace
+kubectl -n site11-console get pods
+# Status: 4/4 Running (mongodb, redis, console-backend, console-frontend)
-# Frontend
-kubectl -n site11-pipeline get pods -l app=console-frontend
-# Status: 2/2 Running
+# Cluster Status
+./scripts/kind-setup.sh status
-# Port Forwarding (for testing)
-kubectl -n site11-pipeline port-forward svc/console-backend 8000:8000
-kubectl -n site11-pipeline port-forward svc/console-frontend 3000:80
+# Management
+./scripts/kind-setup.sh {create|delete|deploy-console|status|logs|access|setup}
```
-### Access URLs
-- Frontend: http://localhost:3000 (via port-forward)
-- Backend API: http://localhost:8000 (via port-forward)
+### Access URLs (NodePort)
+- Frontend: http://localhost:3000 (NodePort 30080)
+- Backend API: http://localhost:8000 (NodePort 30081)
+- Backend Health: http://localhost:8000/health
+- API Docs: http://localhost:8000/docs
### Monitoring
```bash
# Start monitoring
docker-compose -f docker-compose.kubernetes.yml up -d
docker-compose -f docker-compose.kubernetes.yml logs -f kind-monitor
```
## Next Immediate Steps (Phase 2)
### Service Management CRUD