feat: Setup KIND cluster for local Kubernetes development

- Created 5-node KIND cluster (1 control-plane + 4 workers)
- Configured NodePort mappings for console access (30080, 30081)
- Created namespace separation (site11-console, site11-pipeline)
- Deployed MongoDB and Redis in KIND cluster
- Deployed Console backend and frontend with NodePort services
- All 4 pods running successfully and verified with browser test

Infrastructure:
- k8s/kind-dev-cluster.yaml: 5-node cluster configuration
- k8s/kind/console-mongodb-redis.yaml: Database deployments
- k8s/kind/console-backend.yaml: Backend with NodePort
- k8s/kind/console-frontend.yaml: Frontend with NodePort

Management Tools:
- scripts/kind-setup.sh: Comprehensive cluster management script
- docker-compose.kubernetes.yml: Monitoring helper services

Documentation:
- KUBERNETES.md: Complete usage guide for developers
- docs/KIND_SETUP.md: Detailed KIND setup documentation
- docs/PROGRESS.md: Updated with KIND cluster completion

Console Services Access:
- Frontend: http://localhost:3000 (NodePort 30080)
- Backend: http://localhost:8000 (NodePort 30081)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
jungwoo choi
2025-10-28 18:28:36 +09:00
parent e60e531cdc
commit e008f17457
9 changed files with 1448 additions and 14 deletions

KUBERNETES.md Normal file

@@ -0,0 +1,339 @@
# Kubernetes Development Environment (KIND)
The Site11 project uses KIND (Kubernetes IN Docker) to provide a local Kubernetes development environment.
## Table of Contents
- [Prerequisites](#prerequisites)
- [Quick Start](#quick-start)
- [Management](#management)
- [Access Information](#access-information)
- [Troubleshooting](#troubleshooting)
## Prerequisites
The following tools must be installed:
```bash
# Docker Desktop
brew install --cask docker
# KIND
brew install kind
# kubectl
brew install kubectl
```
## Quick Start
### Option 1: Use the script (recommended)
```bash
# Set up the whole environment in one step (create cluster + deploy services)
./scripts/kind-setup.sh setup
# Check status
./scripts/kind-setup.sh status
# Show access information
./scripts/kind-setup.sh access
```
### Option 2: Use docker-compose
```bash
# Start the helper containers (includes monitoring)
docker-compose -f docker-compose.kubernetes.yml up -d
# Tail the monitoring logs
docker-compose -f docker-compose.kubernetes.yml logs -f kind-monitor
# Stop the helper containers
docker-compose -f docker-compose.kubernetes.yml down
```
**Note**: the KIND cluster itself must still be managed with the `kind` CLI or the setup script; docker-compose only provides monitoring and management helpers.
### Option 3: Manual setup
```bash
# 1. Create the cluster
kind create cluster --config k8s/kind-dev-cluster.yaml
# 2. Create the namespaces
kubectl create namespace site11-console
kubectl create namespace site11-pipeline
# 3. Load the Docker images
kind load docker-image yakenator/site11-console-backend:latest --name site11-dev
kind load docker-image yakenator/site11-console-frontend:latest --name site11-dev
# 4. Deploy the services
kubectl apply -f k8s/kind/console-mongodb-redis.yaml
kubectl apply -f k8s/kind/console-backend.yaml
kubectl apply -f k8s/kind/console-frontend.yaml
# 5. Check the status
kubectl get pods -n site11-console
```
## Management
### Script commands
```bash
# Create the cluster
./scripts/kind-setup.sh create
# Delete the cluster
./scripts/kind-setup.sh delete
# Create the namespaces
./scripts/kind-setup.sh deploy-namespaces
# Load the Docker images
./scripts/kind-setup.sh load-images
# Deploy the Console services
./scripts/kind-setup.sh deploy-console
# Check status
./scripts/kind-setup.sh status
# Show pod logs
./scripts/kind-setup.sh logs site11-console [pod-name]
# Show access information
./scripts/kind-setup.sh access
```
### kubectl commands
```bash
# List all resources
kubectl get all -n site11-console
# Pod details
kubectl describe pod <pod-name> -n site11-console
# Follow pod logs
kubectl logs <pod-name> -n site11-console -f
# Open a shell inside a pod
kubectl exec -it <pod-name> -n site11-console -- /bin/bash
# List services
kubectl get svc -n site11-console
# List nodes
kubectl get nodes
```
## Cluster Layout
### Node layout (5 nodes)
- **Control plane (1)**: the cluster master node
  - NodePort mappings: host port 3000 → NodePort 30080 (Frontend), host port 8000 → NodePort 30081 (Backend)
- **Worker nodes (4)**:
  - `workload=console`: dedicated to Console services
  - `workload=pipeline-collector`: data collection services
  - `workload=pipeline-processor`: data processing services
  - `workload=pipeline-generator`: content generation services
### Namespaces
- `site11-console`: Console frontend/backend, MongoDB, Redis
- `site11-pipeline`: Pipeline services
## Access Information
### Console Services
- **Frontend**: http://localhost:3000
  - NodePort: 30080
  - Container port: 80
- **Backend**: http://localhost:8000
  - NodePort: 30081
  - Container port: 8000
### Internal services (reachable only from inside the cluster)
- **MongoDB**: `mongodb://mongodb:27017`
- **Redis**: `redis://redis:6379`
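These in-cluster hostnames are resolved by Kubernetes DNS; the short names work because the backend runs in the same namespace, while the fully qualified form `<service>.<namespace>.svc.cluster.local` works from any namespace. A small sketch of the long form (the `svc_url` helper is ours, not part of the repo):

```bash
# Hypothetical helper: build the fully qualified in-cluster URL for a
# service in the site11-console namespace.
svc_url() {
  printf '%s://%s.site11-console.svc.cluster.local:%s\n' "$1" "$2" "$3"
}

svc_url mongodb mongodb 27017   # mongodb://mongodb.site11-console.svc.cluster.local:27017
svc_url redis redis 6379        # redis://redis.site11-console.svc.cluster.local:6379
```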
## Development Workflow
### 1. Deploy after a code change
```bash
# 1. Build the Docker image
docker build -t yakenator/site11-console-backend:latest \
  -f services/console/backend/Dockerfile \
  services/console/backend
# 2. Load the image into the KIND cluster
kind load docker-image yakenator/site11-console-backend:latest --name site11-dev
# 3. Restart the deployment
kubectl rollout restart deployment/console-backend -n site11-console
# 4. Check the rollout status
kubectl rollout status deployment/console-backend -n site11-console
```
### 2. The quick way, with the script
```bash
# Load the images (build them first)
./scripts/kind-setup.sh load-images
# Restart the deployments
kubectl rollout restart deployment/console-backend -n site11-console
kubectl rollout restart deployment/console-frontend -n site11-console
```
### 3. Full redeploy
```bash
# Delete and recreate the cluster
./scripts/kind-setup.sh delete
./scripts/kind-setup.sh setup
```
## Monitoring
### Using the docker-compose monitor
```bash
# Start monitoring
docker-compose -f docker-compose.kubernetes.yml up -d
# Follow the live logs (updated every 30 seconds)
docker-compose -f docker-compose.kubernetes.yml logs -f kind-monitor
```
The monitoring container prints the following every 30 seconds:
- Node status
- Pod status in the Console namespace
- Pod status in the Pipeline namespace
## Troubleshooting
### Pods fail to start
```bash
# Check pod status
kubectl get pods -n site11-console
# Inspect pod details
kubectl describe pod <pod-name> -n site11-console
# Check pod logs
kubectl logs <pod-name> -n site11-console
```
### Image pull errors
```bash
# Check local images
docker images | grep site11
# Build the image if it is missing
docker build -t yakenator/site11-console-backend:latest \
  -f services/console/backend/Dockerfile \
  services/console/backend
# Load the image into KIND
kind load docker-image yakenator/site11-console-backend:latest --name site11-dev
```
### NodePort not reachable
```bash
# Check the services
kubectl get svc -n site11-console
# Verify the NodePorts (should be 30080 and 30081)
kubectl describe svc console-frontend -n site11-console
kubectl describe svc console-backend -n site11-console
# Fall back to port-forwarding if the problem persists
kubectl port-forward svc/console-frontend 3000:3000 -n site11-console
kubectl port-forward svc/console-backend 8000:8000 -n site11-console
```
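Before reaching for port-forwarding, it can help to confirm whether anything is listening on the host port at all. A minimal bash-only probe (our own sketch using bash's `/dev/tcp` pseudo-device; it is not part of the repo scripts and requires bash, not sh):

```bash
# Return 0 if a local TCP port accepts connections, non-zero otherwise.
port_open() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

port_open 3000 && echo "frontend port reachable" || echo "nothing listening on 3000"
```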
### Full cluster reset
```bash
# Delete the KIND cluster
kind delete cluster --name site11-dev
# Prune Docker networks (if needed)
docker network prune -f
# Recreate the cluster
./scripts/kind-setup.sh setup
```
### MongoDB connection failures
```bash
# Check the MongoDB pod
kubectl get pod -n site11-console -l app=mongodb
# Check the MongoDB logs
kubectl logs -n site11-console -l app=mongodb
# Check the MongoDB service
kubectl get svc mongodb -n site11-console
# Test connectivity from inside a pod
kubectl exec -it <console-backend-pod> -n site11-console -- \
  curl mongodb:27017
```
## References
- [KIND documentation](https://kind.sigs.k8s.io/)
- [Kubernetes documentation](https://kubernetes.io/docs/)
- [KIND setup guide](./docs/KIND_SETUP.md)
## Useful Tips
### kubectl shell completion
```bash
# Bash
echo 'source <(kubectl completion bash)' >>~/.bashrc
# Zsh
echo 'source <(kubectl completion zsh)' >>~/.zshrc
```
### kubectl aliases
```bash
# Add to ~/.bashrc or ~/.zshrc
alias k='kubectl'
alias kgp='kubectl get pods'
alias kgs='kubectl get svc'
alias kgn='kubectl get nodes'
alias kl='kubectl logs'
alias kd='kubectl describe'
```
### Quick context switching
```bash
# Show the current context
kubectl config current-context
# Switch to the KIND context
kubectl config use-context kind-site11-dev
# Set the default namespace
kubectl config set-context --current --namespace=site11-console
```

docker-compose.kubernetes.yml Normal file

@@ -0,0 +1,112 @@
version: '3.8'

# docker-compose file for managing the KIND (Kubernetes IN Docker) cluster
# Usage:
#   start:   docker-compose -f docker-compose.kubernetes.yml up -d
#   stop:    docker-compose -f docker-compose.kubernetes.yml down
#   restart: docker-compose -f docker-compose.kubernetes.yml restart
#   logs:    docker-compose -f docker-compose.kubernetes.yml logs -f

services:
  # Helper service for KIND cluster management.
  # The KIND cluster itself cannot be controlled directly by docker-compose,
  # so this helper container is used for management tasks.
  kind-manager:
    image: docker/compose:latest
    container_name: site11-kind-manager
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./k8s:/k8s:ro
      - ./scripts:/scripts:ro
    networks:
      - kind
    entrypoint: /bin/sh
    command: |
      -c "
      echo '====================================';
      echo 'Site11 KIND Cluster Manager';
      echo '====================================';
      echo '';
      echo 'KIND cluster management commands:';
      echo '  create cluster: kind create cluster --config /k8s/kind-dev-cluster.yaml';
      echo '  delete cluster: kind delete cluster --name site11-dev';
      echo '  cluster status: kubectl cluster-info --context kind-site11-dev';
      echo '  list nodes:     kubectl get nodes';
      echo '';
      echo 'Deploying services:';
      echo '  namespace: kubectl create namespace site11-console';
      echo '  console:   kubectl apply -f /k8s/kind/';
      echo '';
      echo 'Helper container will keep running...';
      tail -f /dev/null
      "
    restart: unless-stopped

  # Helper service for running kubectl commands
  kubectl:
    image: bitnami/kubectl:latest
    container_name: site11-kubectl
    volumes:
      - ~/.kube:/root/.kube:ro
      - ./k8s:/k8s:ro
    networks:
      - kind
    entrypoint: /bin/bash
    command: |
      -c "
      echo '====================================';
      echo 'Kubectl Helper Container';
      echo '====================================';
      echo '';
      echo 'kubectl commands are available';
      echo '  example: docker exec site11-kubectl kubectl get pods -A';
      echo '';
      tail -f /dev/null
      "
    restart: unless-stopped

  # KIND cluster health check and monitoring
  kind-monitor:
    image: bitnami/kubectl:latest
    container_name: site11-kind-monitor
    volumes:
      - ~/.kube:/root/.kube:ro
    networks:
      - kind
    entrypoint: /bin/bash
    command: |
      -c "
      while true; do
        echo '==== KIND Cluster Status ====';
        kubectl get nodes --context kind-site11-dev 2>/dev/null || echo 'Cluster not running';
        echo '';
        echo '==== Console Namespace Pods ====';
        kubectl get pods -n site11-console --context kind-site11-dev 2>/dev/null || echo 'Namespace not found';
        echo '';
        echo '==== Pipeline Namespace Pods ====';
        kubectl get pods -n site11-pipeline --context kind-site11-dev 2>/dev/null || echo 'Namespace not found';
        echo '';
        sleep 30;
      done
      "
    restart: unless-stopped

networks:
  kind:
    name: kind
    driver: bridge

# Notes:
# 1. The KIND cluster itself is not controlled directly by docker-compose.
# 2. This file only provides helper containers for managing the KIND cluster.
# 3. Actual cluster creation/deletion must be done with the kind CLI.
#
# KIND cluster lifecycle:
#   create: kind create cluster --config k8s/kind-dev-cluster.yaml
#   delete: kind delete cluster --name site11-dev
#   list:   kind get clusters
#
# docker-compose commands:
#   start helpers: docker-compose -f docker-compose.kubernetes.yml up -d
#   stop helpers:  docker-compose -f docker-compose.kubernetes.yml down
#   view logs:     docker-compose -f docker-compose.kubernetes.yml logs -f kind-monitor

docs/KIND_SETUP.md Normal file

@@ -0,0 +1,393 @@
# KIND (Kubernetes IN Docker) Development Environment Setup
## Overview
We use KIND instead of Docker Desktop's built-in Kubernetes for the development environment.
### Why KIND
1. **Independence**: managed separately from Docker Desktop Kubernetes
2. **Reproducibility**: the cluster layout is captured in a config file
3. **Multi-node**: a multi-node topology close to production
4. **Fast restarts**: easy to delete and recreate the cluster when needed
5. **Resource control**: resources can be assigned per node
## Prerequisites
### 1. Install KIND
```bash
# macOS (Homebrew)
brew install kind
# Or download directly
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-darwin-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
# Verify the installation
kind version
```
### 2. Install kubectl
```bash
# macOS (Homebrew)
brew install kubectl
# Verify the installation
kubectl version --client
```
### 3. Make sure Docker is running
```bash
docker ps
# Docker must be running
```
## Cluster Configuration
### 5-node cluster config file
File: `k8s/kind-dev-cluster.yaml`
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: site11-dev

# Node layout
nodes:
  # Control plane (master node)
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "node-type=control-plane"
    extraPortMappings:
      # Console Frontend
      - containerPort: 30080
        hostPort: 3000
        protocol: TCP
      # Console Backend
      - containerPort: 30081
        hostPort: 8000
        protocol: TCP

  # Worker node 1 (Console services)
  - role: worker
    labels:
      workload: console
      node-type: worker
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "workload=console"

  # Worker node 2 (Pipeline services - collection)
  - role: worker
    labels:
      workload: pipeline-collector
      node-type: worker
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "workload=pipeline-collector"

  # Worker node 3 (Pipeline services - processing)
  - role: worker
    labels:
      workload: pipeline-processor
      node-type: worker
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "workload=pipeline-processor"

  # Worker node 4 (Pipeline services - generation)
  - role: worker
    labels:
      workload: pipeline-generator
      node-type: worker
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "workload=pipeline-generator"
```
### Node role assignment
- **Control Plane**: cluster management, API server
- **Worker 1 (console)**: Console Backend, Console Frontend
- **Worker 2 (pipeline-collector)**: RSS Collector, Google Search
- **Worker 3 (pipeline-processor)**: Translator
- **Worker 4 (pipeline-generator)**: AI Article Generator, Image Generator
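For scripting, the placement table above can be captured as a lookup. The `services_for` helper and the service-name strings below are our own summary of this table, not something the repo provides:

```bash
# Map a workload label (as set in k8s/kind-dev-cluster.yaml) to the
# services this document schedules onto that worker.
services_for() {
  case "$1" in
    console)            echo "console-backend console-frontend" ;;
    pipeline-collector) echo "rss-collector google-search" ;;
    pipeline-processor) echo "translator" ;;
    pipeline-generator) echo "ai-article-generator image-generator" ;;
    *)                  return 1 ;;
  esac
}

services_for pipeline-processor   # translator
```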
## Cluster Management Commands
### Create the cluster
```bash
# Create the KIND cluster
kind create cluster --config k8s/kind-dev-cluster.yaml
# Verify creation
kubectl cluster-info --context kind-site11-dev
kubectl get nodes
```
### Delete the cluster
```bash
# Delete the cluster
kind delete cluster --name site11-dev
# List all KIND clusters
kind get clusters
```
### Switch contexts
```bash
# Switch to the KIND cluster
kubectl config use-context kind-site11-dev
# Show the current context
kubectl config current-context
# List all contexts
kubectl config get-contexts
```
## Deploying Services
### 1. Create namespaces
```bash
# Console namespace
kubectl create namespace site11-console
# Pipeline namespace
kubectl create namespace site11-pipeline
```
### 2. Apply ConfigMaps and Secrets
```bash
# Pipeline configuration
kubectl apply -f k8s/pipeline/configmap-dockerhub.yaml
```
### 3. Deploy the services
```bash
# Console services
kubectl apply -f k8s/console/console-backend.yaml
kubectl apply -f k8s/console/console-frontend.yaml
# Pipeline services
kubectl apply -f k8s/pipeline/rss-collector-dockerhub.yaml
kubectl apply -f k8s/pipeline/google-search-dockerhub.yaml
kubectl apply -f k8s/pipeline/translator-dockerhub.yaml
kubectl apply -f k8s/pipeline/ai-article-generator-dockerhub.yaml
kubectl apply -f k8s/pipeline/image-generator-dockerhub.yaml
```
### 4. Verify the deployment
```bash
# Pod status
kubectl -n site11-console get pods -o wide
kubectl -n site11-pipeline get pods -o wide
# Services
kubectl -n site11-console get svc
kubectl -n site11-pipeline get svc
# Pod distribution across nodes
kubectl get pods -A -o wide
```
## Access
### NodePort (recommended)
The KIND cluster exposes services through NodePorts.
```yaml
# Example: Console Frontend Service
apiVersion: v1
kind: Service
metadata:
  name: console-frontend
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080  # http://localhost:3000
  selector:
    app: console-frontend
```
Access:
- Console Frontend: http://localhost:3000
- Console Backend: http://localhost:8000
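The path of a browser request is: host port → NodePort on the control-plane node (via `extraPortMappings`) → the Service's `port` → the container's `targetPort`. A lookup mirroring the two host mappings from the cluster config (the function name is ours):

```bash
# Host ports mapped in k8s/kind-dev-cluster.yaml, keyed by NodePort.
hostport_for_nodeport() {
  case "$1" in
    30080) echo 3000 ;;  # console-frontend
    30081) echo 8000 ;;  # console-backend
    *) return 1 ;;
  esac
}

hostport_for_nodeport 30080   # 3000
```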
### Port-forward (alternative)
```bash
# Console Backend
kubectl -n site11-console port-forward svc/console-backend 8000:8000 &
# Console Frontend
kubectl -n site11-console port-forward svc/console-frontend 3000:80 &
```
## Monitoring
### Cluster status
```bash
# Node status
kubectl get nodes
# All resources
kubectl get all -A
# Pods on a specific node
kubectl get pods -A -o wide | grep <node-name>
```
### Logs
```bash
# Pod logs
kubectl -n site11-console logs <pod-name>
# Follow logs
kubectl -n site11-console logs -f <pod-name>
# Logs from the previous container
kubectl -n site11-console logs <pod-name> --previous
```
### Resource usage
Note: `kubectl top` requires the metrics-server add-on, which KIND does not install by default.
```bash
# Node resources
kubectl top nodes
# Pod resources
kubectl top pods -A
```
## Troubleshooting
### Image loading issues
KIND does not automatically see images from the local Docker daemon; they must be loaded explicitly.
```bash
# Load local images into KIND
kind load docker-image yakenator/site11-console-backend:latest --name site11-dev
kind load docker-image yakenator/site11-console-frontend:latest --name site11-dev
# Alternatively, use imagePullPolicy: Always to pull from Docker Hub
```
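The load can be guarded so it fails fast with a clear message instead of a confusing `ErrImagePull` later. A sketch (the `load_if_present` wrapper is ours; it assumes the `docker` and `kind` CLIs shown above):

```bash
# Load an image into the site11-dev cluster only if it exists locally.
load_if_present() {
  if docker image inspect "$1" >/dev/null 2>&1; then
    kind load docker-image "$1" --name site11-dev
  else
    echo "image $1 not found locally; build it first" >&2
    return 1
  fi
}
```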
### Pods fail to start
```bash
# Inspect the pod
kubectl -n site11-console describe pod <pod-name>
# Check events
kubectl -n site11-console get events --sort-by='.lastTimestamp'
```
### Network issues
```bash
# Check service endpoints
kubectl -n site11-console get endpoints
# DNS test
kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup console-backend.site11-console.svc.cluster.local
```
## Development Workflow
### 1. Redeploy after a code change
```bash
# Build the Docker image
docker build -t yakenator/site11-console-backend:latest -f services/console/backend/Dockerfile services/console/backend
# Push to Docker Hub
docker push yakenator/site11-console-backend:latest
# Restart the pods (pulls the new image)
kubectl -n site11-console rollout restart deployment console-backend
# Or delete the pods (they are recreated automatically)
kubectl -n site11-console delete pod -l app=console-backend
```
### 2. Local development (fast iteration)
```bash
# Run the service locally
cd services/console/backend
uvicorn app.main:app --reload --port 8000
# Reach MongoDB inside the KIND cluster
kubectl -n site11-console port-forward svc/mongodb 27017:27017
```
### 3. Reset the cluster
```bash
# Recreate from scratch
kind delete cluster --name site11-dev
kind create cluster --config k8s/kind-dev-cluster.yaml
# Redeploy the services
kubectl apply -f k8s/console/
kubectl apply -f k8s/pipeline/
```
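The three reset steps can be wrapped in one helper. The sketch below is ours (not in the repo) and supports `DRY_RUN=1` to print the commands instead of running them:

```bash
# One-shot cluster reset: delete, recreate, redeploy.
# Set DRY_RUN=1 to print the commands without executing them.
reset_cluster() {
  local run="${DRY_RUN:+echo}"
  $run kind delete cluster --name site11-dev
  $run kind create cluster --config k8s/kind-dev-cluster.yaml
  $run kubectl apply -f k8s/console/
  $run kubectl apply -f k8s/pipeline/
}

DRY_RUN=1 reset_cluster
```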
## Performance Tuning
### Per-node resource limits (optional)
```yaml
nodes:
  - role: worker
    extraMounts:
      - hostPath: /path/to/data
        containerPath: /data
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            max-pods: "50"
            cpu-manager-policy: "static"
```
### Image pull policy
```yaml
# Set on the Deployment
spec:
  template:
    spec:
      containers:
        - name: console-backend
          image: yakenator/site11-console-backend:latest
          imagePullPolicy: Always  # always pull the latest image
```
## Backup and Restore
### Back up cluster resources
```bash
# Dump the current resources
kubectl get all -A -o yaml > backup-$(date +%Y%m%d).yaml
```
### Restore
```bash
# Restore from a backup
kubectl apply -f backup-20251028.yaml
```
## References
- KIND documentation: https://kind.sigs.k8s.io/
- Kubernetes documentation: https://kubernetes.io/docs/
- KIND on GitHub: https://github.com/kubernetes-sigs/kind

docs/PROGRESS.md

@@ -6,7 +6,7 @@
## Current Status
- **Date Started**: 2025-09-09
- **Last Updated**: 2025-10-28
- **Current Phase**: Phase 2 In Progress 🔄 (Service Management CRUD - Backend Complete)
- **Current Phase**: KIND Cluster Setup Complete
- **Next Action**: Phase 2 - Frontend UI Implementation
## Completed Checkpoints
@@ -130,6 +130,40 @@ All authentication endpoints tested and working:
- `/services/console/frontend/src/types/service.ts` - TypeScript types
- `/services/console/frontend/src/api/service.ts` - API client
### KIND Cluster Setup (Local Development Environment) ✅
**Completed Date**: 2025-10-28
#### Infrastructure Setup
✅ KIND (Kubernetes IN Docker) 5-node cluster
✅ Cluster configuration with role-based workers
✅ NodePort mappings for console access (30080, 30081)
✅ Namespace separation (site11-console, site11-pipeline)
✅ MongoDB and Redis deployed in cluster
✅ Console backend and frontend deployed with NodePort services
✅ All 4 pods running successfully
#### Management Tools
✅ `kind-setup.sh` script for cluster management
✅ `docker-compose.kubernetes.yml` for monitoring
✅ Comprehensive documentation (KUBERNETES.md, KIND_SETUP.md)
#### Kubernetes Resources Created
- **Cluster Config**: `/k8s/kind-dev-cluster.yaml`
- **Console MongoDB/Redis**: `/k8s/kind/console-mongodb-redis.yaml`
- **Console Backend**: `/k8s/kind/console-backend.yaml`
- **Console Frontend**: `/k8s/kind/console-frontend.yaml`
- **Management Script**: `/scripts/kind-setup.sh`
- **Docker Compose**: `/docker-compose.kubernetes.yml`
- **Documentation**: `/KUBERNETES.md`
#### Verification Results
✅ Cluster created with 5 nodes (all Ready)
✅ Console namespace with 4 running pods
✅ NodePort services accessible (3000, 8000)
✅ Frontend login/register tested successfully
✅ Backend API health check passed
✅ Authentication system working in KIND cluster
### Earlier Checkpoints
✅ Project structure planning (CLAUDE.md)
✅ Implementation plan created (docs/PLAN.md)
@@ -151,27 +185,35 @@ All authentication endpoints tested and working:
## Deployment Status
### Kubernetes Cluster: site11-pipeline
### KIND Cluster: site11-dev ✅
**Cluster Created**: 2025-10-28
**Nodes**: 5 (1 control-plane + 4 workers)
```bash
# Backend
kubectl -n site11-pipeline get pods -l app=console-backend
# Status: 2/2 Running
# Console Namespace
kubectl -n site11-console get pods
# Status: 4/4 Running (mongodb, redis, console-backend, console-frontend)
# Frontend
kubectl -n site11-pipeline get pods -l app=console-frontend
# Status: 2/2 Running
# Cluster Status
./scripts/kind-setup.sh status
# Port Forwarding (for testing)
kubectl -n site11-pipeline port-forward svc/console-backend 8000:8000
kubectl -n site11-pipeline port-forward svc/console-frontend 3000:80
# Management
./scripts/kind-setup.sh {create|delete|deploy-console|status|logs|access|setup}
```
### Access URLs
- Frontend: http://localhost:3000 (via port-forward)
- Backend API: http://localhost:8000 (via port-forward)
### Access URLs (NodePort)
- Frontend: http://localhost:3000 (NodePort 30080)
- Backend API: http://localhost:8000 (NodePort 30081)
- Backend Health: http://localhost:8000/health
- API Docs: http://localhost:8000/docs
### Monitoring
```bash
# Start monitoring
docker-compose -f docker-compose.kubernetes.yml up -d
docker-compose -f docker-compose.kubernetes.yml logs -f kind-monitor
```
## Next Immediate Steps (Phase 2)
### Service Management CRUD

k8s/kind-dev-cluster.yaml Normal file

@@ -0,0 +1,71 @@
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: site11-dev

# Node layout (1 control plane + 4 workers = 5 nodes)
nodes:
  # Control plane (master node)
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "node-type=control-plane"
    extraPortMappings:
      # Console Frontend
      - containerPort: 30080
        hostPort: 3000
        protocol: TCP
      # Console Backend
      - containerPort: 30081
        hostPort: 8000
        protocol: TCP

  # Worker node 1 (Console services)
  - role: worker
    labels:
      workload: console
      node-type: worker
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "workload=console"

  # Worker node 2 (Pipeline services - collection)
  - role: worker
    labels:
      workload: pipeline-collector
      node-type: worker
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "workload=pipeline-collector"

  # Worker node 3 (Pipeline services - processing)
  - role: worker
    labels:
      workload: pipeline-processor
      node-type: worker
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "workload=pipeline-processor"

  # Worker node 4 (Pipeline services - generation)
  - role: worker
    labels:
      workload: pipeline-generator
      node-type: worker
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "workload=pipeline-generator"

k8s/kind/console-backend.yaml Normal file

@@ -0,0 +1,79 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console-backend
  namespace: site11-console
  labels:
    app: console-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console-backend
  template:
    metadata:
      labels:
        app: console-backend
    spec:
      nodeSelector:
        workload: console
      containers:
        - name: console-backend
          image: yakenator/site11-console-backend:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8000
              protocol: TCP
          env:
            - name: ENV
              value: "development"
            - name: DEBUG
              value: "true"
            - name: MONGODB_URL
              value: "mongodb://mongodb:27017"
            - name: DB_NAME
              value: "console_db"
            - name: REDIS_URL
              value: "redis://redis:6379"
            - name: JWT_SECRET_KEY
              value: "dev-secret-key-please-change-in-production"
            - name: JWT_ALGORITHM
              value: "HS256"
            - name: ACCESS_TOKEN_EXPIRE_MINUTES
              value: "30"
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 10
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 30
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: console-backend
  namespace: site11-console
  labels:
    app: console-backend
spec:
  type: NodePort
  selector:
    app: console-backend
  ports:
    - port: 8000
      targetPort: 8000
      nodePort: 30081
      protocol: TCP

k8s/kind/console-frontend.yaml Normal file

@@ -0,0 +1,65 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console-frontend
  namespace: site11-console
  labels:
    app: console-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console-frontend
  template:
    metadata:
      labels:
        app: console-frontend
    spec:
      nodeSelector:
        workload: console
      containers:
        - name: console-frontend
          image: yakenator/site11-console-frontend:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              protocol: TCP
          env:
            - name: VITE_API_URL
              value: "http://localhost:8000"
          resources:
            requests:
              memory: "128Mi"
              cpu: "50m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: console-frontend
  namespace: site11-console
  labels:
    app: console-frontend
spec:
  type: NodePort
  selector:
    app: console-frontend
  ports:
    - port: 3000
      targetPort: 80
      nodePort: 30080
      protocol: TCP

k8s/kind/console-mongodb-redis.yaml Normal file

@@ -0,0 +1,108 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  namespace: site11-console
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:7.0
          ports:
            - containerPort: 27017
              protocol: TCP
          env:
            - name: MONGO_INITDB_DATABASE
              value: "console_db"
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
          volumeMounts:
            - name: mongodb-data
              mountPath: /data/db
      volumes:
        - name: mongodb-data
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  namespace: site11-console
  labels:
    app: mongodb
spec:
  type: ClusterIP
  selector:
    app: mongodb
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: site11-console
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          ports:
            - containerPort: 6379
              protocol: TCP
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          volumeMounts:
            - name: redis-data
              mountPath: /data
      volumes:
        - name: redis-data
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: site11-console
  labels:
    app: redis
spec:
  type: ClusterIP
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
      protocol: TCP

scripts/kind-setup.sh Executable file

@@ -0,0 +1,225 @@
#!/bin/bash
# Site11 KIND Cluster Setup Script
# This script manages the KIND (Kubernetes IN Docker) development cluster

set -e

CLUSTER_NAME="site11-dev"
CONFIG_FILE="k8s/kind-dev-cluster.yaml"
K8S_DIR="k8s/kind"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

echo -e "${GREEN}=====================================${NC}"
echo -e "${GREEN}Site11 KIND Cluster Manager${NC}"
echo -e "${GREEN}=====================================${NC}"
echo ""

# Check if KIND is installed
if ! command -v kind &> /dev/null; then
    echo -e "${RED}ERROR: kind is not installed${NC}"
    echo "Please install KIND: https://kind.sigs.k8s.io/docs/user/quick-start/#installation"
    exit 1
fi

# Check if kubectl is installed
if ! command -v kubectl &> /dev/null; then
    echo -e "${RED}ERROR: kubectl is not installed${NC}"
    echo "Please install kubectl: https://kubernetes.io/docs/tasks/tools/"
    exit 1
fi

# Function to create cluster
create_cluster() {
    echo -e "${YELLOW}Creating KIND cluster: $CLUSTER_NAME${NC}"
    if kind get clusters | grep -q "^$CLUSTER_NAME$"; then
        echo -e "${YELLOW}Cluster $CLUSTER_NAME already exists${NC}"
        read -p "Do you want to delete and recreate? (y/N): " -n 1 -r
        echo
        if [[ $REPLY =~ ^[Yy]$ ]]; then
            delete_cluster
        else
            echo "Skipping cluster creation"
            return
        fi
    fi
    kind create cluster --config "$CONFIG_FILE"
    echo -e "${GREEN}✅ Cluster created successfully${NC}"

    # Wait for cluster to be ready
    echo "Waiting for cluster to be ready..."
    kubectl wait --for=condition=Ready nodes --all --timeout=120s
    echo -e "${GREEN}✅ All nodes are ready${NC}"
}

# Function to delete cluster
delete_cluster() {
    echo -e "${YELLOW}Deleting KIND cluster: $CLUSTER_NAME${NC}"
    kind delete cluster --name "$CLUSTER_NAME"
    echo -e "${GREEN}✅ Cluster deleted${NC}"
}

# Function to deploy namespaces
deploy_namespaces() {
    echo -e "${YELLOW}Creating namespaces${NC}"
    kubectl create namespace site11-console --dry-run=client -o yaml | kubectl apply -f -
    kubectl create namespace site11-pipeline --dry-run=client -o yaml | kubectl apply -f -
    echo -e "${GREEN}✅ Namespaces created${NC}"
}

# Function to load images
load_images() {
    echo -e "${YELLOW}Loading Docker images into KIND cluster${NC}"
    images=(
        "yakenator/site11-console-backend:latest"
        "yakenator/site11-console-frontend:latest"
    )
    for image in "${images[@]}"; do
        echo "Loading $image..."
        if docker image inspect "$image" &> /dev/null; then
            kind load docker-image "$image" --name "$CLUSTER_NAME"
        else
            echo -e "${YELLOW}Warning: Image $image not found locally, skipping${NC}"
        fi
    done
    echo -e "${GREEN}✅ Images loaded${NC}"
}

# Function to deploy console services
deploy_console() {
    echo -e "${YELLOW}Deploying Console services${NC}"
    # Deploy in order: databases first, then applications
    kubectl apply -f "$K8S_DIR/console-mongodb-redis.yaml"
    echo "Waiting for databases to be ready..."
    kubectl -n site11-console rollout status deployment/mongodb --timeout=120s
    kubectl -n site11-console rollout status deployment/redis --timeout=120s
    kubectl apply -f "$K8S_DIR/console-backend.yaml"
    kubectl apply -f "$K8S_DIR/console-frontend.yaml"
    echo -e "${GREEN}✅ Console services deployed${NC}"
}

# Function to check cluster status
status() {
    echo -e "${YELLOW}Cluster Status${NC}"
    echo ""
    if ! kind get clusters | grep -q "^$CLUSTER_NAME$"; then
        echo -e "${RED}Cluster $CLUSTER_NAME does not exist${NC}"
        return 1
    fi
    echo "=== Nodes ==="
    kubectl get nodes
    echo ""
    echo "=== Console Namespace Pods ==="
    kubectl get pods -n site11-console -o wide
    echo ""
    echo "=== Console Services ==="
    kubectl get svc -n site11-console
    echo ""
    echo "=== Pipeline Namespace Pods ==="
    kubectl get pods -n site11-pipeline -o wide 2>/dev/null || echo "No pods found"
    echo ""
}

# Function to show logs
logs() {
    namespace=${1:-site11-console}
    pod_name=${2:-}
    if [ -z "$pod_name" ]; then
        echo "Available pods in namespace $namespace:"
        kubectl get pods -n "$namespace" --no-headers | awk '{print $1}'
        echo ""
        echo "Usage: $0 logs [namespace] [pod-name]"
        return
    fi
    kubectl logs -n "$namespace" "$pod_name" -f
}

# Function to access services
access() {
    echo -e "${GREEN}Console Services Access Information${NC}"
    echo ""
    echo "Frontend: http://localhost:3000 (NodePort 30080)"
    echo "Backend:  http://localhost:8000 (NodePort 30081)"
    echo ""
    echo "These services are accessible because they use NodePort type"
    echo "and are mapped in the KIND cluster configuration."
}

# Function to setup everything
setup() {
    echo -e "${GREEN}Setting up complete KIND development environment${NC}"
    create_cluster
    deploy_namespaces
    load_images
    deploy_console
    status
    access
    echo -e "${GREEN}✅ Setup complete!${NC}"
}

# Main script logic
case "${1:-}" in
    create)
        create_cluster
        ;;
    delete)
        delete_cluster
        ;;
    deploy-namespaces)
        deploy_namespaces
        ;;
    load-images)
        load_images
        ;;
    deploy-console)
        deploy_console
        ;;
    status)
        status
        ;;
    logs)
        logs "$2" "$3"
        ;;
    access)
        access
        ;;
    setup)
        setup
        ;;
    *)
        echo "Usage: $0 {create|delete|deploy-namespaces|load-images|deploy-console|status|logs|access|setup}"
        echo ""
        echo "Commands:"
        echo "  create            - Create KIND cluster"
        echo "  delete            - Delete KIND cluster"
        echo "  deploy-namespaces - Create namespaces"
        echo "  load-images       - Load Docker images into cluster"
        echo "  deploy-console    - Deploy console services"
        echo "  status            - Show cluster status"
        echo "  logs [ns] [pod]   - Show pod logs"
        echo "  access            - Show service access information"
        echo "  setup             - Complete setup (create + deploy everything)"
        echo ""
        exit 1
        ;;
esac