Major architectural transformation from synchronous to asynchronous processing:
## Pipeline Services (8 microservices)
- pipeline-scheduler: APScheduler for 30-minute periodic job triggers
- pipeline-rss-collector: RSS feed collection with deduplication (7-day TTL)
- pipeline-google-search: Content enrichment via Google Search API
- pipeline-ai-summarizer: AI summarization using Claude API (claude-sonnet-4-20250514)
- pipeline-translator: Translation using DeepL Pro API
- pipeline-image-generator: Image generation with Replicate API (Stable Diffusion)
- pipeline-article-assembly: Final article assembly and MongoDB storage
- pipeline-monitor: Real-time monitoring dashboard (port 8100)
## Key Features
- Redis-based job queue with deduplication
- Asynchronous processing with Python asyncio
- Shared models and queue manager for inter-service communication
- Docker containerization for all services
- Container names standardized with site11_ prefix
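The Redis-backed queue with deduplication can be sketched as below with a plain redis-py client. The key prefix, queue name, and SHA-256 keying are illustrative assumptions, not the pipeline's actual identifiers; only the 7-day TTL comes from the configuration above.

```python
# Sketch: Redis job queue with URL-based deduplication (7-day TTL).
import hashlib
import json

DEDUP_TTL = 7 * 24 * 3600  # 7 days, matching the queue TTL above


def dedup_key(url: str) -> str:
    """Derive a stable dedup key from an item URL (prefix is hypothetical)."""
    return "pipeline:seen:" + hashlib.sha256(url.encode("utf-8")).hexdigest()


def enqueue_job(redis, queue: str, item: dict) -> bool:
    """Push a job unless the same URL was queued within the TTL window.

    redis.set(..., nx=True, ex=...) is atomic, so two collectors cannot
    enqueue the same URL concurrently.
    """
    if not redis.set(dedup_key(item["url"]), 1, nx=True, ex=DEDUP_TTL):
        return False  # duplicate seen within the last 7 days
    redis.rpush(queue, json.dumps(item))
    return True
```

Because the dedup check and the claim happen in a single `SET NX EX`, no separate lock is needed between collector instances.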
## Removed Services
- Moved to backup: google-search, rss-feed, news-aggregator, ai-writer
## Configuration
- DeepL Pro API key: (redacted; supplied via environment variable)

- Claude Model: claude-sonnet-4-20250514
- Redis Queue TTL: 7 days for deduplication
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Implemented RSS/Atom feed subscription and management service
- Includes an automatic update scheduler (default 15-minute interval)
- Read/starred state management for feed entries
- Category-based feed classification
- OPML export support
- MongoDB storage with Redis caching
- Docker container configuration (port 8017)
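The update scheduler can be sketched as a plain asyncio loop: refresh every feed concurrently, then sleep for the interval. The `refresh_feed` stub stands in for the real feedparser fetch and MongoDB dedupe; the `cycles` parameter is added here only to make the loop bounded for testing.

```python
# Sketch: periodic feed refresh loop (default 15 minutes, per the config above).
import asyncio

DEFAULT_INTERVAL = 15 * 60  # seconds


async def refresh_feed(url: str) -> list:
    # In the real service this would fetch with feedparser and skip
    # entries already stored in MongoDB.
    return []


async def scheduler(feed_urls, fetch=refresh_feed,
                    interval: int = DEFAULT_INTERVAL, cycles=None) -> int:
    """Refresh all feeds, sleep `interval`, repeat; `cycles` bounds the loop."""
    n = 0
    while cycles is None or n < cycles:
        await asyncio.gather(*(fetch(u) for u in feed_urls))
        n += 1
        if cycles is None or n < cycles:
            await asyncio.sleep(interval)
    return n
```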
- Implement multi-method search (Custom Search API, SerpAPI, web scraping)
- Support up to 20 results with pagination
- Add date filtering and sorting capabilities
- Include full content fetching option
- Add country/language specific search support
- Implement Redis caching for performance
- Create comprehensive documentation
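The Redis caching layer can be sketched as a read-through cache keyed on the full query shape (query text, page, country, language), so each pagination/locale variant caches independently. The key layout and 1-hour TTL are assumptions, not the service's actual values.

```python
# Sketch: read-through Redis cache for search results.
import hashlib
import json


def cache_key(query: str, page: int = 1, country: str = "", lang: str = "") -> str:
    """Stable key over all parameters that change the result set."""
    payload = json.dumps([query, page, country, lang], sort_keys=True)
    return "search:" + hashlib.md5(payload.encode("utf-8")).hexdigest()


def cached_search(redis, search_fn, query: str, ttl: int = 3600, **opts) -> dict:
    """Return a cached result if present; otherwise search and cache it."""
    key = cache_key(query, **opts)
    hit = redis.get(key)
    if hit:
        return json.loads(hit)
    result = search_fn(query, **opts)
    redis.setex(key, ttl, json.dumps(result))
    return result
```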
- Replace file system storage with MinIO object storage
- Add MinIO cache implementation with 3-level directory structure
- Support dynamic switching between MinIO and filesystem via config
- Fix metadata encoding issue for non-ASCII URLs
- Successfully tested with various image sources including Korean URLs
All image service features working:
- Image proxy and download
- 5 size variants (thumb, card, list, detail, hero)
- WebP format conversion
- Cache hit/miss detection
- Background size generation
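The 3-level directory structure and the non-ASCII metadata fix can both be sketched as small pure helpers. The level widths (2 hex characters each) and the metadata header name are assumptions; the percent-encoding reflects the general constraint that S3/MinIO user metadata travels in ASCII-only HTTP headers, which is the class of problem the Korean-URL fix addresses.

```python
# Sketch: MinIO object-key fan-out and ASCII-safe metadata for image cache.
import hashlib
from urllib.parse import quote


def cache_object_key(url: str, variant: str = "original") -> str:
    """Map an image URL to a key like 'ab/cd/ef/<sha256>_<variant>.webp'.

    Fanning out over hash prefixes keeps any single prefix from
    accumulating millions of objects.
    """
    h = hashlib.sha256(url.encode("utf-8")).hexdigest()
    return f"{h[0:2]}/{h[2:4]}/{h[4:6]}/{h}_{variant}.webp"


def safe_metadata(source_url: str) -> dict:
    """Percent-encode the source URL so metadata headers stay ASCII."""
    return {"x-amz-meta-source-url": quote(source_url, safe=":/")}
```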
- Add stopwords.txt and synonyms.txt for Solr
- Remove unsupported handlers from solrconfig.xml for Solr 9.x
- Add comprehensive test suite for all backend services
- Verify all 15 containers are running properly
- All services pass health checks successfully
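The "all containers healthy" verification can be scripted roughly as below: probe each service's `/health` endpoint and report per-service status. The port mapping is a partial example, not the full 15-container list.

```python
# Sketch: HTTP health-check sweep over the backend services.
import urllib.request

SERVICES = {  # example subset; the real suite covers all 15 containers
    "users": "http://localhost:8001/health",
    "oauth": "http://localhost:8003/health",
    "console": "http://localhost:8011/health",
}


def check(url: str, timeout: float = 3.0) -> bool:
    """True when the endpoint answers 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def check_all(services: dict) -> dict:
    return {name: check(url) for name, url in services.items()}
```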
- Implemented search service with Apache Solr instead of Elasticsearch
- Added full-text search, faceted search, and autocomplete capabilities
- Created data indexer for synchronizing data from MongoDB/Kafka to Solr
- Configured external volume mounts for all data services:
- MongoDB, Redis, Kafka, Zookeeper, MinIO, Solr
- All data now persists in ./data/ directory
- Added comprehensive search API endpoints
- Created documentation for data persistence and backup strategies
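One way the search endpoints could translate a request into Solr query parameters (standard `/select` parameters, Solr 9.x) is sketched below; the field names (`title`, `content`, `category`) are assumptions about the index schema, not the actual configuration.

```python
# Sketch: building Solr query parameters for full-text + faceted search.
def build_solr_params(q: str, facet_field: str = "category",
                      page: int = 1, rows: int = 10) -> dict:
    """Translate a search request into Solr /select parameters."""
    return {
        "q": q,
        "defType": "edismax",
        "qf": "title^2 content",   # boost title matches over body text
        "facet": "true",
        "facet.field": facet_field,
        "start": (page - 1) * rows,  # zero-based offset for pagination
        "rows": rows,
    }
```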
Completed File Management Service with S3-compatible object storage:
Infrastructure:
- Added MinIO for S3-compatible object storage (port 9000/9001)
- Integrated with MongoDB for metadata management
- Configured Docker volumes for persistent storage
File Service Features:
- Multi-file upload support with deduplication
- Automatic thumbnail generation for images (multiple sizes)
- File metadata management with search and filtering
- Presigned URLs for secure direct uploads/downloads
- Public/private file access control
- Large file upload support with chunking
- File type detection and categorization
API Endpoints:
- File upload (single and multiple)
- File retrieval with metadata
- Thumbnail generation and caching
- Storage statistics and analytics
- Bucket management
- Batch operations support
Technical Improvements:
- Fixed Pydantic v2.5 compatibility (regex -> pattern)
- Optimized thumbnail caching strategy
- Implemented file hash-based deduplication
Testing:
- All services health checks passing
- MinIO and file service fully operational
- Ready for production use
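The hash-based deduplication above can be sketched as: hash the upload, and when the hash already exists in the metadata store, reuse the stored object instead of uploading again. The `store` dict stands in for the MongoDB metadata collection and `save_fn` for the MinIO upload; in the service the hashing would run over the stream in chunks rather than on a single bytes object.

```python
# Sketch: content-hash deduplication for file uploads.
import hashlib


def dedup_upload(data: bytes, store: dict, save_fn):
    """Return (object_id, created): reuse the stored object on a hash match."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in store:
        return store[digest], False   # duplicate content: no new upload
    object_id = save_fn(data)         # e.g. upload to MinIO
    store[digest] = object_id         # record hash -> object mapping
    return object_id, True
```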
Step 10: Data Analytics and Statistics Service
- Created comprehensive statistics service with real-time metrics collection
- Implemented time-series data storage interface (InfluxDB compatible)
- Added data aggregation and analytics endpoints
- Integrated Redis caching for performance optimization
- Made Kafka connection optional for resilience
Step 11: Real-time Notification System
- Built multi-channel notification service (Email, SMS, Push, In-App)
- Implemented priority-based queue management with Redis
- Created template engine for dynamic notifications
- Added user preference management for personalized notifications
- Integrated WebSocket server for real-time updates
- Fixed pymongo/motor compatibility issues (motor 3.5.1)
Testing:
- Created comprehensive test suites for both services
- Added integration test script to verify cross-service communication
- All services passing health checks and functional tests
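The priority-based queue from Step 11 can be sketched with a Redis sorted set: lower score pops first, and a timestamp tiebreak keeps FIFO order within a priority level. The priority names, score packing, and key name are illustrative assumptions.

```python
# Sketch: priority notification queue on a Redis sorted set.
import json
import time

PRIORITY = {"urgent": 0, "high": 1, "normal": 2, "low": 3}


def score(priority: str, now=None) -> float:
    """Pack priority and arrival time into one sortable score."""
    ts = time.time() if now is None else now
    return PRIORITY[priority] * 1e12 + ts  # priority dominates, time breaks ties


def enqueue(redis, notification: dict, priority: str = "normal"):
    redis.zadd("notify:queue", {json.dumps(notification): score(priority)})


def pop_next(redis):
    """Atomically pop the highest-priority (lowest-score) notification."""
    popped = redis.zpopmin("notify:queue", 1)
    return json.loads(popped[0][0]) if popped else None
```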
Key features:
- Built the Statistics Service microservice
- Real-time metrics collection system (Kafka integration)
- Time-series database interface implementation
- Data aggregation and analytics engine
- User/system/event analytics APIs
- WebSocket-based real-time dashboard
- Alert rules and threshold configuration
- CSV data export
Implemented components:
- MetricsCollector: collects metrics from Kafka events
- DataAggregator: hourly/daily data aggregation
- TimeSeriesDB: time-series data storage interface
- WebSocketManager: real-time data streaming
- Analytics APIs: assorted analytics endpoints
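The hourly aggregation that DataAggregator performs can be sketched as a pure roll-up: bucket raw metric points by hour, then reduce each bucket to count/sum/avg. The real component reads from the time-series interface; the statistics kept per bucket are an assumption.

```python
# Sketch: hourly roll-up of raw (timestamp, value) metric points.
from collections import defaultdict


def hourly_rollup(points):
    """points: iterable of (epoch_seconds, value) -> {hour_start: stats}."""
    buckets = defaultdict(list)
    for ts, value in points:
        buckets[int(ts) // 3600 * 3600].append(value)  # floor to hour start
    return {
        hour: {"count": len(vs), "sum": sum(vs), "avg": sum(vs) / len(vs)}
        for hour, vs in buckets.items()
    }
```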
- Implemented OAuth 2.0 service
* Supports Authorization Code, Client Credentials, and Refresh Token flows
* Application registration and management
* Token introspection and revocation
* SSO configuration support (Google, GitHub, SAML)
* Practical scope system (user, app, org, api management)
- Extended user profile features
* Added profile picture and thumbnail fields
* Additional profile fields such as bio, location, and website
* Email verification and account activation state management
* Added UserPublicResponse model
- OAuth scope management
* Added picture scope (controls access to profile photos)
* Organized scopes by category (basic auth, user data, app management, organization, API)
* Per-scope approval requirement configuration
- Infrastructure improvements
* Added port mapping for Users service (8001)
* OAuth service Docker configuration (port 8003)
* Kafka event integration (USER_CREATED, USER_UPDATED, USER_DELETED)
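The token introspection behavior can be sketched against the standard response shape from RFC 7662: an active token reports its scopes, client, and expiry; anything unknown or expired reports only `{"active": false}` so callers learn nothing about revoked tokens. The in-memory `token_store` dict stands in for the real persistence layer.

```python
# Sketch: RFC 7662-style token introspection over an in-memory token store.
import time


def introspect(token: str, token_store: dict) -> dict:
    record = token_store.get(token)
    if record is None or record["expires_at"] <= time.time():
        return {"active": False}  # reveal nothing about unknown/expired tokens
    return {
        "active": True,
        "scope": " ".join(record["scopes"]),
        "client_id": record["client_id"],
        "exp": int(record["expires_at"]),
    }
```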
- Added Kafka and Zookeeper containers
- Created shared Kafka library (Producer/Consumer)
- Defined event types and implemented event models
- Added event publishing to the Users service (USER_CREATED, USER_UPDATED, USER_DELETED)
- Created PROGRESS.md and PLAN.md documentation
- Completed aiokafka integration
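Publishing a user event through the shared library can be sketched as below, assuming aiokafka; the envelope fields and the `user-events` topic name are illustrative, not necessarily what the shared library uses.

```python
# Sketch: serializing and publishing a user event via aiokafka.
import json
import time


def make_event(event_type: str, payload: dict) -> bytes:
    """Serialize an event envelope for the (hypothetical) user-events topic."""
    return json.dumps({
        "type": event_type,        # e.g. USER_CREATED / USER_UPDATED / USER_DELETED
        "timestamp": time.time(),
        "payload": payload,
    }).encode("utf-8")


async def publish_user_event(producer, event_type: str, payload: dict):
    # producer: a started aiokafka.AIOKafkaProducer
    await producer.send_and_wait("user-events", make_event(event_type, payload))
```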
- Integrated image-service from site00 as second microservice
- Maintained proxy and caching functionality
- Added Images service to docker-compose
- Configured Console API Gateway routing to Images
- Updated environment variables in .env
- Successfully tested image proxy endpoints
Services now running:
- Console (API Gateway)
- Users Service
- Images Service (proxy & cache)
- MongoDB & Redis
Next: Kafka event system implementation
- Added MongoDB and Redis containers to docker-compose
- Integrated Users service with MongoDB using Beanie ODM
- Replaced in-memory storage with persistent MongoDB
- Added proper data models with email validation
- Verified data persistence with MongoDB ObjectIDs
Services running:
- MongoDB: Port 27017 (with health checks)
- Redis: Port 6379 (with health checks)
- Users service: Connected to MongoDB
- Console: API Gateway routing working
Test: Users now stored in MongoDB with persistence
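For the email validation mentioned above, a stdlib stand-in is sketched below so the example stays self-contained; in the service itself the check comes from pydantic's `EmailStr` on a Beanie `Document` (shape shown in the comment, with assumed field and collection names).

```python
# Sketch: rough email validation in the spirit of pydantic's EmailStr.
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")


def is_valid_email(value: str) -> bool:
    """One non-space local part, one @, a dotted domain: a coarse filter."""
    return bool(EMAIL_RE.match(value))

# In the Users service the model looks roughly like:
#
#   from beanie import Document
#   from pydantic import EmailStr
#
#   class User(Document):
#       email: EmailStr
#       name: str
#
#       class Settings:
#           name = "users"   # MongoDB collection (name assumed)
```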
- Created Users service with full CRUD operations
- Updated Console to act as API Gateway for Users service
- Implemented service-to-service communication
- Added service health monitoring in Console
- Docker Compose now manages both services
Services running:
- Console (API Gateway): http://localhost:8011
- Users service: Internal network only
Test endpoints:
- Status: curl http://localhost:8011/api/status
- Users: curl http://localhost:8011/api/users/users