Microservice Architecture
This guide walks you through setting up GoFlow as a production-ready microservice.
Architecture Overview
GoFlow is designed to run as three separate services:
- API Server - Handles HTTP/WebSocket requests (scale horizontally)
- Worker - Processes queued jobs (scale to match load)
- Scheduler - Runs cron jobs (single instance to prevent duplicates)
Step 1: Project Structure
Create your project with this structure:
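A layout along these lines supports the three-binary split (the exact directory names follow Go community conventions and are an assumption, not something GoFlow prescribes):

```
goflow-app/
├── cmd/
│   ├── api/main.go        # HTTP/WebSocket entrypoint
│   ├── worker/main.go     # queue consumer
│   └── scheduler/main.go  # cron runner
├── internal/
│   └── handlers/
│       └── handlers.go    # job handler functions
├── config.yaml
├── Dockerfile
├── docker-compose.yml
└── docker-stack.yml
```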
This separation allows each service to scale independently and be deployed separately if needed.
Step 2: Configuration
Create config.yaml:
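A minimal sketch of what such a file might look like. Every key name here is an assumption inferred from the environment-variable table at the end of this guide, not GoFlow's documented schema:

```yaml
server:
  port: ${SERVER_PORT}

database:
  address: ${DATABASE_ADDRESS}

worker:
  concurrency: ${WORKER_CONCURRENCY}
  max_retries: 3

secrets:
  openai_api_key: ${OPENAI_API_KEY}
  webhook_secret: ${WEBHOOK_SECRET}
```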
Environment variables (prefixed with ${}) are substituted at runtime, keeping secrets out of your config file.
Step 3: API Server
Create cmd/api/main.go:
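A hypothetical sketch of the entrypoint. The goflow import path and the LoadConfig/NewServer/ListenAndServe names are assumptions about GoFlow's API, shown only to illustrate the shape of a stateless service binary:

```go
// cmd/api/main.go (hypothetical sketch; adapt names to GoFlow's real API)
package main

import (
	"log"

	"github.com/example/goflow" // placeholder import path
)

func main() {
	cfg, err := goflow.LoadConfig("config.yaml")
	if err != nil {
		log.Fatalf("load config: %v", err)
	}

	// The server holds no local state; everything lives in DragonflyDB.
	srv := goflow.NewServer(cfg)
	if err := srv.ListenAndServe(); err != nil {
		log.Fatal(err)
	}
}
```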
The API server is stateless - it reads from and writes to DragonflyDB. This means you can run as many replicas as needed behind a load balancer.
Step 4: Worker Service
Create cmd/worker/main.go:
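Again a hypothetical sketch: the NewWorker/RegisterHandler/Run names and the module path for the handlers package are assumptions, not GoFlow's confirmed API:

```go
// cmd/worker/main.go (hypothetical sketch)
package main

import (
	"log"

	"github.com/example/goflow"          // placeholder import path
	"example.com/app/internal/handlers"  // placeholder module path
)

func main() {
	cfg, err := goflow.LoadConfig("config.yaml")
	if err != nil {
		log.Fatalf("load config: %v", err)
	}

	// Concurrency (jobs processed in parallel) comes from WORKER_CONCURRENCY.
	w := goflow.NewWorker(cfg)
	w.RegisterHandler("send_email", handlers.SendEmail)
	if err := w.Run(); err != nil {
		log.Fatal(err)
	}
}
```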
Workers are the workhorses - scale them based on your queue depth. If jobs are piling up, add more workers.
Step 5: Job Handlers
Create internal/handlers/handlers.go:
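A hypothetical handler sketch. The goflow.Job type, its Payload field, and this signature are assumptions based on the description of handlers in this step:

```go
// internal/handlers/handlers.go (hypothetical sketch)
package handlers

import (
	"context"
	"fmt"

	"github.com/example/goflow" // placeholder import path
)

// SendEmail handles one queued job. Returning a non-nil error signals
// GoFlow to retry the job, up to max_retries.
func SendEmail(ctx context.Context, job *goflow.Job) error {
	to, ok := job.Payload["to"].(string)
	if !ok {
		return fmt.Errorf("send_email: missing recipient")
	}
	// ... deliver the email to `to` ...
	_ = to
	return nil
}
```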
Each handler is a simple function that receives a job and returns an error. If you return an error, the job will be retried (up to max_retries).
Step 6: Scheduler Service
Create cmd/scheduler/main.go:
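A hypothetical sketch of the scheduler binary; the NewScheduler/Cron/Run names and the job name are assumptions used for illustration:

```go
// cmd/scheduler/main.go (hypothetical sketch)
package main

import (
	"log"

	"github.com/example/goflow" // placeholder import path
)

func main() {
	cfg, err := goflow.LoadConfig("config.yaml")
	if err != nil {
		log.Fatalf("load config: %v", err)
	}

	// The scheduler only enqueues jobs on a cron cadence; workers execute them.
	s := goflow.NewScheduler(cfg)
	s.Cron("0 * * * *", "cleanup_expired") // hourly; job name is illustrative
	log.Fatal(s.Run())
}
```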
Important: Only run ONE scheduler instance. Multiple schedulers would trigger duplicate cron jobs. Docker Swarm handles this with replicas: 1.
Step 7: Dockerfile
Create a multi-stage Dockerfile:
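One plausible multi-stage build; the Go version, paths, and binary names are assumptions to adapt to your project:

```dockerfile
# --- Build stage ---
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/api ./cmd/api \
 && CGO_ENABLED=0 go build -o /out/worker ./cmd/worker \
 && CGO_ENABLED=0 go build -o /out/scheduler ./cmd/scheduler

# --- Runtime stage ---
FROM alpine:3.20
COPY --from=build /out/ /usr/local/bin/
COPY config.yaml /etc/goflow/config.yaml
# Default to the API server; compose/stack files override the command
# to run the worker or scheduler from the same image.
ENTRYPOINT ["/usr/local/bin/api"]
```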
The multi-stage build keeps the final image small (~20MB). All three binaries are included - the entrypoint determines which runs.
Step 8: Docker Compose (Development)
Create docker-compose.yml for local development:
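A sketch of what the compose file could contain. The DragonflyDB image tag and binary paths are assumptions; check the DragonflyDB docs for the current image name:

```yaml
services:
  dragonfly:
    image: docker.dragonflydb.io/dragonflydb/dragonfly
    ports:
      - "6379:6379"

  api:
    build: .
    command: ["/usr/local/bin/api"]
    environment:
      DATABASE_ADDRESS: dragonfly:6379
      SERVER_PORT: "8080"
    ports:
      - "8080:8080"
    depends_on:
      - dragonfly

  worker:
    build: .
    command: ["/usr/local/bin/worker"]
    environment:
      DATABASE_ADDRESS: dragonfly:6379
      WORKER_CONCURRENCY: "10"
    depends_on:
      - dragonfly

  scheduler:
    build: .
    command: ["/usr/local/bin/scheduler"]
    environment:
      DATABASE_ADDRESS: dragonfly:6379
    depends_on:
      - dragonfly
```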
Start everything with:
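With the compose file in place, the standard invocation is:

```
docker compose up --build
```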
This gives you a complete local environment with API, workers, scheduler, and database.
Step 9: Docker Swarm (Production)
Create docker-stack.yml for production:
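A sketch of a stack file; the registry, image tag, and replica counts are placeholder assumptions to tune for your environment:

```yaml
services:
  dragonfly:
    image: docker.dragonflydb.io/dragonflydb/dragonfly

  api:
    image: registry.example.com/goflow-app:latest  # placeholder registry
    command: ["/usr/local/bin/api"]
    environment:
      DATABASE_ADDRESS: dragonfly:6379
    ports:
      - "8080:8080"
    deploy:
      replicas: 3

  worker:
    image: registry.example.com/goflow-app:latest
    command: ["/usr/local/bin/worker"]
    environment:
      DATABASE_ADDRESS: dragonfly:6379
      WORKER_CONCURRENCY: "10"
    deploy:
      replicas: 5

  scheduler:
    image: registry.example.com/goflow-app:latest
    command: ["/usr/local/bin/scheduler"]
    environment:
      DATABASE_ADDRESS: dragonfly:6379
    deploy:
      replicas: 1  # single instance to avoid duplicate cron runs
```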
Deploy to Swarm:
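The stack name (`goflow` here) is your choice:

```
docker stack deploy -c docker-stack.yml goflow
```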
Environment Variables
| Variable | Description | Default |
|---|---|---|
| DATABASE_ADDRESS | DragonflyDB/Redis address | localhost:6379 |
| SERVER_PORT | API server port | 8080 |
| WORKER_CONCURRENCY | Jobs per worker | 10 |
| OPENAI_API_KEY | OpenAI API key | - |
| WEBHOOK_SECRET | Webhook signature secret | - |
Monitoring
Add Prometheus metrics:
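A sketch using the standard github.com/prometheus/client_golang library. The metric names match the list below; where and how you increment them inside GoFlow's job lifecycle is an assumption you will need to wire up yourself:

```go
// internal/metrics/metrics.go (sketch; wiring points are assumptions)
package metrics

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	QueueDepth = promauto.NewGauge(prometheus.GaugeOpts{
		Name: "goflow_queue_depth",
		Help: "Jobs waiting in the queue.",
	})
	JobsCompleted = promauto.NewCounter(prometheus.CounterOpts{
		Name: "goflow_jobs_completed_total",
		Help: "Total jobs completed.",
	})
	JobsFailed = promauto.NewCounter(prometheus.CounterOpts{
		Name: "goflow_jobs_failed_total",
		Help: "Total jobs that exhausted their retries.",
	})
	JobDuration = promauto.NewHistogram(prometheus.HistogramOpts{
		Name: "goflow_job_duration_seconds",
		Help: "Job processing time in seconds.",
	})
)

// Serve exposes /metrics for Prometheus to scrape.
func Serve(addr string) error {
	http.Handle("/metrics", promhttp.Handler())
	return http.ListenAndServe(addr, nil)
}
```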
Key metrics to watch:
- goflow_queue_depth - Jobs waiting
- goflow_jobs_completed_total - Throughput
- goflow_jobs_failed_total - Error rate
- goflow_job_duration_seconds - Processing time
Health Checks
Add health endpoints to your API:
Use these in Docker/Kubernetes for health checks and load balancer routing.
Next Steps
- Scaling - Advanced scaling strategies
- Deployment - Production deployment options
- Webhooks - External integrations
