Unicorn Web Server Architecture
Clients: curl -X GET /api/users • fetch('/api/posts') • HTTParty.get('/health') • axios.post('/login')
⬇
Reverse proxy (nginx): upstream unicorn_backend
⬇
Master Process (PID: 1234): fork() workers • listen on socket • graceful restarts
⬇
Worker 1 (PID: 1235) • Worker 2 (PID: 1236) • Worker 3 (PID: 1237) • Worker 4 (PID: 1238), each accepting on the shared :8080 listener
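To make the flow above concrete, here is a minimal preforking sketch in plain Ruby. It is illustrative only, not Unicorn's actual source: the master binds a single TCP socket, forks four workers, and every worker blocks in accept() on the inherited socket, so the kernel hands each incoming connection to exactly one of them.

require "socket"

# Minimal preforking sketch (illustrative, not Unicorn's code)
master_socket = TCPServer.new("0.0.0.0", 8080)   # master binds the port once
puts "master #{Process.pid} listening on :8080"

4.times do
  fork do                                        # each child inherits master_socket
    loop do
      client = master_socket.accept              # kernel wakes one waiting worker per connection
      client.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
      client.close
    end
  end
end

Process.waitall                                  # master stays alive, supervising its children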
Process Management Demo
# Unicorn configuration (e.g. config/unicorn.rb)
worker_processes 4       # Typically one per CPU core
listen "0.0.0.0:8080"    # TCP listener; a Unix domain socket path also works
timeout 30               # Kill a worker stuck in a request for more than 30s
preload_app true         # Load the app in the master so workers share memory copy-on-write

before_fork do |server, worker|
  # Disconnect the database in the master before forking,
  # so workers don't inherit and share a single connection
  ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  # Each worker re-establishes its own database connection
  ActiveRecord::Base.establish_connection
end
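With preload_app enabled, the application is loaded once in the master and inherited by workers through copy-on-write, which is exactly why connection-backed resources such as database handles are closed in before_fork and re-opened in after_fork: a socket inherited across fork() would otherwise be shared by every worker. A config like this would typically be loaded with bundle exec unicorn -c config/unicorn.rb.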
How Fork Magic Works
1. fork() system call: Creates an identical copy of the master process
2. Copy-on-write: Parent and child share memory pages until one of them writes
3. Socket inheritance: All workers listen on the same inherited socket/port
4. Kernel load balancing: accept() hands each connection to one waiting worker
5. Process isolation: A crash (or a kill -9) takes down a single worker, which the master replaces (see the sketch after this list)
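Step 5 works because the master's only job is supervision. Below is a rough sketch of that supervision loop, again illustrative rather than Unicorn's actual implementation; run_worker is a hypothetical stand-in for the worker accept loop shown earlier.

# Supervision sketch (illustrative): replace workers as they die
def run_worker
  sleep   # stand-in for the real request-handling loop
end

worker_pids = Array.new(4) { fork { run_worker } }

loop do
  dead_pid = Process.wait                  # block until any child exits or is killed
  worker_pids.delete(dead_pid)
  puts "worker #{dead_pid} exited; forking a replacement"
  worker_pids << fork { run_worker }       # the rest of the fleet keeps serving traffic
end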
Production Benefits
- Zero downtime deploys: Rolling worker restarts
- Memory leak mitigation: Leaky workers can be recycled without downtime
- CPU affinity: Workers can be pinned to specific cores if desired
- Predictable RAM usage: No thread overhead
- Signal handling: SIGUSR2 re-executes the binary for zero-downtime restarts (sketched after this list)
- Process monitoring: Easy with tools like monit
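A zero-downtime restart is driven entirely by these signals. The sketch below shows the common USR2-then-QUIT sequence; the pid-file path is an assumption, and real deploy scripts poll for the new master's health rather than sleeping.

# Hot-restart sketch driven by signals (pid-file path is an assumption)
pid_file   = "/var/run/unicorn.pid"
old_master = File.read(pid_file).to_i

Process.kill("USR2", old_master)   # old master re-execs: a new master and workers boot alongside the old ones
sleep 5                            # crude wait; real scripts check the new pid file or a health endpoint
Process.kill("QUIT", old_master)   # gracefully stop the old master once new workers are serving

In practice the old master is often sent QUIT automatically from the new master's before_fork hook, which checks for the renamed .oldbin pid file.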
Engineering Trade-offs
- Memory per worker: Full Rails app loaded
- Cold boot time: Workers start sequentially
- Horizontal scaling: More servers vs more threads
- Concurrency ceiling: One in-flight request per worker, so concurrency = worker count (worked example after this list)
- Database connections: Pool per worker process
- Shared state: Redis/Memcached required
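Because each worker handles exactly one request at a time, capacity planning is simple multiplication. The numbers below are hypothetical, purely to show the arithmetic:

# Back-of-envelope capacity math (all numbers hypothetical)
servers            = 4
workers_per_server = 4
pool_per_worker    = 1      # one DB connection suffices for a single-threaded worker
avg_response_time  = 0.2    # seconds

max_in_flight  = servers * workers_per_server                    # 16 requests in flight at most
throughput_rps = (max_in_flight / avg_response_time).round       # roughly 80 requests/second ceiling
db_connections = servers * workers_per_server * pool_per_worker  # 16 connections held at the database

puts "in-flight: #{max_in_flight}, rps: #{throughput_rps}, db conns: #{db_connections}"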
Why GitHub, Shopify, Stripe Choose This
Operational Excellence over Raw Performance:
• Failure isolation prevents cascading crashes
• Memory leaks auto-heal via worker recycling
• No shared memory between workers = no in-process race conditions
• Battle-tested Unix primitives (1970s tech still works)
• Simple mental model for debugging production issues
"The best system is the one that fails predictably and recovers automatically."