Introduced in Sidekiq 7, Capsules let a single process run multiple independent queue-processing pools. Each Capsule has its own Redis connection pool, queue list, and concurrency setting. This enables multi-tenancy or workload isolation without running separate processes, and it keeps the behavior configuration-driven in the spirit of the Twelve-Factor App.

Architecture

A Capsule encapsulates everything needed to process a set of queues:

Sidekiq.configure_server do |config|
  # Default capsule
  config.queues = %w[default low]
  config.concurrency = 10
 
  # Critical workload capsule
  config.capsule("critical") do |cap|
    cap.concurrency = 5
    cap.queues = %w[critical urgent]
    cap.redis = { url: "redis://critical-redis:6379" }
  end
 
  # Tenant-specific capsule
  config.capsule("tenant_123") do |cap|
    cap.concurrency = 3
    cap.queues = %w[tenant_123_jobs]
    cap.redis = { url: "redis://tenant-123-redis:6379" }
  end
end

Each Capsule runs its own Manager with dedicated Processor threads. The Launcher coordinates all Capsules, but each one operates independently.

Launcher
├── Default Capsule (10 threads)
│   ├── Manager → 10 Processors
│   └── Redis Pool (default)
├── Critical Capsule (5 threads)
│   ├── Manager → 5 Processors
│   └── Redis Pool (critical-redis)
└── Tenant Capsule (3 threads)
    ├── Manager → 3 Processors
    └── Redis Pool (tenant-123-redis)

Use Cases

Workload Isolation: Prevent low-priority bulk jobs from starving high-priority user-facing jobs:

# Separate Redis instances for isolation
config.capsule("bulk") do |cap|
  cap.queues = %w[imports exports analytics]
  cap.concurrency = 20  # High throughput for batch work
  cap.redis = { url: ENV['BULK_REDIS_URL'] }
end
 
config.capsule("realtime") do |cap|
  cap.queues = %w[notifications emails]
  cap.concurrency = 5   # Fewer threads, but never starved by bulk work
  cap.redis = { url: ENV['REALTIME_REDIS_URL'] }
end
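
Jobs never reference a capsule directly; they target a queue, and whichever capsule polls that queue runs them. A minimal sketch against the "bulk" capsule above (the ImportJob class name and file_id argument are illustrative):

class ImportJob
  include Sidekiq::Job
  sidekiq_options queue: "imports"  # polled by the "bulk" capsule

  def perform(file_id)
    # Long-running batch work occupies one of the bulk capsule's 20 threads,
    # never a realtime thread.
  end
end

ImportJob.perform_async(42)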

Multi-Tenancy: Dedicate resources per tenant without process overhead:

TENANTS.each do |tenant|
  config.capsule("tenant_#{tenant.id}") do |cap|
    cap.queues = ["tenant_#{tenant.id}_jobs"]
    cap.concurrency = tenant.paid? ? 10 : 2  # Different SLA tiers
    cap.redis = { url: tenant.redis_url }
  end
end

This can scale to hundreds of tenants per process, provided per-tenant concurrency stays low, since threads and Redis connections grow with every capsule. Each tenant gets isolated queues and Redis while sharing the Ruby process memory, which is far cheaper than per-tenant processes.
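
At enqueue time, the tenant's queue (and therefore its capsule) can be picked dynamically with .set; the TenantReportJob class and the tenant and report objects below are illustrative. Note that when a tenant capsule points at its own Redis instance, the push also has to reach that Redis (see the Sidekiq::Client.via sketch under Thread-Local Routing).

class TenantReportJob
  include Sidekiq::Job

  def perform(tenant_id, report_id)
    # Runs on whichever capsule polls "tenant_#{tenant_id}_jobs"
  end
end

# Route to the tenant's dedicated queue (and capsule) at enqueue time.
TenantReportJob.set(queue: "tenant_#{tenant.id}_jobs").perform_async(tenant.id, report.id)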

Geographic Distribution: Route jobs to region-specific Redis instances:

config.capsule("us-east") do |cap|
  cap.queues = %w[us_east_jobs]
  cap.redis = { url: "redis://us-east-redis:6379" }
end
 
config.capsule("eu-west") do |cap|
  cap.queues = %w[eu_west_jobs]
  cap.redis = { url: "redis://eu-west-redis:6379" }
end

Jobs still execute in this one process, but Redis coordination happens against region-local instances, which reduces cross-region latency for job metadata and for clients enqueueing in each region.

Thread-Local Routing

Each Processor thread belongs to exactly one Capsule. Thread-local storage routes Redis connections automatically:

# Inside a job
class MyJob
  include Sidekiq::Job

  def perform
    # Automatically uses the Redis pool of the capsule whose thread is running this job
    Sidekiq.redis { |conn| conn.get("key") }
    # Thread.current[:sidekiq_capsule] determines which pool is used
  end
end

This implicit routing avoids passing capsule context through every method call. However, it creates coupling—jobs can’t easily switch capsules at runtime.
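
When code does need to touch a different capsule's Redis, the usual escape hatch on the enqueue side is Sidekiq::Client.via, which pushes through an explicit connection pool instead of the thread's implicit one. A sketch, assuming CRITICAL_POOL is a pre-built ConnectionPool of Sidekiq-compatible connections pointing at the critical capsule's Redis (UrgentJob and order_id are illustrative):

Sidekiq::Client.via(CRITICAL_POOL) do
  # Everything pushed inside this block goes through CRITICAL_POOL,
  # not the current thread's capsule pool.
  UrgentJob.set(queue: "critical").perform_async(order_id)
end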

Configuration Patterns

Capsules excel at runtime configuration without code changes:

# config/sidekiq.yml
:concurrency: 10
:queues:
  - default
 
:capsules:
  critical:
    :concurrency: 5
    :queues:
      - critical
      - urgent
    :redis:
      :url: <%= ENV['CRITICAL_REDIS_URL'] %>

This YAML can vary per environment (development, staging, production) without touching Ruby code: the Twelve-Factor App principle in action, with configuration living in the environment rather than in the codebase.
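
The same environment-driven approach works from a plain Ruby initializer if you prefer to define capsules in code; the CRITICAL_CONCURRENCY variable name below is illustrative:

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.capsule("critical") do |cap|
    cap.queues = %w[critical urgent]
    # Tuning values come from the environment, so a deploy can adjust
    # them without a code change.
    cap.concurrency = Integer(ENV.fetch("CRITICAL_CONCURRENCY", 5))
  end
end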

Resource Management

Each Capsule maintains its own connection pool sized to its concurrency:

# Capsule with concurrency 5 gets a pool of size 5
cap.redis = { url: "...", size: 5 }  # Explicit size
# Or automatically sized to concurrency if omitted

Sizing each pool to its capsule's concurrency avoids pool contention: every Processor thread can hold a Redis connection without blocking on a checkout. Total connections are roughly the sum of all capsule concurrencies.

For a process with 3 Capsules (10, 5, and 3 threads), plan for about 18 Redis connections spread across the respective instances, and size each Redis server's maxclients accordingly.
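
A back-of-the-envelope budget can be computed straight from the capsule definitions; the numbers below mirror the three-capsule example above and are only illustrative:

# Rough connection budget for one worker process (illustrative numbers).
capsule_concurrency = { "default" => 10, "critical" => 5, "tenant_123" => 3 }

# Each capsule's pool is sized to its concurrency, so the process needs
# roughly this many Redis connections in total, counted against whichever
# Redis each capsule targets. Client-side pools in your web processes
# add to the count as well.
total = capsule_concurrency.values.sum  # => 18
puts "Plan Redis maxclients for at least #{total} connections from this process"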

Limitations

Shared Memory: All Capsules share the same Ruby heap. A memory leak in one Capsule’s job code affects the entire process. This is the fundamental trade-off for process efficiency.

No Cross-Capsule Batches: Batches can’t span multiple Capsules since each uses separate Redis. Design workflows within capsule boundaries.

Process-Level Shutdown: Graceful shutdown affects all Capsules simultaneously. You can’t restart individual Capsules—it’s all or nothing.

Migration Strategy

Start with a single default Capsule (standard Sidekiq behavior). Add specialized Capsules as needs emerge:

  1. Identify bottlenecks: Which queue types compete for resources?
  2. Separate Redis first: Move high-volume or high-priority queues to dedicated Redis
  3. Configure Capsule: Add capsule configuration pointing to the new Redis (see the sketch after this list)
  4. Monitor resource usage: Ensure thread counts and connections are balanced
  5. Iterate: Add more Capsules as workload patterns emerge
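
A minimal first migration might look like the sketch below; "reports" stands in for whichever queue is causing contention, and the numbers are illustrative:

Sidekiq.configure_server do |config|
  # The default capsule keeps everything else; "reports" has been removed
  # from its queue list.
  config.queues = %w[default low]
  config.concurrency = 10

  # The contended queue gets its own capsule (and, if needed, its own
  # Redis as in the earlier examples).
  config.capsule("reports") do |cap|
    cap.queues = %w[reports]
    cap.concurrency = 4
  end
end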

Don’t over-engineer upfront—Capsules shine when you have clear resource contention or isolation requirements.

See Sidekiq Architecture for how Capsules integrate with the overall process lifecycle.