Speculative optimization is a fundamental pattern in dynamic language compilation: assume the common case, optimize for it, verify the assumption at runtime, and gracefully fall back when wrong.

This pattern enables dramatic performance improvements in languages where types and behavior can change at runtime - by treating dynamic code as if it were static, but maintaining safety through verification.

The Core Pattern

Every speculative optimization follows this structure:

1. OBSERVE    → Watch runtime behavior, gather data
2. ASSUME     → Make educated guess about future behavior
3. OPTIMIZE   → Generate fast code based on assumption
4. GUARD      → Insert runtime check to verify assumption
5. FALL BACK  → Handle assumption violation gracefully

This pattern appears throughout YJIT's execution mechanics and in modern JIT compilers generally.
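As a toy illustration, the whole loop can be sketched in plain Ruby. This is illustrative only - the class SpeculativeSite is hypothetical, and a real JIT emits machine code rather than lambdas:

# Toy sketch of the five steps at a single call site (hypothetical class;
# in a real JIT the fast path is generated machine code, not a lambda)
class SpeculativeSite
  def initialize(generic)
    @generic  = generic   # always-correct slow path
    @expected = nil       # 1. OBSERVE: remember the first class seen
    @fast     = nil
  end

  def call(x)
    if @expected.nil?
      @expected = x.class            # 2. ASSUME this class repeats
      @fast = ->(v) { v + v }        # 3. OPTIMIZE: specialized doubling
    end
    if x.instance_of?(@expected)     # 4. GUARD: verify the assumption
      @fast.call(x)
    else
      @expected = nil                # 5. FALL BACK: forget and re-profile
      @generic.call(x)
    end
  end
end

site = SpeculativeSite.new(->(v) { v * 2 })
site.call(5)     # observes Integer, takes the fast path
site.call(3.14)  # guard fails, generic path runs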

Example: Type-Specialized Arithmetic

Consider a simple Ruby method:

def double(x)
  x * 2
end

Without speculation: Must handle any type at runtime

# Interpreter logic (simplified):
def vm_multiply(left, right)
  case [left.class, right.class]
  when [Integer, Integer] then integer_multiply(left, right)
  when [Float, Float]     then float_multiply(left, right)
  when [String, Integer]  then string_repeat(left, right)
  # ... dozens of combinations
  else
    call_method(left, :*, right)
  end
end

Every multiplication checks types and dispatches to the correct handler - slow, but correct in all cases.

With speculation: Assume common case, optimize

; 1. OBSERVE: Saw Integer * Integer during profiling
; 2. ASSUME: x will be Integer in future
 
; 3. OPTIMIZE: Generate fast Integer code
double:
    ; 4. GUARD: Verify assumption
    test    rdi, 0x1          ; Check if x is Fixnum
    jz      deoptimize        ; If not, fall back
 
    ; Fast path: x is a tagged Integer (value v encoded as 2v+1)
    lea     rax, [rdi + rdi - 1]  ; 2*(2v+1) - 1 = 4v+1 = tagged 2*v
    ; (real generated code would also check for overflow)
    ret
 
    ; 5. FALL BACK: Handle assumption violation
deoptimize:
    call    vm_exec_core      ; Interpret bytecode
    ret

The speculative version can be 10-100x faster when the assumption holds, and remains correct when it doesn’t.

Why Speculation Works

Speculative optimization exploits two fundamental properties of programs:

1. Type Stability

Most variables have consistent types across executions:

def process_users(users)
  users.map { |u| u.name.upcase }
end
 
# users is always Array
# u is always User object
# u.name is always String
# Types stable across thousands of calls

Even in dynamic languages, the vast majority of call sites see only a single type in practice - they are effectively monomorphic. Speculation leverages this stability.

2. Path Predictability

Programs follow predictable execution paths:

def handle_request(req)
  if req.admin?
    admin_flow(req)  # Rarely executed
  else
    user_flow(req)   # Almost always executed
  end
end

After observing 1000 executions that take user_flow, the JIT speculates that future executions will too.

Forms of Speculation

Speculation manifests in many ways:

Type Speculation

Assumption: Variable will have observed type

def calculate(x, y)
  x + y
end
 
# Observe: x and y are Integers
# Speculate: Generate Integer-only addition
# Guard: Verify both are still Integers
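Conceptually, the specialized code behaves like the following Ruby sketch (the real artifact is machine code; calculate_specialized and calculate_generic are hypothetical names):

# Behavioral sketch of the specialized code (real output is machine code)
def calculate_specialized(x, y)
  # GUARD: both operands must still be Integers
  return calculate_generic(x, y) unless x.instance_of?(Integer) &&
                                        y.instance_of?(Integer)
  x + y  # fast path: Integer-only addition, no dynamic dispatch
end

def calculate_generic(x, y)
  x + y  # fully dynamic dispatch - what the interpreter always does
end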

Constant Speculation

Assumption: Constant won’t change

class Config
  TIMEOUT = 30
end
 
def wait
  sleep(Config::TIMEOUT)
end
 
# Speculate: Inline TIMEOUT value (30) directly
# Guard: Invalidate if Config::TIMEOUT redefined

The instruction sequence can embed constant values, avoiding lookup overhead.
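One simplified implementation strategy is an epoch counter bumped on every constant redefinition. This is a sketch (CRuby's actual constant caches are more fine-grained, invalidating per constant rather than globally), and it assumes the Config class defined above:

# Sketch: epoch-based constant caching (assumes Config from above;
# CRuby's real scheme invalidates caches per constant, not globally)
CONSTANT_EPOCH = [0]          # bumped whenever any constant is redefined
TIMEOUT_CACHE  = { value: nil, epoch: -1 }

def cached_timeout
  cache = TIMEOUT_CACHE
  if cache[:epoch] == CONSTANT_EPOCH[0]
    cache[:value]                     # fast path: inlined value, no lookup
  else
    cache[:value] = Config::TIMEOUT   # slow path: full constant lookup
    cache[:epoch] = CONSTANT_EPOCH[0]
    cache[:value]
  end
end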

Method Target Speculation

Assumption: Method call will resolve to same target

users.each { |u| u.save }
 
# Observe: u.save resolves to ActiveRecord::Base#save
# Speculate: Inline save method directly
# Guard: Verify u.class hasn't changed

Monomorphic call sites (one target) are ideal. Polymorphic sites (multiple targets) reduce speculation benefits.
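A monomorphic inline cache - the classic realization of this idea - can be sketched in Ruby (illustrative; real caches live in the bytecode or generated machine code):

# Sketch of a monomorphic inline cache for the u.save call site
class SaveCallSite
  def initialize
    @cached_class  = nil
    @cached_method = nil
  end

  def call(receiver)
    unless receiver.class.equal?(@cached_class)             # GUARD
      @cached_class  = receiver.class                       # miss: re-resolve
      @cached_method = @cached_class.instance_method(:save)
    end
    @cached_method.bind_call(receiver)                      # hit: skip full lookup
  end
end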

Branch Prediction Speculation

Assumption: Branch will take observed direction

def process(x)
  if x.valid?
    expensive_operation(x)
  end
end
 
# Observe: valid? returns true 99% of time
# Speculate: Optimize for true case
# Guard: Handle false efficiently when it occurs

Modern CPUs do this in hardware. JIT compilers do it in software.
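In software, this starts with counting outcomes and only speculating on heavily biased branches, as in this small sketch (illustrative; the threshold is an assumption, not a fixed rule):

# Sketch: decide whether to speculate on a branch from observed counts
def branch_bias(taken, not_taken)
  total = taken + not_taken
  return 0.5 if total.zero?
  taken.fdiv(total)
end

bias = branch_bias(990, 10)              # valid? was true 99% of the time
speculate = bias > 0.95 || bias < 0.05   # only speculate when heavily biased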

The Spectrum of Speculation

Speculation exists on a spectrum from conservative to aggressive:

Conservative ←────────────────────→ Aggressive

Low risk          Medium risk         High risk
Low reward        Medium reward       High reward
Few guards        Some guards         Many guards
Stable perf       Variable perf       Unpredictable perf

Conservative Speculation

Only speculate on extremely stable patterns:

# Speculate only after 1000 observations showing same type
# Use strong guards (class check + frozen check + method cache)
# Re-compile immediately on guard failure

Pros: Stable performance, rarely wrong
Cons: Missed optimization opportunities, slow warmup

Aggressive Speculation

Speculate early and often:

# Speculate after 10 observations
# Use weak guards (just type tag check)
# Tolerate occasional guard failures

Pros: Fast warmup, maximum optimization
Cons: Unstable performance, wasted compilation

YJIT’s approach: medium aggression - speculate after a few dozen calls (30 by default in recent CRuby), use reasonable guards, and re-compile on persistent failures.
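The threshold is tunable via a command-line flag in recent CRuby, shown here with its default value:

# Ask YJIT to compile a method after it has been called 30 times:
ruby --yjit --yjit-call-threshold=30 app.rb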

Multi-Level Speculation

Complex optimizations stack speculation:

def total_price(items)
  items.sum { |item| item.price * item.quantity }
end

Speculation stack:

  1. items is Array (type speculation)
  2. items contains Product objects (element type speculation)
  3. Product#price returns Float (return type speculation)
  4. Product#quantity returns Integer (return type speculation)
  5. Float * Integer multiplication (operation speculation)
  6. sum doesn’t have a custom implementation (method speculation)

Each speculation has a guard. If any guard fails, entire optimization unwinds.

Guard Chain:
┌─────────────┐
│Guard: Array?│
└──────┬──────┘
       ↓ pass
┌─────────────────┐
│Guard: Product[]?│
└──────┬──────────┘
       ↓ pass
┌──────────────────┐
│Guard: price→Float│
└──────┬───────────┘
       ↓ pass
┌────────────────────┐
│Guard: quantity→Int │
└──────┬─────────────┘
       ↓ pass
┌──────────────┐
│ FAST PATH    │
│ Optimized    │
│ native code  │
└──────────────┘

Any guard fails → Deoptimize

The more speculation layers, the more fragile the optimization.
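Written out as Ruby, the stacked guards look roughly like this behavioral sketch (not generated code; Product here is a stand-in Struct for the example's element class, and total_price_generic stands in for re-entering the interpreter):

# Behavioral sketch of the guard chain (illustrative only)
Product = Struct.new(:price, :quantity)  # stand-in for the example's class

def total_price_specialized(items)
  return total_price_generic(items) unless items.instance_of?(Array)      # guard 1

  total = 0.0
  items.each do |item|
    return total_price_generic(items) unless item.instance_of?(Product)   # guard 2
    price = item.price
    return total_price_generic(items) unless price.instance_of?(Float)    # guard 3
    qty = item.quantity
    return total_price_generic(items) unless qty.instance_of?(Integer)    # guard 4
    total += price * qty                                                  # fast path
  end
  total
end

def total_price_generic(items)
  items.sum { |item| item.price * item.quantity }  # always-correct fallback
end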

Graceful Degradation

The “fall back” phase is critical. When speculation fails, the system must:

  1. Preserve correctness - Never execute wrong code
  2. Save state - Restore interpreter state properly
  3. Learn from failure - Update assumptions based on new data
  4. Retry when appropriate - Re-compile with better information

# First compilation: Assume Integer
def process(x)
  x * 2
end
 
process(5)    # Guard passes
process(10)   # Guard passes
process(3.14) # Guard FAILS
 
# De-optimization:
# 1. Exit native code safely
# 2. Interpret with Float correctly
# 3. Update profiling data: "also sees Float"
# 4. Re-compile with both Integer and Float paths

This de-optimization process enables adaptive optimization - getting smarter over time.
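The "learn from failure" step can be as simple as widening a recorded type set, as in this sketch (illustrative; YJIT's real profiling data structures differ):

require "set"

# Sketch: a per-site type profile that widens after guard failures
class TypeProfile
  def initialize
    @seen = Set.new
  end

  def record(value)
    @seen << value.class
  end

  def monomorphic?
    @seen.size == 1   # safe to speculate on a single type
  end
end

profile = TypeProfile.new
profile.record(5)      # first compilation assumes Integer only
profile.record(3.14)   # guard failure records Float
profile.monomorphic?   # => false: next compilation handles both paths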

The Paradox of Dynamic Optimization

Speculative optimization reveals a fundamental paradox:

To make dynamic languages fast, treat them as static - but verify they stay static.

Dynamic languages offer flexibility: types change, methods redefine, behavior varies. But this flexibility kills performance: the interpreter must handle all possibilities.

Speculation resolves this: assume static behavior, run static code, but check you’re still in the static case.

It’s “having your cake and eating it too” - dynamic semantics with static performance.

When Speculation Fails

Not all code benefits from speculation:

Highly Polymorphic Code

def stringify(value)
  value.to_s
end
 
# Called with 20 different types
# Guards fail constantly
# Compilation overhead > benefits

Solution: Don’t speculate. Just interpret or use generic code.

Unpredictable Branches

def random_operation(x)
  if rand > 0.5
    heavy_computation(x)
  else
    quick_return(x)
  end
end
 
# Branch direction changes randomly
# Branch prediction fails
# Speculative execution wasted

Solution: Compile both paths equally, no speculation on direction.

Mutating Code

def dynamic_behavior
  # Methods redefined at runtime
  # Constants changed frequently
  # Classes modified on every call
end

Solution: Speculation can’t help here - every redefinition invalidates the compiled code. Accept interpretation.

Measuring Speculation Quality

Good speculation has:

  1. High hit rate: Guards pass >95% of time
  2. Low overhead: Guard cost < optimization benefit
  3. Fast recovery: De-optimization doesn’t destroy performance
  4. Adaptive learning: Re-compilation improves after failures

# Inspect speculation quality with YJIT’s stats mode:
ruby --yjit --yjit-stats app.rb
 
# Key metrics:
# - Guard hit rate: % of guard passes
# - De-optimization count: How often speculation fails
# - Re-compilation frequency: Adaptation rate
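The counters can also be read from inside the process when stats are enabled (the exact hash keys vary across Ruby versions):

# Read YJIT counters programmatically (run with: ruby --yjit --yjit-stats)
if defined?(RubyVM::YJIT) && RubyVM::YJIT.enabled?
  p RubyVM::YJIT.runtime_stats  # hash of counters; key names vary by version
end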

Historical Evolution

Speculative optimization evolved across language implementations:

Smalltalk (1980s): First inline caching - speculate method targets
Self (1990s): Polymorphic inline caching - speculate multiple targets
Java HotSpot (2000s): Tiered compilation - speculate at multiple levels
JavaScript V8 (2010s): Hidden classes - speculate object shapes
Ruby YJIT (2020s): Bytecode-level type speculation

Each generation became more sophisticated: better profiling, smarter speculation, faster guards, more graceful fallbacks.

The Future: Probabilistic Speculation

Emerging trend: probabilistic speculation based on confidence levels:

If confidence > 99%: Aggressive speculation
If confidence > 90%: Standard speculation
If confidence > 70%: Conservative speculation
If confidence < 70%: No speculation

Machine learning models could predict:

  • Which types will appear (not just which appeared)
  • When guard failures will occur
  • Whether speculation will pay off

This is speculation about speculation - meta-optimization.

Practical Implications

Understanding speculation guides code design:

1. Maintain Type Stability

# Bad: Types change
def process(x)
  x.is_a?(String) ? x.upcase : x.to_s.upcase
end
 
# Good: Types consistent
def process(x)
  x.to_s.upcase  # Always String after to_s
end

2. Avoid Runtime Mutation

# Bad: Changes assumptions
class Calculator
  def multiply(x, y)
    x * y
  end
end
 
# Later: Redefine (invalidates speculation)
class Calculator
  def multiply(x, y)
    (x * y) + 1
  end
end
 
# Good: Configuration over mutation
class Calculator
  def initialize(offset = 0)
    @offset = offset
  end
 
  def multiply(x, y)
    (x * y) + @offset  # Method stable, data varies
  end
end

3. Separate Polymorphic Cases

# Bad: One method, many types
def handle(value)
  case value
  when Integer then integer_logic(value)
  when String then string_logic(value)
  when Array then array_logic(value)
  end
end
 
# Good: Separate methods
def handle_integer(n)
  integer_logic(n)
end
 
def handle_string(s)
  string_logic(s)
end
 
def handle_array(a)
  array_logic(a)
end

Speculative optimization is perhaps the most important pattern in modern dynamic language implementation. It’s the bridge between dynamic semantics and static performance - assuming stability in a world of change, optimizing for patterns while handling chaos, and making flexibility fast without sacrificing correctness.

The art of speculation is knowing when to trust, when to verify, and when to admit you were wrong.