Conflict resolution strategies are approaches for merging divergent versions of data that arise in eventual consistency systems when concurrent writes occur. The challenge: how do you decide which version is “correct” when multiple valid updates happen simultaneously?

Why Conflicts Occur

In systems prioritizing availability (like Dynamo), replicas accept writes independently. This creates scenarios where:

  1. Network partition: Client A writes to replica 1, Client B writes to replica 2, network prevents synchronization
  2. Concurrent clients: Two clients update the same key at nearly the same time on different coordinators
  3. Node failure: Each write succeeds on W replicas, but on a different set of W replicas each time, so no single replica sees every update

Vector clocks detect these conflicts by identifying concurrent versions (neither is an ancestor of the other), but they don’t automatically resolve them.

sequenceDiagram
    participant C1 as Client 1
    participant R1 as Replica 1
    participant R2 as Replica 2
    participant C2 as Client 2

    Note over R1,R2: Initial value: cart = [A, B]<br/>Clock: [(Sx, 2)]

    C1->>R1: Add item C
    R1->>R1: cart = [A,B,C]<br/>Clock: [(Sx, 2), (Sy, 1)]

    C2->>R2: Add item D
    R2->>R2: cart = [A,B,D]<br/>Clock: [(Sx, 2), (Sz, 1)]

    Note over R1,R2: Conflict: Two versions exist<br/>Neither is ancestor of the other
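The concurrency check in the diagram above can be sketched as follows. This is a minimal sketch assuming a vector clock is represented as a dict mapping node id to counter; the names `descends` and `is_concurrent` are illustrative, not from any particular library.

```python
def descends(a: dict, b: dict) -> bool:
    """True if clock a descends from (or equals) clock b:
    every counter in b is matched or exceeded in a."""
    return all(a.get(node, 0) >= count for node, count in b.items())

def is_concurrent(a: dict, b: dict) -> bool:
    """Neither clock is an ancestor of the other: a true conflict."""
    return not descends(a, b) and not descends(b, a)

# The two versions from the diagram:
v1 = {"Sx": 2, "Sy": 1}   # cart = [A, B, C]
v2 = {"Sx": 2, "Sz": 1}   # cart = [A, B, D]
print(is_concurrent(v1, v2))  # True: reconciliation is needed
```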

Syntactic Reconciliation

Syntactic reconciliation uses causal information (vector clocks) to automatically resolve conflicts without understanding data semantics.

Simplicity Requires Data Loss

Last-write-wins is the simplest conflict resolution strategy: it makes conflicts disappear by silently discarding all but one of the concurrent updates.

Strategy 1: Last-Write-Wins (LWW)

Use timestamps to pick the “most recent” write.

Algorithm:

if version1.timestamp > version2.timestamp:
  keep version1
else:
  keep version2
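As runnable code, the algorithm above might look like the sketch below. The `Version` tuple and the `node_id` tie-breaker are assumptions added here; the tie-breaker ensures every replica makes the same arbitrary choice when timestamps are equal.

```python
from typing import NamedTuple

class Version(NamedTuple):
    value: object
    timestamp: float  # wall-clock time of the write (assumes synchronized clocks)
    node_id: str      # deterministic tie-breaker for equal timestamps

def lww_resolve(v1: Version, v2: Version) -> Version:
    """Keep the 'most recent' write; break timestamp ties by node id."""
    return max(v1, v2, key=lambda v: (v.timestamp, v.node_id))

a = Version(["A", "B", "C"], 1000, "Sy")
b = Version(["A", "B", "D"], 1002, "Sz")
print(lww_resolve(a, b).value)  # ['A', 'B', 'D'] -- item C is silently lost
```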

Advantages:

  • Simple, no application logic needed
  • Always produces single value
  • Works for any data type

Disadvantages:

  • Data loss: Discards concurrent writes
  • Relies on clock synchronization (problematic in distributed systems)
  • Arbitrary choice when timestamps are equal

When to use: Caches, session state, or other data where losing updates is acceptable

graph LR
    V1[Version 1<br/>cart = A,B,C<br/>timestamp: 1000] --> LWW{Last Write Wins}
    V2[Version 2<br/>cart = A,B,D<br/>timestamp: 1002] --> LWW

    LWW --> Result[Keep Version 2<br/>cart = A,B,D<br/>❌ Lost item C]

Strategy 2: Vector Clock Dominance

When one version’s vector clock descends from another, automatically keep the descendant.

Algorithm:

if all counters in V1 ≥ counters in V2 and at least one >:
  V1 descends from V2, keep V1
else if all counters in V2 ≥ counters in V1 and at least one >:
  V2 descends from V1, keep V2
else:
  concurrent, requires semantic reconciliation
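A direct translation of the dominance check, again assuming the dict-of-counters clock representation (missing entries count as zero):

```python
def compare(v1: dict, v2: dict) -> str:
    """Classify two vector clocks: dominance, equality, or concurrency."""
    nodes = set(v1) | set(v2)
    v1_ge = all(v1.get(n, 0) >= v2.get(n, 0) for n in nodes)
    v2_ge = all(v2.get(n, 0) >= v1.get(n, 0) for n in nodes)
    if v1_ge and v2_ge:
        return "equal"        # identical clocks: either copy will do
    if v1_ge:
        return "keep v1"      # v1 descends from v2
    if v2_ge:
        return "keep v2"      # v2 descends from v1
    return "concurrent"       # semantic reconciliation required

print(compare({"Sx": 3, "Sy": 1}, {"Sx": 2}))           # keep v1
print(compare({"Sx": 2, "Sy": 1}, {"Sx": 2, "Sz": 1}))  # concurrent
```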

Advantages:

  • Preserves causality
  • No clock synchronization needed
  • Automatic when possible

Disadvantages:

  • Only resolves causally ordered versions; truly concurrent updates still need another strategy
  • Requires vector clock infrastructure

This is the first line of defense—automatic when causality is clear.

Semantic Reconciliation

Semantic reconciliation uses application-specific business logic to merge concurrent versions meaningfully.

Strategy 3: Application-Driven Merge

The application receives all conflicting versions and produces a merged result.

In Dynamo:

  1. Read operation returns multiple concurrent versions
  2. Client application examines all versions
  3. Application merges them based on business logic
  4. Client writes merged version back (becomes new “official” version)

Example: Shopping Cart

Version 1: [item A, item B, item C]  Clock: [(Sx, 2), (Sy, 1)]
Version 2: [item A, item B, item D]  Clock: [(Sx, 2), (Sz, 1)]

Application merge logic:
  Merged: [item A, item B, item C, item D]  (union)
  New clock: [(Sx, 3), (Sy, 1), (Sz, 1)]
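The shopping-cart merge above can be sketched as code. This is one possible shape, not Dynamo's actual API: it assumes carts are sets, clocks are dicts, and that the write-back advances the coordinating node's own counter (which is why Sx moves from 2 to 3 in the example).

```python
def merge_carts(versions, coordinator):
    """Union the cart contents and merge the vector clocks.
    'versions' is a list of (items, clock) pairs; the merged clock is the
    element-wise max, plus one increment for the coordinating node."""
    merged_items = set().union(*(items for items, _ in versions))
    merged_clock = {}
    for _, clock in versions:
        for node, count in clock.items():
            merged_clock[node] = max(merged_clock.get(node, 0), count)
    merged_clock[coordinator] = merged_clock.get(coordinator, 0) + 1
    return merged_items, merged_clock

items, clock = merge_carts(
    [({"A", "B", "C"}, {"Sx": 2, "Sy": 1}),
     ({"A", "B", "D"}, {"Sx": 2, "Sz": 1})],
    coordinator="Sx",
)
print(sorted(items))  # ['A', 'B', 'C', 'D']
print(clock)          # {'Sx': 3, 'Sy': 1, 'Sz': 1}
```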

Merge strategies by data type:

Sets (shopping carts, wish lists):

  • Union: Merge both sets (never lose items)
  • Rationale: Customer adding items should never lose them

Counters (likes, views):

  • Sum: Add concurrent increments
  • Requires careful design (see CRDTs)

Text documents:

  • Operational transformation: Merge character-level edits
  • Used in collaborative editing (Google Docs)

Configuration:

  • Semantic merge: Combine non-conflicting settings, flag true conflicts
  • Example: Different users update different config keys—merge; same key—flag for manual resolution

graph TB
    subgraph "Application-Driven Merge"
        V1[Version 1<br/>cart: A,B,C]
        V2[Version 2<br/>cart: A,B,D]

        V1 --> App[Application Logic]
        V2 --> App

        App --> Decision{Merge Strategy}
        Decision --> Union[Union: A,B,C,D<br/>✓ No data loss]
    end

Strategy 4: Three-Way Merge

When the common ancestor is known, perform three-way merge.

Algorithm:

Ancestor: [A, B]
Version 1: [A, B, C]  (added C)
Version 2: [A, B, D]  (added D)

Changes:
  V1 vs Ancestor: +C
  V2 vs Ancestor: +D

Merged: [A, B, C, D]  (apply both changes)
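For set-valued data, the three-way merge above reduces to a few set operations. A minimal sketch, assuming the common ancestor is available:

```python
def three_way_merge(ancestor: set, v1: set, v2: set) -> set:
    """Apply both sides' changes relative to the common ancestor:
    keep an element unless someone removed it, add it if anyone added it."""
    added = (v1 - ancestor) | (v2 - ancestor)
    removed = (ancestor - v1) | (ancestor - v2)
    return (ancestor - removed) | added

print(sorted(three_way_merge({"A", "B"}, {"A", "B", "C"}, {"A", "B", "D"})))
# ['A', 'B', 'C', 'D']

# The ancestor distinguishes "V2 removed B" from "V1 added B":
print(sorted(three_way_merge({"A", "B"}, {"A", "B"}, {"A"})))  # ['A']
```

Without the ancestor, a plain union would resurrect B in the second example; this is exactly the "reduces false conflicts" advantage listed below.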

Advantages:

  • Distinguishes “added” from “was always there”
  • Reduces false conflicts
  • Used in version control (Git)

Disadvantages:

  • Requires storing ancestor or reconstructing it from history
  • More complex implementation

Strategy 5: Conflict-Free Replicated Data Types (CRDTs)

Design data structures where concurrent operations commute—merging is automatic and deterministic.

Examples:

Grow-only set (G-Set):

  • Only support add operation
  • Merge: union of sets
  • Always converges, no conflicts

PN-Counter (Positive-Negative Counter):

  • Track increments and decrements separately
  • Each replica maintains local counts
  • Merge: sum all increments, sum all decrements, compute difference

LWW-Register:

  • A single register value resolved with last-write-wins
  • Each write tagged with timestamp
  • Merge: keep entry with highest timestamp
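As a worked example, here is a hedged sketch of the PN-Counter described above: each replica tracks its own increment and decrement totals, and merge takes the element-wise max, making it commutative, associative, and idempotent. The class shape is illustrative, not from any particular CRDT library.

```python
class PNCounter:
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.incs = {}   # replica -> total increments seen at that replica
        self.decs = {}   # replica -> total decrements seen at that replica

    def increment(self, n: int = 1):
        self.incs[self.replica_id] = self.incs.get(self.replica_id, 0) + n

    def decrement(self, n: int = 1):
        self.decs[self.replica_id] = self.decs.get(self.replica_id, 0) + n

    def value(self) -> int:
        return sum(self.incs.values()) - sum(self.decs.values())

    def merge(self, other: "PNCounter"):
        """Element-wise max of both maps; safe to apply in any order."""
        for node, count in other.incs.items():
            self.incs[node] = max(self.incs.get(node, 0), count)
        for node, count in other.decs.items():
            self.decs[node] = max(self.decs.get(node, 0), count)

r1, r2 = PNCounter("R1"), PNCounter("R2")
r1.increment(3)   # R1 sees +3
r2.increment(2)   # concurrently, R2 sees +2...
r2.decrement(1)   # ...and -1
r1.merge(r2)
r2.merge(r1)
print(r1.value(), r2.value())  # 4 4 -- both replicas converge
```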

Advantages:

  • No application logic for conflicts
  • Guaranteed convergence
  • Mathematically proven correctness

Disadvantages:

  • Limited to specific data types
  • Can’t express arbitrary data structures
  • Sometimes requires more storage (e.g., tombstones for deletions)

graph LR
    subgraph "CRDT: G-Set Example"
        R1[Replica 1<br/>add C<br/>set: A,B,C]
        R2[Replica 2<br/>add D<br/>set: A,B,D]

        R1 --> Merge[Automatic Merge<br/>Union operation]
        R2 --> Merge

        Merge --> Result[Converged<br/>set: A,B,C,D<br/>✓ Deterministic]
    end

Choosing a Strategy

| Data Type | Recommended Strategy | Rationale |
| --- | --- | --- |
| Shopping cart | Application merge (union) | Never lose customer actions |
| User profile | Three-way merge | Preserve independent field updates |
| Session state | Last-write-wins | Staleness acceptable, simplicity valued |
| Counter | CRDT (PN-Counter) | Mathematical guarantees, automatic |
| Text document | Operational transformation | Character-level precision |
| Configuration | Semantic merge + manual resolution | Conflicts rare, correctness critical |

graph TD
    Start[Conflict Detected]
    Start --> Q1{Sequential or<br/>concurrent?}

    Q1 -->|Sequential| Auto[Vector clock<br/>dominance]
    Q1 -->|Concurrent| Q2{Data type?}

    Q2 -->|Set| Union[Union merge]
    Q2 -->|Counter| CRDT[PN-Counter CRDT]
    Q2 -->|Text| OT[Operational<br/>transformation]
    Q2 -->|Other| Q3{Losing data<br/>acceptable?}

    Q3 -->|Yes| LWW[Last-write-wins]
    Q3 -->|No| AppMerge[Application-driven<br/>semantic merge]

Production Experience: Dynamo

Amazon’s real-world insights from Dynamo:

Conflicts are rare: 99.94% of reads return a single version

  • Most “potential” conflicts don’t happen (sequential writes to same coordinator)
  • Network is reliable enough that replicas stay synchronized

Primary cause: High concurrent write rate, often from automation/bots

  • Not network failures as initially expected
  • Solution: Rate-limit bots, optimize for sequential writes

Shopping cart merge: Union strategy works well

  • Customers almost never complain about “extra” items
  • Better than losing items from failed writes

Timestamp reconciliation: Used for session state

  • Losing occasional session updates acceptable
  • Simplicity preferred over perfect consistency

Write Speed Through Read Complexity

Accepting all writes immediately and detecting conflicts later inverts traditional database design—availability comes from deferring hard decisions.

Pushing Complexity to Reads

A key design principle in eventual consistency systems:

Write path: Simple and fast

  • Always accept writes
  • Store conflicts, don’t resolve
  • Return success immediately

Read path: Complex but less frequent

  • Detect conflicts using vector clocks
  • Return all versions to client
  • Client resolves and writes back

This ensures writes never fail (critical for availability), while complexity is handled during reads when the application can apply business logic.

Key Insight

There is no universal “best” conflict resolution strategy—the right choice depends on:

  • Data semantics: What does the data represent?
  • Business requirements: Is data loss acceptable?
  • Conflict frequency: Rare conflicts → simpler strategies; frequent conflicts → invest in sophisticated resolution
  • Application capability: Can the app implement merge logic?

The power of eventual consistency comes from flexibility: different data types in the same system can use different resolution strategies. Shopping carts use union, session state uses LWW, user profiles use three-way merge—all coexisting in one datastore.

This demonstrates that building available systems isn’t about avoiding conflicts—it’s about handling them gracefully when they inevitably occur. The art lies in matching the resolution strategy to the data’s meaning and the application’s requirements.