Consistency Models Explained by Their Anomalies

Consistency models are often introduced as abstract rules: causal, sequential, linearizable, and so on. But what they really do is draw boundaries around the kinds of mistakes a distributed system can make.

If consistency is about agreement, anomalies are the cracks where that agreement breaks. Each model exists precisely to rule out a set of anomalies while tolerating others for the sake of performance or availability.

A consistency anomaly is an odd, unintuitive behaviour that appears when replicas drift apart or reorder updates. Anomalies arise because distributed systems relax coordination to stay available (see the sketch after this list):
• Updates may arrive late (stale reads)
• Effects may appear before causes (causal violation)
• Concurrent writes may overwrite each other (write conflict)
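
To make these failure modes concrete, here is a minimal two-replica sketch in Python. The `Replica` class and its last-writer-wins rule are hypothetical, purely for illustration rather than a real replication protocol; it shows a stale read and a concurrent write that is silently overwritten during a naive merge.

```python
# Toy sketch only: two replicas with last-writer-wins timestamps,
# no real networking or replication protocol.

class Replica:
    def __init__(self, name):
        self.name = name
        self.store = {}                     # key -> (value, timestamp)

    def write(self, key, value, ts):
        # Last-writer-wins: apply a write only if its timestamp is newer.
        current = self.store.get(key)
        if current is None or ts > current[1]:
            self.store[key] = (value, ts)

    def read(self, key):
        entry = self.store.get(key)
        return entry[0] if entry else None


r1, r2 = Replica("r1"), Replica("r2")

# Stale read: a write lands on r1 but has not yet replicated to r2.
r1.write("profile", "v1", ts=1)
print(r2.read("profile"))                   # -> None, although v1 exists on r1

# Write conflict: two clients update the same key on different replicas.
r1.write("profile", "from-A", ts=2)
r2.write("profile", "from-B", ts=3)

# Naive anti-entropy merge: r1 pulls r2's entry, and last-writer-wins
# silently discards client A's update.
value, ts = r2.store["profile"]
r1.write("profile", value, ts)
print(r1.read("profile"))                   # -> "from-B"
```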

Stronger models simply remove more of these anomalies. We can therefore understand the hierarchy of consistency not by its definitions, but by the anomalies it forbids.

| Anomaly | What Happens | Allowed Under | Prevented By |
| --- | --- | --- | --- |
| Stale Read | A replica serves an outdated value even though a newer write exists elsewhere. | Eventual, Causal (sometimes) | Sequential, Linearizable |
| Non-Monotonic Read | A client observes newer data first, then older data later when switching replicas. | Eventual | Monotonic Reads, Session Consistency |
| Read-Your-Writes Violation | A client fails to see its own previous updates in subsequent reads. | Eventual | Read-My-Writes, Causal, Linearizable |
| Causal Violation | An effect appears before its cause (for example, a reply before the original message). | Eventual, Monotonic Reads | Causal, Sequential, Linearizable |
| Concurrent Write Conflict | Two clients update the same data concurrently on different replicas; one overwrites or conflicts with the other. | Eventual, Causal (without conflict resolution) | Systems enforcing total order broadcast or consensus (Sequential, Linearizable) |
| Real-Time Violation | A later operation in physical time appears before an earlier one in the global order. | Sequential | Linearizable |
| Non-Convergence (Divergent State) | Replicas permanently disagree after conflicting updates due to lack of reconciliation. | Weak eventual consistency, network partitions | Eventual (with repair), Linearizable |
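
Two of the session-level anomalies in the table are easy to reproduce with the same toy `Replica` class from the sketch above (again a contrived scenario, not a real client library): a read-your-writes violation and a non-monotonic read, both caused by routing reads to whichever replica happens to answer.

```python
# Continues the toy Replica class from the earlier sketch.
r1, r2 = Replica("r1"), Replica("r2")

# The client writes twice through r1 ...
r1.write("counter", 41, ts=1)
r1.write("counter", 42, ts=2)

# ... but replication has delivered only the first update to r2 so far.
r2.write("counter", 41, ts=1)

# Read-your-writes violation: the client's next read is routed to r2
# and does not reflect its own latest write.
print(r2.read("counter"))   # -> 41, not 42

# Non-monotonic read: reading r1 and then r2 makes the value appear
# to move backwards in time from the client's point of view.
print(r1.read("counter"))   # -> 42
print(r2.read("counter"))   # -> 41 again
```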

Consistency models can also be viewed as progressive filters: each model forbids a new class of anomalies while still tolerating weaker ones. The stronger the model, the fewer ways replicas can misbehave.

| Consistency Model | Anomalies It Prevents | Still Allows |
| --- | --- | --- |
| Eventual consistency | None; guarantees only convergence. | Stale reads, causal and visibility anomalies. |
| Read-My-Writes | Removes self-visibility anomalies. | Causal and ordering anomalies across clients. |
| Monotonic Reads | Prevents regressions in a client’s own view. | Cross-client ordering anomalies. |
| Causal consistency | Eliminates causal violations and maintains dependency order. | Concurrent reordering and temporary staleness. |
| Sequential consistency | Ensures all replicas share one total order. | Real-time misordering. |
| Bounded staleness | Limits the age of stale reads to a known bound. | Temporal misalignment within the bound. |
| Linearizability | Adds real-time alignment to the total order. | None at single-object scope. |
| Strict serializability | Extends linearizability to multi-object transactions. | None (ideal consistency). |
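
As a rough illustration of how the weaker session guarantees in this table can be enforced, the sketch below tracks the newest timestamp a session has seen and skips replicas that lag behind it. `SessionClient` is a made-up name built on the toy `Replica` class above; real systems implement these guarantees inside the database or its client driver.

```python
# Hypothetical session-guarantee sketch built on the toy Replica class.
class SessionClient:
    def __init__(self, replicas):
        self.replicas = replicas
        self.high_ts = 0                    # newest timestamp this session depends on

    def write(self, replica, key, value):
        self.high_ts += 1
        replica.write(key, value, self.high_ts)

    def read(self, key):
        # Monotonic reads / read-your-writes: accept only replicas whose
        # copy is at least as new as anything this session has observed.
        for replica in self.replicas:
            entry = replica.store.get(key)
            if entry and entry[1] >= self.high_ts:
                self.high_ts = entry[1]
                return entry[0]
        return None                         # no fresh-enough replica: retry or wait


r1, r2 = Replica("r1"), Replica("r2")
client = SessionClient([r2, r1])            # r2 happens to be tried first

# An old value is already present on both replicas.
r1.write("profile", "v1", ts=0)
r2.write("profile", "v1", ts=0)

client.write(r1, "profile", "v2")           # the session write lands on r1 only
print(client.read("profile"))               # -> "v2": r2's stale copy is skipped
```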

Conclusion

Every consistency model is a trade-off between coordination and chaos. The weaker the model, the more anomalies can leak through; the stronger the model, the fewer surprises clients will see.

When we describe models in terms of the anomalies they prevent, the hierarchy of consistency becomes clear: not a list of definitions, but a sequence of repaired mistakes.
