The CAP theorem famously states that it is impossible to guarantee both consistency and availability in the event of a network partition. Since network partitions are always theoretically possible in a scalable, distributed system, the architects of modern scalable database systems fractured into two camps: those that prioritized availability (the NoSQL camp) and those that prioritized consistency (the NewSQL camp). For a while, the NoSQL camp was clearly the more dominant of the two --- in an “always-on” world, downtime is unacceptable, and developers were forced into handling the reduced consistency levels of scalable NoSQL systems. [Side note: NoSQL is a broad umbrella that contains many different systems with different features and innovations. When this post uses the term “NoSQL”, we are referring to the subset of the umbrella that is known for building scalable systems that prioritize availability over consistency, such as Cassandra, DynamoDB, Voldemort, CouchDB, Riak, and multi-region deployments of Azure CosmosDB.]
Over the past decade, application developers have discovered that it is extremely difficult to build bug-free applications over database systems that do not guarantee consistency. This has led to a surprising shift in momentum, with many of the more recently released systems claiming to guarantee consistency (and be CP from CAP). Included in this list of newer systems are: Spanner (and its Cloud Spanner counterpart), FaunaDB, CockroachDB, and YugaByte. In this post, we will look more deeply into the consistency claims of these four systems (along with similar systems) and note that while some do indeed guarantee consistency, way too many of them fail to completely guarantee consistency. We will trace the failure to guarantee consistency to a controversial design decision made by Spanner that has been tragically and imperfectly emulated in other systems.
Consistency, also known as “atomic consistency” or “linearizability”, guarantees that once a write completes, all future reads will reflect the value of that write. For example, let’s say that we have a variable called X, whose value is currently 4. If we run the following code:
X = 10;
Y = X + 8;
In a consistent system, there is only one possible value for Y after running this code (assuming the second statement is run after the first statement completes): 18. Everybody who has completed an “Introduction to Programming” course understands how this works, and relies on this guarantee when writing code.
In a system that does not guarantee consistency, the value of Y after running this code is also probably 18. But there’s a chance it might be 12 (since the original value of X was 4). Even if the system returns an explicit message: “I have completed the X = 10 statement”, it remains possible that the subsequent read of X will reflect the old value (4) and Y will end up as 12. Consequently, the application developer has to be aware of the non-zero possibility that Y is not 18, and must deal with all possible values of Y in subsequent code. This is MUCH more complicated, and beyond the intellectual capabilities of a non-trivial subset of application developers.
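To make that burden concrete, here is a minimal sketch of what application code has to look like over such a system. The store object and its put/get calls are hypothetical stand-ins for an eventually consistent key-value API, not any particular product’s interface:

def compute_y(store):
    store.put("X", 10)   # the store acknowledges the write
    x = store.get("X")   # may still return the old value, 4
    y = x + 8            # usually 18, but possibly 12
    # The developer can no longer assume y == 18; every code path that uses y
    # must also tolerate the stale-read case.
    return y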
[Side note: Another name for "consistency" is "strong consistency". This alternate name was coined in order to distinguish the full consistency guarantee from weaker consistency levels that also use the word "consistency" in their name (despite not providing the complete consistency guarantee). Indeed, some of these weaker consistency levels, such as "causal consistency", "session consistency", and "bounded staleness consistency" provide useful guarantees that somewhat reduce complexity for application developers. Nonetheless, the best way to avoid the existence of corner case bugs in an application is to build it on top of a system that guarantees complete, strong consistency.]
Consistency is a basic staple, a guarantee that is extremely hard to live without. So why do most NoSQL systems fail to guarantee consistency? They blame the CAP theorem. (For example, the Amazon Dynamo paper, which inspired many widely used NoSQL systems, such as Cassandra, DynamoDB, and Riak, mentions the availability vs. consistency tradeoff in the first paragraph of its “Design Considerations” section, which led to their famous “eventually consistent” architecture.) It is very hard, but not impossible, to build applications over systems that do not guarantee consistency. But the CAP theorem says that it is impossible for a system that guarantees consistency to guarantee 100% availability in the presence of a network partition. So if you can only choose one, it makes sense to choose availability. As we said above, once the system fails to guarantee consistency, developing applications on top of it without ugly corner case bugs is extremely challenging, and generally requires highly-skilled application developers who are able to handle the intellectual rigors of such development environments. Nonetheless, such skilled developers do exist, and giving up the consistency guarantee is the only way around the CAP theorem’s impossibility proof of 100% availability.
The reasoning of the previous paragraph, although perhaps well thought out and convincing, is fundamentally flawed. The CAP theorem lives in a theoretical world where there is such a thing as 100% availability. In the real world, there is no such thing as 100% availability. Highly available systems are defined in terms of ‘9s’: are you 99.9% available? Or 99.99% available? The more 9s, the better. Availability is fundamentally an exercise in imperfection; no system can guarantee 100% availability.
This fact has significant ramifications for the availability vs. consistency tradeoff posed by the CAP theorem. It is not the case that if we guarantee consistency, we have to give up the guarantee of availability. We never had a guarantee of availability in the first place! Rather, guaranteeing consistency causes a reduction in our already imperfect availability.
Therefore, the question becomes: how much availability is lost when we guarantee consistency? In practice, the answer is very little. Systems that guarantee consistency only experience a necessary reduction in availability in the event of a network partition. As networks become more redundant, partitions become increasingly rare. And even if there is a partition, it is still possible for the majority partition to remain available; only the minority partition must become unavailable. Therefore, for the reduction in availability to be perceived, there must be both a network partition and clients that can communicate only with the nodes in the minority partition (and not the majority partition). This combination of events is typically rarer than other causes of system unavailability. Consequently, the real-world impact of guaranteeing consistency on availability is often negligible. It is very possible to have a system that guarantees consistency and achieves high availability at the same time.
[Side note: I have written extensively about these issues with the CAP theorem. I believe the PACELC theorem is better able to summarize consistency tradeoffs in distributed systems.]
The argument above actually results in three distinct reasons for modern systems to be CP from CAP instead of AP (i.e., to choose consistency over availability):
(1) Systems that fail to guarantee consistency result in complex, expensive, and often buggy application code.
(2) The reduction of availability that is caused by the guarantee of consistency is minute, and hardly noticeable for many deployments.
(3) The CAP theorem is fundamentally asymmetrical. CP systems can guarantee consistency. AP systems do not guarantee availability (no system can guarantee 100% availability). Thus only one side of the CAP theorem opens the door for any useful guarantees.
I believe that these three points are what has caused the amazing renaissance of distributed, transactional database systems --- many of which have become commercially available in the past few years --- that choose to be CP from CAP instead of AP. There is still certainly a place for AP systems, and their associated NoSQL implementations. But for most developers, building on top of a CP system is a safer bet.
However, when I say that CP systems are the safer bet, I am referring to CP systems that actually guarantee consistency. Unfortunately, way too many of these modern NewSQL systems fail to guarantee consistency, despite their claims to the contrary. And once the guarantee is removed, the corner case bugs, complexity, and costs return.
I have discussed in previous posts that there are many ways to guarantee consistency in distributed systems. The most popular mechanism, which guarantees consistency at minimal cost to availability, is to use the Paxos or Raft consensus protocols to enforce consistency across multiple replicas of the data. At a simplified level, these protocols work via a majority voting mechanism: any change to the data requires a majority of replicas to agree to the change. This allows a minority of replicas to be down or unavailable while the system nonetheless continues to read and write data.
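As a rough sketch of just that voting step (real Paxos and Raft also handle leader election, log replication, and failure recovery), a write commits only once more than half of the replicas acknowledge it. The replica objects and their append() call below are hypothetical:

def replicate_write(replicas, key, value):
    acks = 0
    for replica in replicas:
        try:
            # Hypothetical RPC asking one replica to accept the change.
            if replica.append(key, value):
                acks += 1
        except TimeoutError:
            pass  # an unreachable replica simply doesn't vote
    # Commit only if a majority agreed, so a minority of replicas can be down
    # or partitioned away without blocking the system.
    return acks > len(replicas) // 2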
Most NewSQL systems use consensus protocols to enforce consistency. However, they differ in a significant way in how they use these protocols. I divide NewSQL systems into two categories along this dimension: The first category, as embodied in systems such as Calvin (which came out of my research group) and FaunaDB, uses a single, global consensus protocol per database. Every transaction participates in the same global protocol. The second category, as embodied in systems such as Spanner, CockroachDB, and YugaByte, partitions the data into ‘shards’, and applies a separate consensus protocol per shard.
The main downside of the first category is scalability. A server can process a fixed number of messages per second. If every transaction in the system participates in the same consensus protocol, the same set of servers votes on every transaction. Since voting requires communication, the number of votes per second is limited by the number of messages each server can handle. This limits the total number of transactions per second that the system can handle.
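A quick back-of-the-envelope calculation illustrates the ceiling; both numbers below are made up purely for illustration:

messages_per_second_per_server = 100_000  # assumed messaging capacity of one server
messages_per_vote = 4                     # assumed protocol messages a server handles per vote
votes_per_second = messages_per_second_per_server // messages_per_vote
# If every transaction is voted on individually, the system tops out at
# votes_per_second transactions per second (25,000 here), no matter how much
# data it holds or how many shards it has.
print(votes_per_second)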
Calvin and FaunaDB get around this downside via batching. Rather than voting on each transaction individually, they vote on batches of transactions. Each server batches all transactions that it receives over a fixed time period (e.g., 10 ms), and then initiates a vote on that entire batch at once. With 10ms batches, Calvin was able to achieve a throughput of over 500,000 transactions per second. For comparison, Amazon.com and NASDAQ likely process no more than 10,000 orders/trades per second even during peak workloads.
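The batching loop can be sketched roughly as follows; the incoming queue and the initiate_vote() call that runs one round of the global consensus protocol are hypothetical:

import time

def batching_loop(incoming, initiate_vote, epoch_ms=10):
    while True:
        batch = []
        deadline = time.monotonic() + epoch_ms / 1000.0
        # Collect every transaction that arrives during one 10 ms epoch.
        while time.monotonic() < deadline:
            txn = incoming.poll()  # hypothetical non-blocking dequeue; None if empty
            if txn is not None:
                batch.append(txn)
        if batch:
            # One consensus vote now covers the whole batch, so the per-transaction
            # cost of consensus shrinks as the batch grows.
            initiate_vote(batch)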
The main downside of the second category is that by localizing consensus on a per-shard basis, it becomes nontrivial to guarantee consistency for transactions that touch data in multiple shards. The quintessential example is the case of someone performing a sequence of two actions in a photo-sharing application: (1) removing her parents’ permission to see her photos, and then (2) posting her photos from spring break. Even though there was a clear sequence of these actions from the vantage point of the user, if the permissions data and the photo data are located in separate shards, and the shards perform consensus separately, there is a risk that the parents will nonetheless be able to see the user’s recently uploaded photos.
Spanner famously got around this downside with its TrueTime API. All transactions receive a timestamp which is based on the actual (wall-clock) current time. This makes it possible to say that one transaction is “before” or “after” another, even when the two are processed by completely disjoint sets of servers: the transaction with the lower timestamp is “before” the transaction with the higher timestamp. Of course, there may be a small amount of skew across the clocks of the different servers. Therefore, Spanner uses the concept of an “uncertainty” window, based on the maximum possible clock skew across the servers in the system. After completing their writes, transactions wait until this uncertainty window has passed before they allow any client to see the data that they wrote.
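The commit-wait idea can be sketched as follows. This is a simplification: the real TrueTime API returns an explicit uncertainty interval rather than a fixed skew bound, and the txn methods here are hypothetical:

import time

def commit_with_wait(txn, max_clock_skew_seconds):
    # Stamp the transaction with the local wall-clock time.
    commit_ts = time.time()
    txn.apply_writes(commit_ts)   # hypothetical: make the writes durable
    # Wait out the uncertainty window before acknowledging, so that by the time
    # any client can observe this data, no server's clock can still read a time
    # earlier than commit_ts.
    time.sleep(max_clock_skew_seconds)
    txn.acknowledge()             # hypothetical: release the result to clients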
Spanner thus faces a potentially uncomfortable tradeoff. It is desirable for the uncertainty window to be as small as possible, since as it gets larger, the latency of transactions increases and the overall concurrency of the system decreases. On the other hand, the system needs to be 100% sure that clock skew never exceeds the uncertainty window (since otherwise the guarantee of consistency would no longer exist), and thus larger windows are safer than smaller ones.
Spanner handles this tradeoff with a specialized hardware solution that uses both GPS and atomic clocks to ensure a minimal clock skew across servers. This solution allows the system to keep the uncertainty window relatively narrow while at the same time keeping the probability of incorrect uncertainty window estimates (and corresponding consistency violations) to be extremely small. Indeed, the probability is so small that Spanner’s architects feel comfortable claiming that Spanner “guarantees” consistency.
[It is worth noting at this point that systems that use global consensus avoid this problem entirely. If every transaction goes through the same protocol, then a natural order of all transactions emerges --- the order is simply the order in which transactions were voted on during the protocol. When batches are used instead of transactions, it is the batches that are ordered during the protocol, and transactions are globally ordered by combining their batch identifier with their sequence number within the batch. There is no need for clock time to be used in order to create a notion of before or after. Instead, the consensus protocol itself can be used to elegantly create a global order.]
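To make the side note above concrete, the global order is simply the pair (batch identifier, position within the batch). A sketch, not Calvin’s or FaunaDB’s actual code, with hypothetical attribute names:

def total_order(transactions):
    # Each transaction carries the identifier of the batch it was voted on in
    # and its position within that batch. Sorting by that pair yields a single
    # global order of all transactions, with no reference to wall-clock time.
    return sorted(transactions, key=lambda t: (t.batch_id, t.position_in_batch))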
Spanner is a beautiful and innovative system. It was also invented by Google and is widely used there. Either because of the former or the latter (or both), it has been extremely influential, and many systems (e.g., CockroachDB and YugaByte) have been inspired by Spanner’s architectural decisions. Unfortunately, these derivative systems are software-only, which means they have inherited only the software innovations without the hardware and infrastructure upon which Spanner relies at Google. In light of Spanner’s decision to have a separate consensus protocol per shard, software-only derivatives are extremely dangerous. Like Spanner, these systems rely on real-world time in order to enforce consistency --- CockroachDB on HLC (hybrid logical clocks) and YugaByte on Hybrid Time. Like Spanner, these systems rely on knowing the maximum clock skew across servers in order to avoid consistency violations. But unlike Spanner, they lack hardware and infrastructure support for minimizing and measuring clock skew uncertainty.
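For a sense of what “relying on real-world time” looks like in software, here is a simplified sketch in the spirit of the hybrid logical clock algorithm (not CockroachDB’s actual implementation). Each timestamp pairs the local wall clock with a logical counter, and the clock never runs behind any timestamp it has observed; but bounding how far two such clocks can diverge still depends on an assumed maximum physical clock skew.

import time

class HybridLogicalClock:
    def __init__(self):
        self.l = 0.0  # largest physical time observed so far
        self.c = 0    # logical counter to order events sharing the same l

    def now(self):
        # Called for a local event or when sending a message.
        pt = time.time()
        if pt > self.l:
            self.l, self.c = pt, 0
        else:
            self.c += 1
        return (self.l, self.c)

    def update(self, remote_l, remote_c):
        # Called when receiving a message stamped (remote_l, remote_c).
        pt = time.time()
        if pt > self.l and pt > remote_l:
            self.l, self.c = pt, 0
        elif remote_l > self.l:
            self.l, self.c = remote_l, remote_c + 1
        elif self.l > remote_l:
            self.c += 1
        else:
            self.c = max(self.c, remote_c) + 1
        return (self.l, self.c)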
CockroachDB, to its credit, has acknowledged that by only incorporating Spanner’s software innovations, the system cannot guarantee CAP consistency (which, as mentioned above, is linearizability).
YugaByte, however, continues to claim a guarantee of consistency. I would advise people not to trust this claim. YugaByte, by virtue of its Spanner roots, will run into consistency violations when the local clock on a server suddenly jumps beyond the skew uncertainty window. This can happen under a variety of scenarios, such as when a VM that is running YugaByte freezes or migrates to a different machine. Even without sudden jumps, YugaByte’s free edition relies on the user to set the assumptions about maximum clock skew, and any mistaken assumptions on the part of the user can result in consistency violations.
In contrast to CockroachDB and YugaByte, FaunaDB was inspired by Calvin instead of Spanner. [Historical note: the Calvin and Spanner papers were both published in 2012]. FaunaDB therefore has a single, elegant, global consensus protocol, and needs no small print regarding clock skew assumptions. Consequently, FaunaDB is able to guarantee consistency of transactions that modify any data in the database without concern for the corner case violations that can plague software-only derivatives of Spanner-style systems.
There are other differences between Calvin-style systems and Spanner-style systems that I’ve talked about in the past. In this post we focused on perhaps the most consequential difference: global consensus vs. partitioned consensus. As with any architectural decision, there are tradeoffs between these two options. But for the vast majority of applications, exceeding 500,000 transactions a second is beyond their wildest dreams, and for them the decision is clear: global consensus is probably the better choice.