Eventual Consistency and 5G Won’t Play Well Together

September 17, 2020

5G has a headline requirement of 1 ms latency. Legacy RDBMS products can't come anywhere near this, and many newer database products will face serious problems because they use Eventual Consistency. Why?

Latency

To provide scalability and resilience, newer database technologies keep multiple copies of the data on different physical servers, so the loss of a single server should not mean the loss of data. To keep the implementation simple, many of these databases work on the assumption that all of these copies are equal: you can read from any of them, you can write to whichever copy you like, and some internal mechanism will keep the changes in sync. Victory is declared when 'enough' of the copies have accepted the change.

What is 'enough'? Knowing you've written to a single node that will then tell the others is a good start, and it's fast. But what if that node dies before it passes on your change? To avoid that, you could insist on the change being written to a majority of nodes. But waiting for the first node to finish the write, then waiting for it to tell all the other nodes and for 'enough' of them to acknowledge the change before going back to the client, takes time. In 5G you don't have much of that.
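To make the trade-off concrete, here is a minimal, self-contained Java sketch. It is not any real database's API; the replica count, timings, and jitter model are invented purely to illustrate how waiting for a majority of five copies stretches write latency compared with accepting a single acknowledgment:

```java
import java.util.Arrays;
import java.util.Random;

// Minimal sketch: how long does a write take if we accept one replica's
// acknowledgment vs. waiting for a majority of five? All timings are
// invented for illustration; no real database is involved.
public class QuorumLatencySketch {

    static final Random RNG = new Random(42);

    // Simulated round trip to a replica, in microseconds: a 200us base
    // cost plus an exponential tail to mimic network jitter.
    static double replicaRoundTripMicros() {
        return 200.0 - Math.log(1.0 - RNG.nextDouble()) * 150.0;
    }

    // "Fast but risky": acknowledge as soon as one node has the change.
    static double oneAckWriteMicros() {
        return replicaRoundTripMicros();
    }

    // "Safe but slow": the first node applies the write, then forwards it
    // and waits for two more acks, giving a majority of the five copies.
    static double majorityWriteMicros() {
        double first = replicaRoundTripMicros();
        double[] peerAcks = new double[4];
        for (int i = 0; i < peerAcks.length; i++) {
            peerAcks[i] = replicaRoundTripMicros();
        }
        Arrays.sort(peerAcks);
        return first + peerAcks[1]; // second-fastest peer completes the majority
    }

    public static void main(String[] args) {
        int trials = 100_000;
        double oneAck = 0, majority = 0;
        for (int i = 0; i < trials; i++) {
            oneAck += oneAckWriteMicros();
            majority += majorityWriteMicros();
        }
        System.out.printf("avg write latency, 1 ack:         %.0f us%n", oneAck / trials);
        System.out.printf("avg write latency, majority of 5: %.0f us%n", majority / trials);
    }
}
```

Because the majority write stacks a peer acknowledgment on top of the first node's round trip, its latency is dominated by the slower responders, and that is exactly the time a 1 ms budget cannot spare.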

Accurate Reads

In an Eventually Consistent system it's possible to read an old version of a data item, such as a customer's remaining balance. It may already have been updated on three of the five nodes that hold it, but you happened to ask one of the other two. You can therefore make decisions based on out-of-date information. You can avoid this by asking every node that holds the data, but, as with the write example above, that takes time.
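The same point in a few lines of code. This sketch (again self-contained Java with invented numbers, not a real client library) compares reading one replica at random against reading all five and keeping the newest version:

```java
import java.util.Random;

// Minimal sketch of the stale-read problem: an update has reached 3 of the
// 5 replicas that hold a balance. Reading one replica is fast but may
// return the old value; reading all five and keeping the newest version is
// accurate but costs extra round trips. Numbers are illustrative only.
public class StaleReadSketch {

    public static void main(String[] args) {
        Random rng = new Random(7);
        int trials = 100_000;
        int staleSingleReads = 0;

        // Version held by each of the five replicas: the write of version 2
        // has landed on three of them so far.
        long[] versions = {2, 2, 2, 1, 1};

        for (int t = 0; t < trials; t++) {
            // Strategy 1: ask one replica at random (one round trip).
            if (versions[rng.nextInt(versions.length)] == 1) {
                staleSingleReads++;
            }
        }

        // Strategy 2: ask every replica and keep the highest version.
        long newest = Long.MIN_VALUE;
        for (long v : versions) {
            newest = Math.max(newest, v);
        }

        System.out.printf("single-replica reads that were stale: %.1f%%%n",
                100.0 * staleSingleReads / trials);
        System.out.println("read-all-replicas result: version " + newest
                + " (always current, but five round trips)");
    }
}
```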

Reliable Writes

Another side effect of the 'anyone can change anything' model: if your Eventually Consistent database is configured to declare a write finished when fewer than half of the nodes holding the data have acknowledged it, there's a real possibility that a conflicting write has just succeeded in the other half of the cluster, and another client is now out there acting on the assumption that its write worked. Under the covers the database will reconcile this by deciding that one of the writes won, usually based on reported timestamps. The losing write may simply vanish, which is interesting if you're the losing client process. This can be avoided, but only by involving a majority of the nodes in every write.
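Here is a minimal Java sketch of that failure mode under the common 'last write wins' reconciliation rule. The replica layout, values, and timestamps are all hypothetical:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.Objects;

// Minimal sketch of 'last write wins' reconciliation. Both clients are told
// their write succeeded after reaching only 2 of 5 replicas; when the
// replicas later reconcile, the higher timestamp wins and the other
// client's acknowledged write silently vanishes.
public class LastWriteWinsSketch {

    record Versioned(String value, long timestampMicros) {}

    public static void main(String[] args) {
        Versioned[] replicas = new Versioned[5];

        // Client A writes "balance=90"; the write lands on replicas 0 and 1.
        Versioned a = new Versioned("balance=90", 1_000_000L);
        replicas[0] = a;
        replicas[1] = a;   // 2 acks out of 5 -> client A is told "success"

        // Client B concurrently writes "balance=40" to replicas 3 and 4.
        Versioned b = new Versioned("balance=40", 1_000_005L);
        replicas[3] = b;
        replicas[4] = b;   // 2 acks out of 5 -> client B is told "success"

        // Anti-entropy later reconciles: the latest timestamp wins everywhere.
        Versioned winner = Arrays.stream(replicas)
                .filter(Objects::nonNull)
                .max(Comparator.comparingLong(Versioned::timestampMicros))
                .orElseThrow();
        Arrays.fill(replicas, winner);

        System.out.println("surviving value: " + winner.value());
        System.out.println("client A's acknowledged write has vanished");
    }
}
```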

While these problems may seem esoteric, at high volumes they can be expected to happen more or less continuously.

How does Volt Active Data handle this?

Volt Active Data is an immediately consistent database. As requests come in, we send them to logical divisions called Partitions, each of which handles a subset of the data. The Partition Leader gets the request, passes it to its replicas, and then every copy executes it at more or less the same time. As the results come back, the Partition Leader compares them with its own and sends the answer to the client only if they all agree. As a consequence, Volt Active Data can allow neither conflicting writes nor inaccurate reads. Just as importantly, because the actual work for an individual request is done on all the hosts that own the data at the same time, we do all this really, really quickly and can operate within 5G's latency envelope.
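As a rough illustration of the idea (a simplified sketch, not Volt Active Data's actual implementation), the leader replicates the operation itself rather than its result, every copy executes it deterministically, and the client only gets an answer once the results agree:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Simplified sketch of command replication within one partition: the leader
// forwards the *operation* to its replicas, every copy executes it
// deterministically, and the leader only replies to the client when all
// results agree. Names and structure here are illustrative, not Volt's code.
public class PartitionSketch {

    static class Replica {
        final Map<String, Long> data = new HashMap<>();
        long execute(Function<Map<String, Long>, Long> command) {
            return command.apply(data); // deterministic, run on every copy
        }
    }

    public static void main(String[] args) {
        Replica leader = new Replica();
        Replica[] followers = { new Replica(), new Replica() };

        // Command: debit 10 from a balance and return the new value.
        Function<Map<String, Long>, Long> debit = db -> {
            long next = db.getOrDefault("balance", 100L) - 10;
            db.put("balance", next);
            return next;
        };

        long leaderResult = leader.execute(debit);
        for (Replica f : followers) {
            long r = f.execute(debit);
            if (r != leaderResult) {
                throw new IllegalStateException("replica diverged: " + r);
            }
        }
        // All copies agree, so it is safe to answer the client.
        System.out.println("reply to client: balance=" + leaderResult);
    }
}
```

Because every copy does the same work at the same time, there is no extra round of "tell the other nodes and wait" after the fact, which is what keeps the latency low.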
