No-Wait Design - VoltDB

No-Wait Design

One Thread – One Task at a Time

There are many ways to lower the cost of concurrency, but few of them met the team's needs. To go 10X faster, concurrency costs had to be all but eliminated. Since there's only one way to do that, the choice was obvious.

All data operations in VoltDB are single-threaded, running each operation to completion before starting the next. Fancy concurrent B-trees and lockless data structures are replaced by simple data structures with no thread-safety or concurrent-access costs. Not only does this meet the speed goals, it's also much simpler to build, test, and modify: a double win.
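The run-to-completion model can be sketched in ordinary Java. This is an illustrative toy, not VoltDB code, and all class and method names are invented; the point is that once exactly one thread owns the data, a plain unsynchronized HashMap is perfectly safe:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;

// Minimal sketch of run-to-completion execution: one worker thread
// drains a queue and finishes each operation before starting the next.
// Because only that thread ever touches the table, a plain HashMap
// works -- no locks, no concurrent data structures.
class SingleThreadedEngine {
    private final Map<String, Long> table = new HashMap<>(); // unsynchronized on purpose
    private final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(1024);
    private final Thread worker = new Thread(() -> {
        try {
            while (true) {
                queue.take().run(); // run to completion, then take the next operation
            }
        } catch (InterruptedException e) {
            // shutdown
        }
    });

    SingleThreadedEngine() {
        worker.setDaemon(true);
        worker.start();
    }

    // Enqueue an operation; it will execute alone on the worker thread.
    public void submit(Runnable op) {
        try {
            queue.put(op);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public void increment(String key) {
        submit(() -> table.merge(key, 1L, Long::sum));
    }

    // Reads also go through the pipeline, so they never race with writes.
    public long get(String key) {
        CompletableFuture<Long> result = new CompletableFuture<>();
        submit(() -> result.complete(table.getOrDefault(key, 0L)));
        return result.join();
    }
}
```

Note that even reads are queued through the single pipeline, which is what makes the unsynchronized map correct.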

Note that this choice is only really possible with a memory-centric design. The latency of reading from and writing to disk is often hidden by shared-memory multi-threading, but when running single-threaded, that latency leaves the CPU idle most of the time. Eliminating disk waits lets VoltDB keep the CPU saturated and run rings around traditional architectures.

No Waiting on Users

As an operational store, the VoltDB “operations” in question are actually full ACID transactions, with multiple rounds of reads, writes and conditional logic. If the system is going to run transactions to completion, one after another, disk latency isn’t the only stall that must be eliminated; it is also necessary to eliminate waiting on the user mid-transaction.

That means external transaction control is out: no stopping a transaction in the middle to make a network round-trip to the client for the next action. The team decided to move logic server-side and use stored procedures.

There are some drawbacks here. External transaction control can make certain problems easier. Splitting business logic between the client and server requires additional organization. Also, most object-relational-mappers (ORMs) rely on external transaction control.

There are no perfect answers to these questions. The good news is these problems apply to any system designed to scale. One of the common ways to make any RDBMS faster is to reduce contention by eliminating external transaction control. At scale, the system shouldn’t check in with the client over a network while it’s holding locks. And while there are many criticisms of stored procedures, VoltDB addresses many of them. Rather than product-specific SQL-derived languages, VoltDB uses Java as its procedure language. Java is fast, familiar, and has a huge ecosystem. It’s even possible to step through stored procedure code in an Eclipse IDE.
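A real VoltDB procedure is a Java class extending org.voltdb.VoltProcedure, which can't be shown runnably here without the VoltDB jar. The self-contained toy below (all names invented) captures the pattern the text describes: the entire transaction, with its reads, conditional logic, and writes, executes in one server-side call, so nothing waits on the client mid-transaction:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a stored procedure (not the VoltDB API). The whole
// transaction -- read, conditional logic, write -- runs in one
// server-side call, so the engine never pauses mid-transaction to
// wait on a network round-trip to the client.
class DebitProcedure {
    // Server-side state; in a real database this would be a SQL table.
    private final Map<String, Long> balances = new HashMap<>();

    DebitProcedure(Map<String, Long> seedBalances) {
        balances.putAll(seedBalances);
    }

    // Returns true if the debit was applied, false if it was "aborted".
    public boolean run(String account, long amount) {
        Long balance = balances.get(account);      // read
        if (balance == null || balance < amount) { // conditional logic
            return false;                          // abort: leave data untouched
        }
        balances.put(account, balance - amount);   // write
        return true;
    }

    public long balanceOf(String account) {
        return balances.getOrDefault(account, 0L);
    }
}
```

With external transaction control, each of those three steps could be a separate client round-trip, with the transaction open the whole time.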

Much has been written about ORMs, which are famous for impeding scale. They make it easy to build large complex applications that work for low to moderate throughput, but quickly become the point of contention at scale.

At a fundamental level, stored procedures move code to the data, while external transaction control moves data to the code. The data is almost always much bigger than the code, which is why VoltDB was designed to move code, not data.

Concurrency through Scheduling, Not Shared Memory

The team building VoltDB solved the concurrency problem by doing one thing at a time using a single-threaded pipeline. But how does that reconcile with multi-core hardware?

First, the team decided VoltDB would scale to multiple machines, which meant managing multiple single-threaded pipelines across a cluster. To deal with multi-core, why not simply move the abstraction? Treat four machines with four cores each as sixteen machines: give each core a single-threaded pipeline, and build the system to manage them in aggregate. The database can thus scale to lots of cores, lots of nodes, or both, all without concurrent data structures, locks, or other expensive synchronization in the data management layer.
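The "move the abstraction" idea can be sketched as a flat pool of independent single-threaded pipelines, one per core (names are hypothetical; a real cluster distributes these across machines rather than one process):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: treat (machines x cores per machine) as one
// flat set of single-threaded pipelines managed in aggregate. Four
// machines with four cores each become sixteen pipelines.
class PipelinePool {
    private final List<ExecutorService> pipelines = new ArrayList<>();

    PipelinePool(int machines, int coresPerMachine) {
        for (int i = 0; i < machines * coresPerMachine; i++) {
            // A single-thread executor runs its tasks one at a time, in
            // order -- a run-to-completion pipeline with no shared state.
            pipelines.add(Executors.newSingleThreadExecutor());
        }
    }

    public int size() {
        return pipelines.size();
    }

    public ExecutorService pipeline(int i) {
        return pipelines.get(i);
    }

    public void shutdown() {
        for (ExecutorService p : pipelines) p.shutdown();
    }
}
```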

The next goal was to keep the pipelines full. VoltDB does this by partitioning the data. Say a finance app is partitioned by ticker symbol; now all operations on “MSFT” can be directed to the appropriate pipeline. If an app is partitioned by customer, an operation on a specific customer’s data is easily routed to the right pipeline. It’s also possible to operate on data across pipelines by breaking the work into per-pipeline chunks and combining the per-pipeline results into global results.
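Both routing patterns can be sketched in a few lines (again a toy with invented names, not VoltDB's router): single-partition work is hashed by its partition key to one pipeline, and multi-partition work fans out one chunk per pipeline and folds the partials into a global result:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Supplier;

// Hypothetical sketch of partition routing: hash the partition key
// (e.g. a ticker symbol) to pick a pipeline, so all work on "MSFT"
// lands on the same single-threaded pipeline. Cross-partition work
// fans out one chunk per pipeline and combines the partial results.
class PartitionRouter {
    private final List<ExecutorService> pipelines = new ArrayList<>();

    PartitionRouter(int partitions) {
        for (int i = 0; i < partitions; i++) {
            pipelines.add(Executors.newSingleThreadExecutor());
        }
    }

    // Single-partition operation: route by key and wait for the result.
    public <T> T route(String partitionKey, Supplier<T> op) {
        int idx = Math.floorMod(partitionKey.hashCode(), pipelines.size());
        try {
            return pipelines.get(idx).submit(op::get).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Multi-partition operation: run one chunk per pipeline, then
    // fold the per-pipeline partials into a global result.
    public long fanOutSum(Supplier<Long> perPartitionChunk) {
        List<Future<Long>> partials = new ArrayList<>();
        for (ExecutorService p : pipelines) {
            partials.add(p.submit(perPartitionChunk::get));
        }
        long total = 0;
        for (Future<Long> f : partials) {
            try {
                total += f.get();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
        return total;
    }

    public void shutdown() {
        for (ExecutorService p : pipelines) p.shutdown();
    }
}
```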

VoltDB calls this approach “concurrency through scheduling.” There are still threads and locks in the system, mostly to support networking and to schedule transactions, but all SQL and Java procedure logic is single-threaded and blindingly fast. The remaining locks are low-contention: they protect small scheduling data structures, not a transaction’s user data.

