NewSQL Databases

By Amitai B.

Oct 5, 2016

SQL or NoSQL? How about NewSQL?!

Fifteen years ago, a single database was enough to hold all of an application’s data. Over the years, applications started to collect more and more data, from more people, devices, and events. A single database could no longer hold that amount of data, and as usage grew, the system needed to be scaled.

There are two ways to scale a system:

  • Vertical scaling (scaling up) – Adding more resources to a single machine, such as memory, CPUs, and network adapters.
  • Horizontal scaling (scaling out) – Adding more servers, each with the same or even less computing power.

Because traditional databases (RDBMS) such as MySQL, Oracle, or SQL Server could not scale out, companies had to scale them up. They did this by buying monster machines that cost a lot of money.

This was the main reason for adopting NoSQL databases such as MongoDB, Cassandra, and others. They offered a scale-out solution on commodity hardware, which lowered the price.

But those databases had to make compromises in order to support scaling out. To understand those compromises, I will describe two concepts: the CAP theorem and ACID.

CAP theorem

According to the “CAP theorem,” a distributed data store can provide at most two of the following three guarantees:

  • Consistency – Every read receives the most recent write.
  • Availability – Every request receives a response, without a guarantee that it contains the most recent version of the information.
  • Partition tolerance – The system continues to operate despite arbitrary partitioning due to network failures.

For example, MongoDB is “CP” (Consistency + Partition tolerance), Cassandra is “AP,” and RDBMS are “CA.”

ACID

ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties of database transactions:

  • Atomicity – Each transaction is “all or nothing.”
  • Consistency – Any transaction will bring the database from one valid state to another.
  • Isolation – Ensures that concurrent transactions produce the same result as if they had been executed serially, one after the other.
  • Durability – Ensures that once a transaction has been committed, it will remain in the database.

The ACID properties allow developers to be sure of the correctness of the data in any transaction or calculation, at any given time. There are no surprises, no missing data, and no data-integrity problems.
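To make this concrete, here is a minimal sketch of a classic ACID transaction in standard SQL: a money transfer between two rows of a hypothetical accounts table (the table and IDs are illustrative, not from any particular system). Atomicity guarantees that either both updates apply or neither does:

    -- Transfer $100 between two accounts in a hypothetical table.
    BEGIN;

    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;

    -- If anything fails before this point, ROLLBACK undoes both updates;
    -- once COMMIT returns, durability guarantees the change survives a crash.
    COMMIT;

A store without transactions gives up exactly this guarantee: a reader could briefly observe the money deducted from one account but not yet added to the other.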

All RDBMS support ACID and provide the “C” in the CAP theorem. That is why they are so reliable. NoSQL databases, on the other hand, do not support ACID. Some of them, such as MongoDB, offer a weaker consistency model, usually eventual consistency: consistency is guaranteed eventually, but not at any given moment. That is fine for many use cases but not for others (think of a financial transaction, or even the order of comments on a blog post).

To overcome this, companies had the following options:

  • Use two (or more) databases: an RDBMS for the data that needs strong integrity, and another database for the rest, such as event streams or analytics data.
  • Use a NoSQL database and implement an ACID layer on top of it in the application server.

Both solutions can cause problems. Keeping databases synchronized, and maintaining all of them, is not an easy job. And a transactional layer developed by a software engineer who is not skilled in this area can introduce bugs into a very sensitive part of the application, as well as hurt performance.

This is the background for the invention of NewSQL databases.

NewSQL databases

NewSQL databases aim to solve these issues. The term was coined by the 451 Group in its now-famous report, “NoSQL, NewSQL and Beyond.” NewSQL databases support ACID, are distributed, can scale out, and can handle large volumes of data with great performance.

NewSQL types

There are three types of NewSQL databases:

  • New architectures – Databases that were designed to operate in a distributed cluster (Google Spanner, Clustrix, VoltDB, MemSQL).
  • SQL engines – These are highly optimized storage engines for SQL (MySQL Cluster, Infobright, TokuDB).
  • Transparent sharding – This provides a sharding middleware layer that automatically splits databases across multiple nodes (ScaleBase).

I will describe two different approaches: VoltDB, which is built on a new architecture, and scaling out Postgres.

VoltDB

VoltDB is an in-memory, ACID-compliant RDBMS that uses a shared-nothing architecture. One of VoltDB’s designers is Michael Stonebraker, and many concepts from his academic research, such as the shared-nothing architecture, were implemented in this database. In the paper “OLTP Through the Looking Glass, and What We Found There,” Stonebraker and his colleagues explain why database-architecture concepts that were right in the 1970s are no longer valid, especially given in-memory capabilities. He argues that new databases should use modern architectures that fit today’s challenges. VoltDB is built on these concepts.

In-memory database

VoltDB is an in-memory database, which makes it faster than on-disk databases. But how can it support ACID, or more specifically, how can it support durability? Durability means that once a transaction has been committed, the data is guaranteed not to be lost.

There are a few ways in-memory databases can support durability:

  • Non-volatile DIMM – RAM that contains non-volatile memory.
  • Non-volatile random access memory (NVRAM) – RAM that is backed up by a battery.
  • Transaction logging – All transactions are logged, so the database can be recovered after a crash.
  • High availability – The data is replicated between nodes; this should be combined with other methods to ensure safe durability.
  • Snapshots – Every once in a while, a snapshot of the RAM is persisted to disk.

VoltDB uses snapshots and transaction logging to ensure durability.

Shared nothing architecture (SN)

VoltDB uses a shared-nothing architecture (SN): a distributed computing architecture in which each node is independent and self-sufficient, with no central entity in control. That is how VoltDB achieves database parallelism. There is no single point of failure, and there are no bottlenecks. Each CPU core handles its own slice of the data independently, which allows VoltDB to scale by adding cores.

Partitioning and scaling out

VoltDB manages the partitions itself, partitioning both the data and the transactions that access that data, so multiple transactions can be processed in parallel. Throughput scales easily by increasing either the number of partitions per node or the number of nodes in the cluster. If a transaction needs data from another partition, it can obtain it, but at a cost in performance. The sketch below shows how partitioning is declared.
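As a rough sketch, this is what partitioning looks like in VoltDB’s DDL (the table and column names here are hypothetical, not taken from the article):

    -- Hypothetical schema: each partition owns the rows whose
    -- phone_number hashes to it, so partitions can work in parallel.
    CREATE TABLE votes (
      phone_number      BIGINT  NOT NULL,
      contestant_number INTEGER NOT NULL
    );
    PARTITION TABLE votes ON COLUMN phone_number;

A transaction that touches a single phone_number runs entirely inside one partition; a transaction that spans partitions is the multi-partition case that hurts performance.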

Single threaded

Because VoltDB does not touch the disk during transactions, it does not need to run multi-threaded. One of the main reasons traditional RDBMS run multi-threaded is that they write data to disk, and multi-threading is an efficient way to hide disk waits. In VoltDB, each core runs a single thread, which eliminates the need for locking and latching.

Stored procedure interface for transactions

VoltDB uses stored procedures as its transactions. Stored procedures are atomic: they either succeed or fail and roll back as a whole. This boosts performance, because in other RDBMS a lot of time is wasted on network round trips as data goes back and forth between the database and the application server. Another advantage of stored procedures is that they are pre-compiled. VoltDB supports writing stored procedures in SQL, and in Java for more complex procedures.
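For illustration, a simple single-statement procedure can be declared directly in the DDL (the names continue the hypothetical votes schema above); more complex logic would be written as a Java class instead:

    -- A hypothetical single-partition procedure. Because it is
    -- partitioned on the same column as the table, each call runs
    -- entirely inside one partition: no locks, no extra network hops.
    CREATE PROCEDURE CountVotesForPhone
      PARTITION ON TABLE votes COLUMN phone_number
      AS SELECT COUNT(*) FROM votes WHERE phone_number = ?;

The application then invokes CountVotesForPhone with a single call, instead of opening a transaction and shipping statements back and forth.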

Scale-out Postgres

A different approach to building a new database is to take an existing database with mature, proven stability and add scale-out capability to it. This has been done with many RDBMS. Here, I will focus on Postgres.

There are many approaches to scaling PostgreSQL. There is no built-in solution embedded in Postgres yet, but the same effect can be achieved with third-party tools that build on Postgres features. Greenplum and Postgres-XL are just two of these tools. Below, I will describe how Citus Data does it.

Citus database

Citus is a distributed database that combines the low-latency requests that power real-time applications with high-throughput, interactive analytics on billions of events.

Citus does this by extending PostgreSQL to distribute your workload across multiple hosts, leveraging the memory, storage, and processing power of multiple machines.

It distributes the data across a cluster of commodity servers. Incoming SQL queries are then processed in parallel across these servers.

Citus architecture

One of the PostgreSQL instances in the cluster is the Citus master. The master stores metadata about the shards and about the worker PostgreSQL instances (the Citus workers). All interaction with the database goes through the master, using the standard PostgreSQL API. The data itself is distributed across the workers: each table is logically “sharded,” meaning it is spread across shards according to a selected distribution column. This is what enables scaling out. Each shard is replicated on at least two workers, so the loss of a single machine does not impact data availability.

The master receives a query, distributes it to the relevant workers according to the metadata, merges their results, and returns the result to the user. If a worker fails mid-query, Citus completes the query by re-routing the failed portions to other workers that hold a copy of the shard.
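As a rough sketch of how this looks to the user (the events table is hypothetical, and the exact helper-function name varies between Citus versions), everything is plain SQL issued against the master:

    -- Run on the Citus master: shard the events table by user_id.
    CREATE TABLE events (
      user_id    BIGINT NOT NULL,
      event_type TEXT,
      created_at TIMESTAMPTZ
    );
    SELECT create_distributed_table('events', 'user_id');

    -- The master fans this query out to the relevant workers in
    -- parallel and merges their partial counts into one result.
    SELECT event_type, count(*) FROM events GROUP BY event_type;

From the application’s point of view, nothing changes: it still speaks ordinary PostgreSQL to a single endpoint.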

Conclusion

NewSQL databases allow both data integrity and scalability at a low cost. They can either be built from scratch on a new architecture, or extend the capabilities of existing databases.

What lies ahead? I believe that the traditional RDBMS vendors will adopt the concepts of NewSQL, either by implementing them or by acquiring technologies that integrate them into their products. In the future, all databases will support scaling out and distribution.

This will reduce the market share of the popular NoSQL databases, and the dominance of today’s major players will remain as it is. Although the decreasing price of memory and CPUs could make a single machine strong enough to handle big data, the train has already left the station, and competition will drive companies to support distribution and scale-out capabilities.
