
Common Problems in Cassandra Data Models

In our opinion, Cassandra is one of the best NoSQL database technologies we've used for high-availability, large-scale, high-speed business platforms. More specifically, we work with the Datastax Enterprise version of Cassandra when clients are above a certain size and need enterprise-grade support 24/7, 365 days a year, with expertise around the world. There are many topics I could have written about for my first "Cassandra" post on our blog, but I decided to write about what I call the three stooges of Cassandra data modeling: Larry (Tombstones), Curly (Data Skew), and Moe (Wide Partitions).

  1. Wide Partitions. A partition is the fundamental unit of replication in Cassandra. A wide partition means that data is collecting in one large bucket rather than many smaller ones. Partitions should not be bigger than 100MB; most of the problematic ones I see are between 150-500MB, some reach 8GB, and I've seen partitions as big as 64GB. Read this about calculating partition size, and see the back-of-envelope estimate after this list.
  2. Data Skew. Data skew occurs when the partitioner (the algorithm that decides where to place data) puts too much information on a few nodes rather than spreading it around the cluster. If the ONLY key for the data is CountryCode or StateCode, e.g. "US" or "VA", then all of that data lands in a single partition and is only present on the number of nodes equal to the replication factor. When the data is read or written, only those servers are bombarded; the schema sketch after this list shows the anti-pattern and one fix. Read more about physical key optimizations.
  3. Tombstones. Tombstones are Cassandra's way of being efficient with writes: deleted data is only physically removed later, during compaction. When data is deleted, or a null value is inserted or updated in an existing table, a tombstone record is added. At normal levels they are not a problem; they become problematic when tremendous numbers of tombstones accumulate because of frequent updates that write nulls, wide partitions with many rows in one partition, or data skew. What Cassandra does on a read is reconcile the immutable writes into the current state of the data, which takes memory and computation and is magnified even more if the partition is large; the tracing sketch after this list shows one way to spot it. Read more about Tombstones and other common tombstone issues.
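
To put the 100MB guideline from item 1 in perspective, a quick back-of-envelope estimate is usually enough to spot a partition that will outgrow it. The numbers below are hypothetical; plug in your own table's row counts and average row sizes:

```python
# Rough partition-size estimate: rows per partition x average row size.
# This ignores per-cell and clustering overhead, but it is close enough
# to flag a data model that will blow past the ~100MB guideline.
MAX_PARTITION_BYTES = 100 * 1024 * 1024

def estimate_partition_bytes(rows_per_partition, avg_row_bytes):
    return rows_per_partition * avg_row_bytes

# Hypothetical example: a sensor table keyed only by sensor_id,
# one reading per minute, five years of retention.
rows_per_day = 24 * 60      # 1,440 readings per day
avg_row_bytes = 200         # assumed average serialized row size
days_retained = 365 * 5

size = estimate_partition_bytes(rows_per_day * days_retained, avg_row_bytes)
print(f"~{size / (1024 * 1024):.0f} MB per partition")  # ~501 MB, already too wide
print("over the guideline" if size > MAX_PARTITION_BYTES else "within the guideline")
```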
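
To make item 2 concrete, here is a minimal sketch using the Python driver (cassandra-driver); the contact point, keyspace, table, and column names are all assumptions for illustration. The first table keys on country_code alone, so every row for "US" piles into one partition that lives only on replication-factor-many nodes; the second spreads the same rows around the ring by making a high-cardinality column part of the partition key:

```python
from cassandra.cluster import Cluster

# Hypothetical local cluster and keyspace, purely for illustration.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect("demo")

# Anti-pattern: country_code is the ONLY partition key, so all "US" rows
# pile into a single partition -- data skew plus a wide partition.
session.execute("""
    CREATE TABLE IF NOT EXISTS customers_by_country (
        country_code text,
        customer_id  uuid,
        name         text,
        PRIMARY KEY ((country_code), customer_id)
    )
""")

# Better: make the high-cardinality customer_id part of the partition key
# (here combined with country_code) so rows hash all around the ring.
session.execute("""
    CREATE TABLE IF NOT EXISTS customers_by_country_and_id (
        country_code text,
        customer_id  uuid,
        name         text,
        PRIMARY KEY ((country_code, customer_id))
    )
""")
```

The trade-off is that the second table can no longer be queried by country alone, so in practice you pick a partition key (or a bucket, as in the sharding sketch further down) that matches your read pattern.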
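
And for item 3, one way to see tombstones hurting reads is to turn on query tracing and look at the server-side events. A minimal sketch with the Python driver, reusing the hypothetical demo keyspace from above; the exact wording of trace messages varies by Cassandra version:

```python
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("demo")

# Trace a read against a suspect table and print the server-side events.
stmt = SimpleStatement("SELECT * FROM customers_by_country WHERE country_code = 'US'")
rows = session.execute(stmt, trace=True)

trace = rows.get_query_trace()
for event in trace.events:
    # Events such as "Read N live rows and M tombstone cells" reveal how many
    # tombstones each read has to reconcile before returning data.
    print(event.source_elapsed, event.description)
```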

The aforementioned issues are the most common culprits for Cassandra / non-relational data structures. The ONLY way to resolve them is to fix the schema and the code that talks to that schema, because changing a primary key isn't really possible using an ALTER statement in CQL.

Generally speaking, you should follow these strategies to keep the Three Stooges of Cassandra from wreaking havoc on your business-critical system.

  1. Create better data models with keys that are likely to get distributed around the ring. The training materials at Datastax Academy are a good source for understanding this. I highly recommend taking DS220 – Data Modeling.
  2. Do not insert / update / upsert with null values. Use placeholder strings, or bind no value at all for that column (see the sketch after this list).
  3. Redistribute data with a bucketing / synthetic sharding approach when wide partitions show up. Read Ryan Svihla's post on Synthetic Sharding; a sketch of the idea follows this list.
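
For item 2, the Datastax drivers let you leave a bound parameter unset instead of binding null, and an unset column writes no tombstone. A minimal sketch with the Python driver (the keyspace and table are the same hypothetical ones used earlier); note that unset values require protocol v4, i.e. Cassandra 2.2 or later:

```python
import uuid

from cassandra.cluster import Cluster
from cassandra.query import UNSET_VALUE

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("demo")

insert = session.prepare(
    "INSERT INTO customers_by_country (country_code, customer_id, name) VALUES (?, ?, ?)"
)

# Binding None for name would write a tombstone for that column.
# Binding UNSET_VALUE leaves the column alone -- no tombstone is created.
session.execute(insert, ("US", uuid.uuid4(), UNSET_VALUE))
```

If you are stuck on an older protocol version, the fallback is to build statements that only name the columns you actually have values for.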
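
And for item 3, here is a sketch of the bucketing idea (read Ryan Svihla's post for the full treatment; the table, bucket count, and column names here are assumptions): a synthetic bucket component is added to the partition key so one logical key is spread over several physical partitions. Writes pick a bucket; reads for that key fan out over all buckets and merge the results client-side:

```python
import random

from cassandra.cluster import Cluster

BUCKETS = 16  # assumed shard count; size it so each bucket stays well under ~100MB

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("demo")

# The partition key is (country_code, bucket), so "US" data is split across
# 16 physical partitions instead of one wide one.
session.execute("""
    CREATE TABLE IF NOT EXISTS events_by_country_bucketed (
        country_code text,
        bucket       int,
        event_time   timestamp,
        payload      text,
        PRIMARY KEY ((country_code, bucket), event_time)
    )
""")

insert = session.prepare(
    "INSERT INTO events_by_country_bucketed (country_code, bucket, event_time, payload) "
    "VALUES (?, ?, toTimestamp(now()), ?)"
)
session.execute(insert, ("US", random.randrange(BUCKETS), "some event payload"))

# Reading a country means querying every bucket and merging the results.
select = session.prepare(
    "SELECT * FROM events_by_country_bucketed WHERE country_code = ? AND bucket = ?"
)
rows = [row for b in range(BUCKETS) for row in session.execute(select, ("US", b))]
```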

You can also check out this slide deck on Slideshare.net for a presentation I gave at the Cassandra / Datastax DC Meetup.