Inserting and Reading data from a Cassandra Cluster

Rubber meeting the road: time to create some column families, insert some data, and finally pull it back off the stack.

First off, the keyspace was already defined, so I’m going to simply list its structure:

[default@unknown] describe ip_store;

Keyspace: ip_store:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
    Options: [replication_factor:2]
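
Incidentally, the same information is available programmatically. Below is a minimal sketch using the Hector Java client. Hector itself is an assumption on my part (the loader used later in this post is Java, but its client library isn't shown), while the cluster name "ak-ip" and the host come straight from the CLI sessions in this post.

// Sketch: inspecting the ip_store keyspace definition with the Hector client.
// The use of Hector is an assumption; "ak-ip" and 10.1.0.23:9160 are from this post.
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.ddl.ColumnFamilyDefinition;
import me.prettyprint.hector.api.ddl.KeyspaceDefinition;
import me.prettyprint.hector.api.factory.HFactory;

public class DescribeIpStore {
    public static void main(String[] args) {
        Cluster cluster = HFactory.getOrCreateCluster("ak-ip", "10.1.0.23:9160");

        KeyspaceDefinition ksDef = cluster.describeKeyspace("ip_store");
        System.out.println("strategy:           " + ksDef.getStrategyClass());
        System.out.println("replication factor: " + ksDef.getReplicationFactor());

        for (ColumnFamilyDefinition cfDef : ksDef.getCfDefs()) {
            System.out.println("column family:      " + cfDef.getName());
        }
    }
}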

With a keyspace ready for some column families, those are created next. Here I’m establishing that there will be 4 families in this single keyspace. This is contrary to suggestions in the High Performance Cassandra Handbook, but follows all other documentation I’ve seen. Considering that this is NOT a production implementation, I’m going to go with a more conventional strategy of organizing related data in the same keyspace.

The first action is to assume the desired keyspace, then add the desired column families:

[default@unknown] use ip_store;
Authenticated to keyspace: ip_store

[default@ip_store] create column family warehouse with comparator = UTF8Type;
595945d0-71ce-11e1-0000-13393ec611bf
Waiting for schema agreement...
... schemas agree across the cluster

[default@ip_store] create column family hourly with comparator = UTF8Type;
65ea2170-71ce-11e1-0000-13393ec611bf
Waiting for schema agreement...
... schemas agree across the cluster

[default@ip_store] create column family daily with comparator = UTF8Type; 
6aaeae60-71ce-11e1-0000-13393ec611bf
Waiting for schema agreement...
... schemas agree across the cluster

[default@ip_store] create column family 30day with comparator = UTF8Type;
7b85bf30-71ce-11e1-0000-13393ec611bf
Waiting for schema agreement...
... schemas agree across the cluster
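
The CLI is what I actually used, but for reference, here is a hedged sketch of creating those same four column families from Java with the Hector client (again an assumption on my part, not my actual code):

// Sketch only: creating the same four column families through Hector
// instead of the CLI. Assumes the ip_store keyspace already exists.
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.ddl.ColumnFamilyDefinition;
import me.prettyprint.hector.api.ddl.ComparatorType;
import me.prettyprint.hector.api.factory.HFactory;

public class CreateIpStoreColumnFamilies {
    public static void main(String[] args) {
        Cluster cluster = HFactory.getOrCreateCluster("ak-ip", "10.1.0.23:9160");

        String[] families = { "warehouse", "hourly", "daily", "30day" };
        for (String cf : families) {
            ColumnFamilyDefinition cfDef = HFactory.createColumnFamilyDefinition(
                    "ip_store", cf, ComparatorType.UTF8TYPE);
            // schema change; the CLI equivalent above waits for schema
            // agreement across the cluster before returning
            cluster.addColumnFamily(cfDef);
        }
    }
}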

OK, a basic schema has been established. Now, to load the data. I’ll post the relevant sections of the loader code at a later date (a rough sketch of the general approach follows the build output below). At this point you only need to consider that the loader DOES work and is loading data. We’ll look at extracting the data after loading a very small set.

time host=10.1.0.23 port=9160 ks=ip_store cf=warehouse ttl=0 datafile=5.ips ant -DclassToRun=loader.bulkIpLoader run
Buildfile: cBuild/build.xml

init:

compile:
    [javac] Compiling 1 source file to cBuild/build/classes

dist:
      [jar] Building jar: cBuild/dist/lib/cass.jar

run:
     [java] ks       ip_store
     [java] cf       warehouse
     [java] ttl      0
     [java] datafile 5.ips

BUILD SUCCESSFUL
Total time: 1 second
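
The real loader code will come in a later post. In the meantime, here is a rough, hypothetical sketch of the general idea: read the tab-separated datafile, key each row by the decimal IP, use the detection timestamp as the column name, and store a JSON blob as the value. Hector is assumed as the client library, and the class and helper names below are my own illustration, not the actual loader.bulkIpLoader internals.

// Hypothetical sketch of a bulk IP loader along the lines described above.
// NOT the actual loader.bulkIpLoader code; Hector is assumed as the client.
// Input lines look like:
//   2016468288<TAB>1011<TAB>suspicious<TAB>2012-03-13 18:40:01
import java.io.BufferedReader;
import java.io.FileReader;

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.HColumn;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class BulkIpLoaderSketch {
    public static void main(String[] args) throws Exception {
        // the ant wrapper in this post passes these as environment variables
        String host = env("host", "10.1.0.23") + ":" + env("port", "9160");
        String ks = env("ks", "ip_store");
        String cf = env("cf", "warehouse");
        int ttl = Integer.parseInt(env("ttl", "0"));
        String datafile = env("datafile", "5.ips");

        Cluster cluster = HFactory.getOrCreateCluster("ak-ip", host);
        Keyspace keyspace = HFactory.createKeyspace(ks, cluster);
        Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());

        BufferedReader in = new BufferedReader(new FileReader(datafile));
        String line;
        while ((line = in.readLine()) != null) {
            // fields: ip, prop_id, attribute, detected timestamp
            String[] f = line.split("\t");
            // simplified JSON blob; the real records carry a few more fields
            String json = "{\"prop_id\":\"" + f[1] + "\",\"attribute\":\"" + f[2]
                    + "\",\"detected\":\"" + f[3] + "\",\"ip\":\"" + f[0] + "\"}";

            // row key = decimal IP, column name = detection time, value = JSON blob
            HColumn<String, String> col = HFactory.createColumn(
                    f[3], json, StringSerializer.get(), StringSerializer.get());
            if (ttl > 0) {
                col.setTtl(ttl); // only set a TTL when one was requested
            }
            mutator.addInsertion(f[0], cf, col);
        }
        in.close();
        mutator.execute(); // flush the batched inserts
    }

    private static String env(String name, String dflt) {
        String v = System.getenv(name);
        return (v == null || v.isEmpty()) ? dflt : v;
    }
}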

Of the set, there are three unique IPs, and two rows are duplicates of other entries (IMPORTANT NOTE: the IPs have been changed to protect the innocent and clueless):

2016468288	1011	suspicious	2012-03-13 18:40:01
2016468288	1011	suspicious	2012-03-13 18:40:02
3149138705	1011	suspicious	2012-03-13 18:40:00
3149138705	1011	suspicious	2012-03-13 18:40:01
2179293112	1011	suspicious	2012-03-13 18:39:59

Having loaded these, I re-launch the command line interface, authenticate to the desired keyspace, and then issue a VERY important command to set an assumption about how we’re going to reference the keys. If you get a strange error like “cannot parse ‘187.180.11.17’ as hex bytes“, that means you likely forgot to issue the assume command. Commands I issued are in bold.

cass
Connected to: "ak-ip" on 10.1.0.23/9160
Welcome to Cassandra CLI version 1.0.8

[default@unknown] use ip_store;

[default@ip_store] assume warehouse keys as utf8;   
Assumption for column family 'warehouse' added successfully.

[default@ip_store]  get warehouse['3149138705'];

=> (column=2012-03-13 18:40:00, value=7b227265706f72746564223a22323031322d30332d31332031383a34383a3031222c22617474726962757465223a22737573706963696f7573222c2270726f705f6964223a2231303131222c2270726f7065727479223a22426f74204375747761696c222c226465746563746564223a22323031322d30332d31332031383a34303a3030222c226d65746164617461223a22222c226970223a223139302e3137342e3235312e313435227d, timestamp=1332168862658)
=> (column=2012-03-13 18:40:01, value=7b227265706f72746564223a22323031322d30332d31332031383a34383a3031222c22617474726962757465223a22737573706963696f7573222c2270726f705f6964223a2231303131222c2270726f7065727479223a22426f74204375747761696c222c226465746563746564223a22323031322d30332d31332031383a34303a3031222c226d65746164617461223a22222c226970223a223139302e3137342e3235312e313435227d, timestamp=1331689681)
Returned 2 results.
Elapsed time: 39 msec(s).

There we go. A single key row, ip_store[‘warehouse’][‘3149138705’], containing two column records, each with a JSON blob within it. Now for the next step: set an assumption about the value validator when recalling the records, so we get output mere mortals such as yourselves can understand.

[default@ip_store] assume warehouse validator as ascii; 
Assumption for column family 'warehouse' added successfully.

[default@ip_store]  get warehouse['3149138705'];
     
=> (column=2012-03-13 18:40:00, 
  value={
   "reported":"2012-03-13 18:48:01",
   "attribute":"suspicious",
   "prop_id":"1011",
   "detected":"2012-03-13 18:40:00",
   "ip":"187.180.11.17"
  }, timestamp=1331689680)

=> (column=2012-03-13 18:40:01, 
  value={
    "reported":"2012-03-13 18:48:01",
    "attribute":"suspicious",
    "prop_id":"1011",
    "detected":"2012-03-13 18:40:01",
     "ip":"187.180.11.17"
  }, timestamp=1331689681)

Returned 2 results.
Elapsed time: 2 msec(s).
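
For completeness, here is roughly what that same read looks like from code instead of the CLI. Again, a hedged sketch that assumes Hector; the row key is the one queried above.

// Sketch: reading back the warehouse row for key 3149138705 with a Hector
// slice query. Hector is an assumption; the CLI session above is what I used.
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.ColumnSlice;
import me.prettyprint.hector.api.beans.HColumn;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.query.QueryResult;
import me.prettyprint.hector.api.query.SliceQuery;

public class ReadWarehouseRow {
    public static void main(String[] args) {
        Cluster cluster = HFactory.getOrCreateCluster("ak-ip", "10.1.0.23:9160");
        Keyspace keyspace = HFactory.createKeyspace("ip_store", cluster);

        SliceQuery<String, String, String> query = HFactory.createSliceQuery(
                keyspace, StringSerializer.get(), StringSerializer.get(), StringSerializer.get());
        query.setColumnFamily("warehouse");
        query.setKey("3149138705");
        query.setRange("", "", false, 100); // first 100 columns, oldest first

        QueryResult<ColumnSlice<String, String>> result = query.execute();
        for (HColumn<String, String> col : result.get().getColumns()) {
            // column name is the detection timestamp, value is the JSON blob
            System.out.println(col.getName() + " => " + col.getValue());
        }
    }
}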

There it is! Data written, data read. Now it’s up to you to think about how you might use this simple, flexible and powerful storage engine to solve your business needs.

Apache Cassandra Project – processing “Big Data”

As an old-school OSS’er, I’ve used MySQL as my go-to DB for data storage since the turn of the century. It’s great, I love it (mostly), but it does have its drawbacks, the largest of which is that it’s now owned by Oracle, which does a HORRIBLE JOB of supporting it. I have personal experience with this as the result of a recent issue with InnoDB and MySQL.

In the meantime, some of the hot-shot up-and-comers in another department have been facing their own data processing challenges (with MySQL and other DBs), and have started to look at some highly scalable alternatives. One of the front-runners right now is Apache’s Cassandra database project.

The synopsis from the project page is (as most marketing verbiage would be) very encouraging!

The Apache Cassandra database is the right choice when you need scalability and high availability without compromising performance. Linear scalability and proven fault-tolerance on commodity hardware or cloud infrastructure make it the perfect platform for mission-critical data. Cassandra’s support for replicating across multiple datacenters is best-in-class, providing lower latency for your users and the peace of mind of knowing that you can survive regional outages.

This sounds almost too good to be true. Finally, a solution that we might be able to implement and grow, and one that does not have the incredibly frustrating drawback of InnoDB and MySQL’s fragile replication architecture. I’ve found out exactly how fragile it is: despite having a cluster of high-speed, specially designed DB servers, the amount of downtime we had was NOT ACCEPTABLE!

With a charter to handle ever growing amounts of data and the need for ultimate availability and reliability, an alternative to MySQL is almost certainly required.

Of the items discussed on the main page, this one really hits home and stands out to me:

Fault Tolerant

Data is automatically replicated to multiple nodes for fault-tolerance. Replication across multiple data centers is supported. Failed nodes can be replaced with no downtime.

I recently watched a video from the 2011 Cassandra Conference in San Francisco. A lot of good information was shared. The video is available on the Cassandra home page. I recommend muscling through the marketing BS at the beginning and taking in what they cover. Some of the notes I took:

Job graph for ‘Big Data’ is skyrocketing.

Demand for Cassandra experts is also skyrocketing.

Big data players are using Cassandra.

It’s a known issue that RDBMSs (e.g., MySQL) have serious limitations (no kidding).

RDBMSs generally have an 8 GB cache limit (this is interesting, and would explain some issues we’ve had with scalability in our DB servers, which have 64 GB of memory).

The notion that Cassandra does not have good read speed is a fallacy. Version 0.8 read speed is at parity with the already-considered-fast 0.6 write speed. Fast!?

No global or even low-level write locks. The column swap architecture alleviates the need for these locks, which allows high-speed writes.

Quorum reads and writes are consistent across the distributed cluster.

The new LOCAL_QUORUM consistency level allows a quorum to be established from only the local data center’s nodes, alleviating the latency of waiting for a quorum that includes remote nodes in other geographic locations (a client-side sketch follows these notes).

Cassandra originally used XML files for schema definitions; version 0.7 provides new features that allow on-line schema updates.

The CLI for Cassandra is now very powerful.

Has a SQL-like query language capability (yes!).

The latest version provides much-easier-to-implement secondary indexing (indexes other than the primary key).

Version 0.8 supports bulk loading. This is very interesting for my current project.

There is wide support for Cassandra in both interpreted and compiled OSS languages, including the ones I most frequently use.

CQL: the Cassandra Query Language.

The replication architecture is vastly superior to MySQL’s transaction-log replay strategy. Cassandra uses an rsync-style replication in which hash comparisons are exchanged to find which parts of the data tree a given replica node (the one responsible for that tree of data) might need updated, and then just that data is transferred. Not only does this reduce bandwidth, but it implies asynchronous replication! Finally! Now this makes sense to me!!

Hadoop support exists for Cassandra, BUT it’s not a great fit. Look into Brisk if a Hadoop implementation is desired or required.

Division of Real-Time and Analytics nodes.

Nodes can be configured to communicate with each other in an encrypted fashion, but in general inter-node communication across public-private networks should be established using VPN tunnels.
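
Coming back to the LOCAL_QUORUM note above: from the client side, the consistency level is a per-operation (or per-keyspace policy) choice rather than a schema setting. Here is a hedged sketch of how that might be wired up with the Hector Java client; the talk describes the feature in general terms, so the keyspace name and the use of Hector are my assumptions.

// Sketch: requesting LOCAL_QUORUM reads and writes via Hector's configurable
// consistency-level policy. "my-cluster" and "my_keyspace" are hypothetical
// names, and LOCAL_QUORUM presupposes a keyspace replicated with
// NetworkTopologyStrategy across multiple data centers.
import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.HConsistencyLevel;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

public class LocalQuorumExample {
    public static void main(String[] args) {
        Cluster cluster = HFactory.getOrCreateCluster("my-cluster", "10.1.0.23:9160");

        // quorum satisfied by replicas in the local data center only, so an
        // operation never waits on nodes in a remote geographic location
        ConfigurableConsistencyLevel policy = new ConfigurableConsistencyLevel();
        policy.setDefaultReadConsistencyLevel(HConsistencyLevel.LOCAL_QUORUM);
        policy.setDefaultWriteConsistencyLevel(HConsistencyLevel.LOCAL_QUORUM);

        Keyspace keyspace = HFactory.createKeyspace("my_keyspace", cluster, policy);
        // ... reads and writes through this Keyspace now use LOCAL_QUORUM
    }
}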

This needs further research, but it’s very, VERY promising!

NEXT: “Cassandra and Big Data – building a single-node ‘cluster’”