I'm trying to understand what exactly happens at the storage engine level when a row (a set of columns) is inserted into a CQL-style table.
CREATE TABLE log_date (
userid bigint,
time timeuuid,
category text,
subcategory text,
itemid text,
count int,
price int,
PRIMARY KEY ((userid), time) -- #1
PRIMARY KEY ((userid), time, category, subcategory, itemid, count, price) -- #2
);
Suppose that I have a table like the one above.
In case #1, a CQL row will generate 6 (or 5?) columns in storage.
In case #2, a CQL row will generate a single, very composite column in storage.
I'm wondering which is the more effective way to store logs in Cassandra.
Please focus on the two given situations.
I don't need any real-time reads. Just writes.
If you want to suggest other options, please refer to the following.
The reasons I chose Cassandra for storing logs are:
- Linear scalability, and it's good for heavy writes.
- It has a schema in CQL. I really prefer having a schema.
- It seems to support Spark well enough. DataStax's cassandra-spark connector seems to have data locality awareness.
Let's say that I build tables with both of your PRIMARY KEYs, and INSERT some data:
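For example (the particular values here are made up for illustration):

INSERT INTO log_date (userid, time, category, subcategory, itemid, count, price)
VALUES (1002, now(), 'mobile', 'apps', 'item001', 1, 2);
INSERT INTO log_date (userid, time, category, subcategory, itemid, count, price)
VALUES (1002, now(), 'mobile', 'games', 'item002', 2, 5);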
Looks pretty much the same via cqlsh. So let's have a look from the cassandra-cli, and query all rows for userid 1002:
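The output looks roughly like this for the #1 table (a sketch with placeholder timeuuids, hex values, and timestamps, not real captured output):

list log_date;
-------------------
RowKey: 1002
=> (name=<timeuuid>:, value=, timestamp=1440000000000000)
=> (name=<timeuuid>:category, value=<hex>, timestamp=1440000000000000)
=> (name=<timeuuid>:count, value=<hex>, timestamp=1440000000000000)
=> (name=<timeuuid>:itemid, value=<hex>, timestamp=1440000000000000)
=> (name=<timeuuid>:price, value=<hex>, timestamp=1440000000000000)
=> (name=<timeuuid>:subcategory, value=<hex>, timestamp=1440000000000000)

1 Row Returned.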
Simple enough, right? We see userid 1002 as the RowKey, and our clustering column of time as a column key. Following that are all of our columns for each column key (time). And I believe your first instance generates 6 columns, as I'm pretty sure that includes the placeholder for the column key, because your PRIMARY KEY could point to an empty value (as your 2nd example key does).
But what about the 2nd version for userid 1002?
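Again a sketch with placeholder values rather than captured output; with every column in the PRIMARY KEY, each CQL row collapses into one composite column name with an empty value:

RowKey: 1002
=> (name=<timeuuid>:mobile:apps:item001:1:2:, value=, timestamp=1440000000000000)
=> (name=<timeuuid>:mobile:games:item002:2:5:, value=, timestamp=1440000000000000)

1 Row Returned.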
Two columns are returned for RowKey 1002, one for each unique combination of our column (clustering) keys, with an empty value (as mentioned above).
So what does this all mean for you? Well, a few things:
- If you think you'll ever need to update values like category or subcategory (2nd example), know that you really can't unless you DELETE and recreate the row. Although from a logging perspective, that's probably ok.
- Your data is stored together for each partition key (userid), sorted by the column (clustering) keys. If you were concerned about querying and sorting your data, it would be important to understand that you would have to query for each specific userid for sort order to make any difference.
- Cassandra caps a partition at about 2 billion columns. If your userids might exceed that, you could implement a "date bucket" as an additional partition key (say, if you knew that a userid would never exceed more than 2 billion columns in a year, or whatever); see the sketch at the end of this answer.
It looks to me like your 2nd option might be the better choice. But honestly, for what you're doing, either of them will probably work ok.
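And here is a minimal sketch of the "date bucket" idea from the last point above (the yearbucket column and its name are my own, hypothetical choice):

CREATE TABLE log_date (
userid bigint,
yearbucket int, -- e.g. 2015; bounds how many columns one partition can accumulate
time timeuuid,
category text,
subcategory text,
itemid text,
count int,
price int,
PRIMARY KEY ((userid, yearbucket), time)
);
-- queries then restrict both partition key components:
-- SELECT * FROM log_date WHERE userid = 1002 AND yearbucket = 2015;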