Executing a LOGGED BATCH warning in Cassandra logs

Posted 2019-08-29 08:46

Question:

Our Java application does batch inserts into one of the tables. That table's schema is something like:

CREATE TABLE "My_KeySpace"."my_table" (
    key text,
    column1 varint,
    column2 bigint,
    column3 text,
    column4 boolean,
    value blob,
    PRIMARY KEY (key, column1, column2, column3, column4)
) WITH CLUSTERING ORDER BY ( column1 DESC, column2 DESC, column3 ASC, column4 ASC )
AND COMPACT STORAGE
AND bloom_filter_fp_chance = 0.1
AND comment = ''
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 0
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = 'NONE'
AND caching = {
    'keys' : 'ALL',
    'rows_per_partition' : 'NONE'
}
AND compression = {
    'chunk_length_in_kb' : 64,
    'class' : 'LZ4Compressor',
    'enabled' : true
}
AND compaction = {
    'class' : 'LeveledCompactionStrategy',
    'sstable_size_in_mb' : 5
};

gc_grace_seconds = 0 is set in the above schema. Because of this, I am getting the following warning:

2019-02-05 01:59:53.087 WARN   [SharedPool-Worker-5 - org.apache.cassandra.cql3.statements.BatchStatement:97] Executing a LOGGED BATCH on table [My_KeySpace.my_table], configured with a gc_grace_seconds of 0. The gc_grace_seconds is used to TTL batchlog entries, so setting gc_grace_seconds too low on tables involved in an atomic batch might cause batchlog entries to expire before being replayed.

I have looked at the Cassandra source code, and this warning is there for obvious reasons, at: this line

Is there any solution that doesn't require changing the batch code in the application? Should I increase gc_grace_seconds?

Answer 1:

In Cassandra, batches aren't a way to optimize inserts into the database - they are mostly used for coordinating writes into multiple tables, keeping denormalized data in sync, etc. If you use batches for inserts into multiple partitions, you get even worse performance.
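
For context, here is a minimal sketch of the use case where a LOGGED batch does make sense: one logical write kept atomic across two denormalized tables. This assumes the DataStax Java driver 3.x; the contact point, the keyspace demo_ks, and the tables events_by_user and events_by_id are hypothetical and not part of the original question.

import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class LoggedBatchSketch {
    public static void main(String[] args) {
        // Hypothetical contact point; adjust for your cluster.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {

            // Two hypothetical denormalized tables holding the same event.
            PreparedStatement byUser = session.prepare(
                "INSERT INTO demo_ks.events_by_user (user_id, event_id, payload) VALUES (?, ?, ?)");
            PreparedStatement byEvent = session.prepare(
                "INSERT INTO demo_ks.events_by_id (event_id, user_id, payload) VALUES (?, ?, ?)");

            // A LOGGED batch buys atomicity across the two tables,
            // not insert throughput.
            BatchStatement batch = new BatchStatement(BatchStatement.Type.LOGGED);
            batch.add(byUser.bind("user-1", "event-42", "..."));
            batch.add(byEvent.bind("event-42", "user-1", "..."));
            session.execute(batch);
        }
    }
}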

You can get better throughput for inserts by executing commands asynchronously (via executeAsync), and/or by using batches only for inserts that target the same partition.
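
As a sketch of that advice against the table from the question (again assuming the DataStax Java driver 3.x; the contact point, the number of rows, and the generated values are made up for illustration), the insert below is prepared once and the writes are fired with executeAsync instead of being grouped into a multi-partition LOGGED batch:

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;

import java.math.BigInteger;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class AsyncInsertSketch {
    public static void main(String[] args) {
        // Hypothetical contact point; the keyspace name is quoted
        // because it is case-sensitive.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("\"My_KeySpace\"")) {

            PreparedStatement insert = session.prepare(
                "INSERT INTO my_table (key, column1, column2, column3, column4, value) "
                    + "VALUES (?, ?, ?, ?, ?, ?)");

            // Fire each insert asynchronously instead of batching unrelated partitions.
            List<ResultSetFuture> futures = new ArrayList<>();
            for (int i = 0; i < 100; i++) {
                BoundStatement bound = insert.bind(
                    "key-" + i,                             // key     (text, partition key)
                    BigInteger.valueOf(i),                  // column1 (varint)
                    (long) i,                               // column2 (bigint)
                    "c3-" + i,                              // column3 (text)
                    true,                                   // column4 (boolean)
                    ByteBuffer.wrap(new byte[] {1, 2, 3})); // value   (blob)
                futures.add(session.executeAsync(bound));
            }

            // Wait for all writes to finish; a production version would also
            // cap the number of in-flight requests.
            for (ResultSetFuture future : futures) {
                future.getUninterruptibly();
            }
        }
    }
}

Note that batches targeting a single partition (the same key) can be sent as UNLOGGED batches, which bypass the batchlog and therefore don't trigger this gc_grace_seconds warning at all.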