Fast write performance, even if reads are very slow

Posted 2019-08-28 20:19

Question:

Sorry if there's already an answer for this; I searched and didn't find my exact scenario.

Once again, this is a question like "What is the fastest/best-performing DB?". But since the answer depends on the scenario, here is mine: I want to write many logs to a DB, thousands per second, but I will rarely read them. Indeed, 99.99% of them will never be read again, but once in a while I will need to read some. The schema is not complex, just key/value. Occasionally I will read by value, and I won't care at all if that read takes minutes. The correctness of the read is critical, but not its performance.

So far it seems the best solutions are things like MongoDB, Cassandra... and perhaps the best option is DynamoDB?

Answer 1:

With any DBMS, I would say: switch to the lowest isolation level and use no indexes. Combine that with a good storage system, maybe RAID 0 with SSDs, and you get the fastest writes possible.
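As a minimal sketch of the "fewest restrictions" idea, here is what it might look like with SQLite (an assumption on my part, since the answer names no specific DBMS): an unindexed key/value table, durability-related settings relaxed via PRAGMAs, and many rows inserted in a single transaction.

```python
import sqlite3

# Illustration only: SQLite stands in for the unnamed DBMS.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA synchronous = OFF")      # don't fsync on every commit
conn.execute("PRAGMA journal_mode = MEMORY")  # keep the rollback journal in RAM
conn.execute("CREATE TABLE logs (k TEXT, v TEXT)")  # no index, no constraints

# One transaction for many rows: far fewer disk flushes than
# committing each INSERT individually.
rows = [(f"key{i}", f"value{i}") for i in range(10_000)]
with conn:
    conn.executemany("INSERT INTO logs VALUES (?, ?)", rows)

print(conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0])  # 10000
```

The trade-off is the one the answer implies: with synchronous writes off, a crash can lose recently committed rows, which may be acceptable for logs that are almost never read.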

It's hard to say which DBMS is best. Usually you want the DBMS that excels at your particular workload, but here you need one that simply writes data with the fewest restrictions; I've heard MySQL can be great at this.



Answer 2:

If fast writes are what you're after, you have a few options. Assuming you will be the one maintaining the DB, you can write the inserts to memory and flush them once they reach a certain size. That way you aren't hitting the disk as often.
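The buffer-then-flush approach can be sketched as follows. This is a toy illustration, not anyone's production code: the class name `BufferedLogWriter`, the flush threshold, and the use of SQLite as the backing store are all assumptions.

```python
import sqlite3

class BufferedLogWriter:
    """Buffer rows in memory and write them to the DB in bulk."""

    def __init__(self, conn, flush_at=1000):
        self.conn = conn
        self.flush_at = flush_at  # made-up threshold; tune for your workload
        self.buffer = []

    def write(self, key, value):
        self.buffer.append((key, value))
        if len(self.buffer) >= self.flush_at:
            self.flush()

    def flush(self):
        if self.buffer:
            with self.conn:  # one transaction per batch, not per row
                self.conn.executemany(
                    "INSERT INTO logs VALUES (?, ?)", self.buffer)
            self.buffer.clear()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (k TEXT, v TEXT)")
writer = BufferedLogWriter(conn, flush_at=500)
for i in range(2500):
    writer.write(f"k{i}", f"v{i}")
writer.flush()  # flush any remaining buffered rows
print(conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0])  # 2500
```

The catch, of course, is that rows still sitting in the buffer are lost if the process dies before a flush, so the threshold is a knob between throughput and how much you can afford to lose.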

If I'm not mistaken, MongoDB does that already; in addition, disabling journaling can drastically increase write performance, which is exactly what you're going for (at the cost of losing recent writes if the server crashes).

Either way, caching and bulk inserting are the way to go with any database.