Architecture for high-volume data logging: database or files?

Posted 2019-06-24 11:49

I'm working on a Python app that I want to scale to about 150 writes per second, spread across roughly 50 different sources.

Is MongoDB a good candidate for this? I'm torn between writing to a database and just keeping a log file per source and parsing them separately.

Any other suggestions for logging a lot of data?

1 Answer
乱世女痞 · answered 2019-06-24 12:31

I would say MongoDB is a very good fit for collecting logs, because:

  1. MongoDB has very fast writes.
  2. Logs are usually not critical, so it's acceptable to lose a few of them if the server fails. That means you can run MongoDB without journaling to avoid the extra write overhead (see the write-path sketch after this list).
  3. In addition, you can use sharding to increase write throughput, and at the same time move the oldest logs into a separate collection or out to the file system.
  4. You can easily export data from the database to JSON/CSV.
  5. Once everything is in a database, you can query it to find exactly the log entries you need (see the query example at the end of this answer).
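A minimal sketch of that write path with pymongo is below. The database name, collection name, field names, and the 30-day TTL are assumptions for illustration; the point is the unacknowledged write concern (w=0), which trades a small risk of lost entries for the cheapest possible inserts, in the same spirit as running without journaling.

```python
from datetime import datetime, timezone

from pymongo import ASCENDING, MongoClient
from pymongo.write_concern import WriteConcern

# Hypothetical connection string and names; adjust to your deployment.
client = MongoClient("mongodb://localhost:27017")
db = client["logging_demo"]

# Create a TTL index once (with the default, acknowledged write concern) so
# MongoDB expires old entries on its own instead of you archiving them by hand.
db["app_logs"].create_index([("ts", ASCENDING)], expireAfterSeconds=30 * 24 * 3600)

# Use an unacknowledged write concern (w=0) on the hot write path: the driver
# does not wait for the server to confirm each insert, so a few entries may be
# lost on failure, which is usually acceptable for logs.
logs = db.get_collection("app_logs", write_concern=WriteConcern(w=0))

def write_log(source_id: str, level: str, message: str) -> None:
    """Insert one log document; ~150 of these per second is a light load."""
    logs.insert_one({
        "ts": datetime.now(timezone.utc),   # indexed above for TTL expiry and sorting
        "source": source_id,                # one of the ~50 sources
        "level": level,
        "message": message,
    })

write_log("sensor-17", "INFO", "heartbeat ok")
```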

So, in my opinion, MongoDB is a very good fit for something like logs. You don't need to manage a lot of log files in the file system; MongoDB does that for you.
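To illustrate the querying point, here is a hedged sketch against the same hypothetical collection and schema used in the write sketch above; for the JSON/CSV export mentioned in the list, the mongoexport command-line tool can dump the same collection.

```python
# Pull the 20 most recent error-level entries for one source, newest first
# (field names follow the hypothetical schema from the write sketch).
recent_errors = (
    logs.find({"source": "sensor-17", "level": "ERROR"})
        .sort("ts", -1)
        .limit(20)
)
for doc in recent_errors:
    print(doc["ts"], doc["source"], doc["message"])
```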
