Storing time-series data, relational or non?

Posted 2019-01-15 23:22

I am creating a system which polls devices for data on varying metrics, such as CPU utilisation, disk utilisation and temperature, at (probably) 5-minute intervals using SNMP. The ultimate goal is to provide visualisations to a user of the system in the form of time-series graphs.

I have looked at using RRDTool in the past, but rejected it as storing the captured data indefinitely is important to my project, and I want higher-level, more flexible access to the captured data. So my question is really:

Which is better with regard to performance when querying data for graphing: a relational database (such as MySQL or PostgreSQL), or a non-relational/NoSQL database (such as MongoDB or Redis)?

Relational

Given a relational database, I would use a data_instances table storing every data point captured for every metric being measured for all devices, with the following fields:

Fields: id, fk_to_device, fk_to_metric, metric_value, timestamp

When I want to draw a graph for a particular metric on a particular device, I must query this single table, filtering out the other devices and the other metrics being analysed for this device:

SELECT metric_value, timestamp FROM data_instances
    WHERE fk_to_device=1 AND fk_to_metric=2
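
A minimal sketch of this layout (using Python's built-in sqlite3 as a stand-in for MySQL/PostgreSQL; the table and column names follow the ones above):

    import sqlite3

    conn = sqlite3.connect("metrics.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS data_instances (
            id           INTEGER PRIMARY KEY,
            fk_to_device INTEGER NOT NULL,
            fk_to_metric INTEGER NOT NULL,
            metric_value REAL    NOT NULL,
            timestamp    INTEGER NOT NULL  -- Unix epoch seconds
        )
    """)

    # The graphing query from above, parameterised for device 1 / metric 2.
    rows = conn.execute(
        "SELECT metric_value, timestamp FROM data_instances"
        " WHERE fk_to_device = ? AND fk_to_metric = ?",
        (1, 2),
    ).fetchall()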

The number of rows in this table would be:

d * m_d * f * t

where d is the number of devices, m_d is the number of metrics being recorded per device, f is the polling frequency (samples per unit time) and t is the total time the system has been collecting data.

For a user recording 10 metrics for 3 devices every 5 minutes for a year, we would have roughly 3.2 million records.
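
Working those numbers through the formula (values for d, m_d, f and t as in the example above, with f expressed in samples per hour):

    d, m_d = 3, 10          # devices, metrics per device
    f = 12                  # samples per hour (one every 5 minutes)
    t = 24 * 365            # hours in a year
    print(d * m_d * f * t)  # 3,153,600 rows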

Indexes

Without indexes on fk_to_device and fk_to_metric, scanning this continuously growing table would take too long. So indexing those fields, and also timestamp (for creating graphs over localised periods), is a requirement.
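
Continuing the SQLite sketch above, one composite index can serve both equality filters and still allow timestamp range scans for localised graphs:

    # One index covering both filters plus the time dimension: the query
    # narrows to a (device, metric) series, then range-scans by timestamp.
    conn.execute("""
        CREATE INDEX IF NOT EXISTS idx_device_metric_ts
        ON data_instances (fk_to_device, fk_to_metric, timestamp)
    """)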

Non-Relational (NoSQL)

MongoDB has the concept of a collection; unlike tables, collections can be created programmatically without prior setup. With these I could partition the storage of data by device, or even by each metric recorded for each device.

I have no experience with NoSQL and do not know whether such stores provide any query-performance-enhancing features such as indexing; however, the previous paragraph proposes doing most of the traditional relational query work through the structure under which the data is stored in NoSQL.
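
For what it's worth, MongoDB does provide secondary indexes. A sketch with pymongo, using one collection per device as proposed above (the database, collection and field names here are invented for illustration):

    from pymongo import ASCENDING, MongoClient

    client = MongoClient()
    db = client["monitoring"]

    # One collection per device; it is created implicitly on first insert.
    coll = db["device_1"]
    coll.create_index([("metric", ASCENDING), ("ts", ASCENDING)])

    coll.insert_one({"metric": "cpu_util", "ts": 1547593320, "value": 42.5})

    # All CPU samples for this device in a time window, oldest first.
    points = coll.find(
        {"metric": "cpu_util", "ts": {"$gte": 1547500000, "$lt": 1547600000}}
    ).sort("ts", ASCENDING)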

Undecided

Would a relational solution with correct indexing slow to a crawl within a year? Or does the collection-based structure of NoSQL approaches (which matches my mental model of the stored data) provide a noticeable benefit?

10 Answers
虎瘦雄心在 · 2019-01-16 00:14

This is a problem we've had to solve at ApiAxle. We wrote up a blog post on how we did it using Redis. It hasn't been out there for very long but it's proving to be effective.

I've also used RRDTool for another project which was excellent.

ゆ 、 Hurt° · 2019-01-16 00:19

Create a file and name it 1_2.data. Weird idea? Here's what you get:

  • You save up to 50% of the space because you don't need to repeat the fk_to_device and fk_to_metric values for every data point.
  • You save even more space because you don't need any indexes.
  • Appending (timestamp, metric_value) pairs to the file gives you ordering by timestamp for free (assuming your sources don't send out-of-order data for a device).

=> Queries by timestamp run amazingly fast because you can use binary search to find the right place in the file to read from.
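
A sketch of this idea in Python, with fixed-width records so the binary search can seek directly by record offset (the 16-byte record layout is an assumption; any fixed-size encoding works):

    import os
    import struct

    REC = struct.Struct("<qd")  # int64 timestamp + float64 value = 16 bytes

    def append(path, ts, value):
        # Appending keeps the file ordered, given in-order sources.
        with open(path, "ab") as f:
            f.write(REC.pack(ts, value))

    def first_at_or_after(path, ts):
        # Binary search over record offsets: index of first record >= ts.
        with open(path, "rb") as f:
            lo, hi = 0, os.path.getsize(path) // REC.size
            while lo < hi:
                mid = (lo + hi) // 2
                f.seek(mid * REC.size)
                rec_ts, _ = REC.unpack(f.read(REC.size))
                if rec_ts < ts:
                    lo = mid + 1
                else:
                    hi = mid
            return lo  # seek to lo * REC.size and read forward from here

    append("1_2.data", 1547593320, 42.5)
    print(first_at_or_after("1_2.data", 1547593000))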

If you want it optimised even further, start thinking about splitting your files, like this:

  • 1_2_january2014.data
  • 1_2_february2014.data
  • 1_2_march2014.data

Or use kdb+ from http://kx.com, because they do all this for you :) Column-oriented storage is what may help you here.

There is also a cloud-based column-oriented solution popping up, so you may want to have a look at http://timeseries.guru

chillily · 2019-01-16 00:21

I found the above answers very interesting. I'll try to add a couple more considerations here.

1) Data aging

Time-series management usually needs aging policies. A typical scenario (e.g. monitoring a server's CPU) requires storing:

  • 1-second raw samples for a short period (e.g. 24 hours)

  • 5-minute aggregated samples for a medium period (e.g. 1 week)

  • 1-hour aggregates beyond that (e.g. up to 1 year)

Relational models can certainly manage this appropriately (my company implemented massive centralised databases for some large customers with tens of thousands of data series), but the new breed of data stores adds interesting functionality worth exploring, like:

  • automated data purging (see Redis' EXPIRE command and the sketch after this list)

  • multidimensional aggregations (e.g. map-reduce jobs a-la-Splunk)
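
As a sketch of the purging idea with redis-py (EXPIRE is a real Redis command; the per-hour key naming scheme is invented for this example):

    import redis

    r = redis.Redis()

    # Raw 1-second samples go into a per-hour key that expires after
    # 24 hours, so old raw data purges itself; coarser aggregates would
    # be rolled up into longer-lived keys separately.
    key = "raw:device1:cpu:2019011600"
    r.rpush(key, "1547593320:42.5")
    r.expire(key, 24 * 3600)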

2) Real-time collection

Even more importantly, some non-relational data stores are inherently distributed and allow much more efficient real-time (or near-real-time) data collection, which can be a problem for an RDBMS because of hotspots (maintaining indexes while inserting into a single table). In the RDBMS space this is typically solved by reverting to batch import procedures (we managed it this way in the past), while NoSQL technologies have succeeded at massive real-time collection and aggregation (see Splunk, for example, mentioned in previous replies).

一夜七次 · 2019-01-16 00:21

I face similar requirements regularly, and have recently started using Zabbix to gather and store this type of data. Zabbix has its own graphing capability, but it's easy enough to extract the data out of Zabbix's database and process it however you like. If you haven't already checked Zabbix out, you might find it worth your time to do so.
