Compressing floating point data

Posted 2019-03-08 11:56

Are there any lossless compression methods that can be applied to floating-point time-series data and that significantly outperform, say, writing the data as binary to a file and running it through gzip?

Reduction of precision might be acceptable, but it must happen in a controlled way (i.e. I must be able to set a bound on how many digits are kept).
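One standard way to make such a precision bound explicit is to scale the values to integers at a chosen decimal resolution and delta-encode them; for a smooth series the deltas are small and compress well afterwards. A minimal sketch of the idea (an illustration under these assumptions, not an existing tool):

```python
# Quantize a smooth series to a fixed decimal resolution and
# delta-encode it. The per-sample absolute error is bounded by
# 0.5 / scale (illustration only, not a production codec).
import math

def quantize_delta(values, digits):
    scale = 10 ** digits
    ints = [round(v * scale) for v in values]
    # First value verbatim, then successive differences.
    return [ints[0]] + [b - a for a, b in zip(ints, ints[1:])]

def dequantize_delta(deltas, digits):
    scale = 10 ** digits
    total, out = 0, []
    for d in deltas:
        total += d            # undo the delta encoding
        out.append(total / scale)
    return out

series = [math.sin(i / 100.0) for i in range(1000)]
deltas = quantize_delta(series, digits=6)
restored = dequantize_delta(deltas, digits=6)
```

The integer deltas can then be fed to any general-purpose entropy coder (gzip, varint + gzip, etc.); the controlled loss happens entirely in the quantization step.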

I am working with some large data files that are series of doubles describing a function of time, so consecutive values are correlated. I don't generally need the full double precision, but I might need more than float.

Since there are specialized lossless methods for images/audio, I was wondering if anything specialized exists for this situation.

Clarification: I am looking for existing, practical tools rather than a paper describing how to implement one. Something comparable to gzip in speed would be excellent.
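For reference, the gzip baseline mentioned above can be sketched with only the Python standard library; XOR-ing each 64-bit word with its predecessor before compressing is a common trick for correlated doubles, since consecutive values share their high-order bits and the XORed stream is full of zero runs (a sketch for comparison, not a benchmark):

```python
import gzip
import math
import struct

# Smooth, correlated time series (a stand-in for the real data).
values = [math.sin(i / 100.0) for i in range(10_000)]

# Baseline: pack as raw little-endian doubles, then gzip.
raw = struct.pack(f"<{len(values)}d", *values)
baseline = gzip.compress(raw)

# Variant: XOR each 64-bit word with its predecessor before gzip.
# Consecutive correlated doubles share high-order bits, so the
# XORed stream contains long runs of zero bytes for gzip to exploit.
words = struct.unpack(f"<{len(values)}Q", raw)
xored = [words[0]] + [a ^ b for a, b in zip(words[1:], words)]
variant = gzip.compress(struct.pack(f"<{len(values)}Q", *xored))

print(len(raw), len(baseline), len(variant))
```

Both streams decompress back to the identical bit pattern, so this stays strictly lossless; how much the XOR step helps depends on how smooth the series is.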

7 Answers
贼婆χ · 2019-03-08 12:48

Since you're asking for existing tools, zfp may do the trick. It is LLNL's compressor for floating-point arrays, and its fixed-accuracy mode lets you set an absolute error tolerance, which matches your controlled-precision requirement.
