Is GridFS fast and reliable enough for production?

Posted 2019-01-08 03:31

Question:

I'm developing a new website and I want to use GridFS as storage for all user uploads, because it offers a lot of advantages compared to normal filesystem storage.

Benchmarks of GridFS served by nginx indicate that it's not as fast as a normal filesystem served by nginx:

Benchmark with nginx

Is anyone out there already using GridFS in a production environment, or would you use it for a new project?

Answer 1:

I use GridFS at work on one of our servers, which is part of a price-comparison website with respectable traffic stats (around 25k visitors per day). The server doesn't have much RAM (2 GB), and even the CPU isn't really fast (Core 2 Duo 1.8 GHz), but it has plenty of storage space: 10 TB (SATA) in a RAID 0 configuration. The job the server is doing is very simple:

Each product on our price-comparer has an image (there are around 10 million products according to our product db), and the server's job is to download the image, resize it, store it in GridFS, and deliver it to the visitor's browser if it's not yet present in the grid, or deliver it straight from the grid if it's already stored there. So this could be called a 'traditional CDN schema'.

We have stored and processed 4 million images on this server since it went live. The resize-and-store work is done by a simple PHP script... but for sure, a Python script or something like Java could be faster.
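For illustration, a minimal sketch of that fetch-resize-store flow in Python with pymongo, gridfs, Pillow and requests (the production code is a PHP script; the database name and function names here are made up):

```python
# Sketch of the 'traditional CDN schema' described above: serve from GridFS
# if present, otherwise download, resize, store, then serve.
# Assumptions: pymongo + gridfs + Pillow + requests; names are illustrative.
import io

import gridfs
import requests
from PIL import Image
from pymongo import MongoClient

db = MongoClient()["cdn"]
fs = gridfs.GridFS(db)  # backed by the fs.files / fs.chunks collections

def get_product_image(product_id: str, source_url: str, size=(200, 200)) -> bytes:
    """Return the resized image bytes, storing them in GridFS on first request."""
    existing = fs.find_one({"filename": product_id})
    if existing is not None:          # already in the grid: serve it directly
        return existing.read()

    raw = requests.get(source_url, timeout=10).content   # download the original
    img = Image.open(io.BytesIO(raw)).convert("RGB")
    img.thumbnail(size)                                  # resize in place
    buf = io.BytesIO()
    img.save(buf, format="JPEG")
    data = buf.getvalue()
    fs.put(data, filename=product_id, contentType="image/jpeg")  # store in the grid
    return data                                          # deliver to the browser
```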

Current data size: 11.23 GB

Current storage size: 12.5 GB

Indices: 5

Index size: 849.65 MB
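(For reference, figures like these come from MongoDB's dbStats command; a quick pymongo sketch, with the database name made up:)

```python
# Reading the same storage statistics via the dbStats command
# (pymongo sketch; the "cdn" database name is illustrative).
from pymongo import MongoClient

stats = MongoClient()["cdn"].command("dbstats")
for key in ("dataSize", "storageSize", "indexes", "indexSize"):
    print(key, stats[key])
```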

About reliability: it is very reliable. Server load stays low, the index size is fine, and queries are fast.

About speed: for sure, it is not as fast as local file storage, maybe 10% slower, but fast enough to be used in real time even when the image needs to be processed, which in our case is very PHP-dependent.

Maintenance and development time has also been reduced: it became very simple to delete a single image or multiple images, just hit the db with a simple delete command. Another interesting thing: when we rebooted our old server with local file storage (so millions of files in thousands of folders), it would sometimes hang for hours because the system was performing a file integrity check (this really took hours...). We no longer have this problem with GridFS; our images are now stored inside MongoDB's big preallocated data files (2 GB each).
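The delete case mentioned above really is a simple query per file; a hedged pymongo sketch (the database name and filenames are illustrative):

```python
# Deleting a single image, or a whole batch, with simple GridFS queries
# (pymongo sketch; database name and filenames are illustrative).
import gridfs
from pymongo import MongoClient

fs = gridfs.GridFS(MongoClient()["cdn"])

# Delete one image by filename.
for f in fs.find({"filename": "product-12345.jpg"}):
    fs.delete(f._id)   # removes the fs.files doc and all of its chunks

# Delete many images matching a pattern.
for f in fs.find({"filename": {"$regex": "^discontinued-"}}):
    fs.delete(f._id)
```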

So... in my opinion... yes, GridFS is fast and reliable enough to be used in production.



Answer 2:

As mentioned, it might not be as fast as an ordinary filesystem, but it gives you many advantages over ordinary filesystems which I think are worth giving up a bit of speed for.

Ultimately, with sharding, you might even reach a point where GridFS storage becomes the faster option, as opposed to an ordinary filesystem on a single node.
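For what it's worth, sharding GridFS usually means sharding the fs.chunks collection; a sketch of the admin commands via pymongo (the mongos address and database name are assumptions, and {files_id: 1, n: 1} is the shard key MongoDB's docs recommend for fs.chunks):

```python
# Sharding the GridFS chunks collection, as this answer alludes to.
# Sketch only: assumes a running sharded cluster; the mongos address and
# "cdn" database name are illustrative.
from pymongo import MongoClient

admin = MongoClient("mongodb://mongos-host:27017").admin
admin.command("enableSharding", "cdn")
admin.command("shardCollection", "cdn.fs.chunks", key={"files_id": 1, "n": 1})
```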



Answer 3:

mdirolf's nginx-gridfs module is great and fairly easy to set up. We're using it in production at paint.ly to serve all of the paintings, and there have been no problems so far.



Answer 4:

A heads-up on repairs for larger DBs, though: on a new system we're developing, MongoDB didn't exit cleanly, and repairing the 7 TB GridFS looks like it will take 130 hours.

Because of this, I think I'll look at switching to OpenStack Swift or Ceph. Still, up to that point it was good. And the nginx-gridfs module is sweet.



Answer 5:

I don't recommend using GridFS unless you know what you are doing. GridFS is just an abstraction layer that splits files into chunks and stores them in two collections, fs.files and fs.chunks (see the sketch after the list below). More files means more overhead. If you expect your files to be roughly the same size, not exceeding 32 MB or so, you are on the right track. Do not try to store large files in GridFS. Why?

  1. Drivers in some languages may read the whole file (i.e. all of its chunks) into memory even when you only need a small part of it.
  2. Modifying a file may touch all of its chunks and increase database load.
  3. If your data set keeps growing, you will have to decide to shard GridFS. Be careful: consistency is not guaranteed while sharding is initializing!
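To make the two-collection layout concrete, a small pymongo sketch showing the per-file overhead (the database name is illustrative; the chunk count assumes pymongo's default 255 KB chunk size):

```python
# Peeking at GridFS's two collections (fs.files and fs.chunks) to see
# where the per-file overhead comes from. pymongo sketch; names illustrative.
import gridfs
from pymongo import MongoClient

db = MongoClient()["app"]
fs = gridfs.GridFS(db)

file_id = fs.put(b"x" * 1_000_000, filename="demo.bin")

meta = db["fs.files"].find_one({"_id": file_id})   # one metadata document
n_chunks = db["fs.chunks"].count_documents({"files_id": file_id})
print(meta["length"], meta["chunkSize"], n_chunks)  # e.g. 1000000 261120 4
```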

If you are building a read-heavy project, consider storing the files directly in documents (if they are 16 MB or smaller) or choose another clustered filesystem, and link the filename/inode into your application logic.
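A minimal sketch of the store-in-the-document approach for small files (anything under the 16 MB BSON document limit), assuming pymongo; the collection name and filenames are made up:

```python
# Storing a small file (under the 16 MB BSON document limit) directly in a
# document instead of GridFS. pymongo sketch; names are illustrative.
from bson.binary import Binary
from pymongo import MongoClient

files = MongoClient()["app"]["files"]

with open("avatar.png", "rb") as fh:
    files.insert_one({
        "filename": "avatar.png",
        "contentType": "image/png",
        "data": Binary(fh.read()),  # raw bytes embedded in the document
    })

doc = files.find_one({"filename": "avatar.png"})
payload = bytes(doc["data"])  # Binary is a bytes subclass
```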

Hope this helps.