What's a suitable storage (RDBMS, NoSQL) for caching large XML responses?

Posted 2019-08-07 10:13

We're in the process of building an internal, Java-based RESTful web services application that exposes domain-specific data in XML format. We want to supplement the architecture and improve performance by leveraging a cache store. We expect to host the cache on separate but collocated servers, and since the web services are Java/Grails, a Java or HTTP API to the cache would be ideal.

As requests come in, unique URIs and their responses would be cached using a simple key/value convention, for example:

KEY                                            VALUE
http://prod1/financials/reports/JAN/2007   --> XML response of 50 MB
http://prod1/legal/sow/9004                --> XML response of 250 KB

Response values for a single request can be quite large, up to about 200 MB, but could be as small as 1 KB. The number of requests per day is small: no more than 1,000, averaging around 250. We don't have a large number of consumers; again, it's an internal app.
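To make the key/value convention concrete, here is a minimal sketch of the caching interface in Java, using a plain in-memory map as a stand-in for the external cache store (the class and method names are hypothetical, not from an actual implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: cache XML responses keyed by the request URI.
// A real deployment would back this with the external cache store
// instead of an in-process map.
public class ResponseCache {
    private final Map<String, String> store = new ConcurrentHashMap<>();

    // Store the XML response under its request URI.
    public void put(String uri, String xml) {
        store.put(uri, xml);
    }

    // Return the cached XML, or null on a cache miss.
    public String get(String uri) {
        return store.get(uri);
    }
}
```

On a miss, the web service would build the XML response, `put` it, and return it; subsequent requests for the same URI are served straight from the cache.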

We started looking at MongoDB as a potential cache store, but given that MongoDB has a maximum document size of 16 MB, we did not feel it was the best fit.

Based on the limited details I provided, any suggestions on other types of stores that could be suitable in this situation?

3 Answers
forever°为你锁心
#2 · 2019-08-07 10:50

Twitter's engineering team just blogged about their SpiderDuck project, which does something like what you're describing. They use Cassandra and Scribe+HDFS for their backends.

http://engineering.twitter.com/2011/11/spiderduck-twitters-real-time-url.html

叛逆
#3 · 2019-08-07 10:56

The way I understand your question, you basically want to cache the files, i.e. you don't need to inspect the files' contents, right?

In that case, you can use MongoDB's GridFS to cache the XML as a file. This way, you can smoothly stream the file in and out of the database. You could use the URI as the 'file name', and that should do the job.

There are no (reasonable) file size limits, since GridFS splits files into chunks rather than storing them in a single document, and it is supported by most, if not all, of the drivers.

叛逆
#4 · 2019-08-07 10:58

The simplest solution here is to cache these pieces of data in a file system. You can use tmpfs to keep everything in main memory, or any normal file system if you want the cache to be larger than the memory you have. Don't worry: even in the latter case, the OS kernel will efficiently keep frequently used files cached in main memory. You will still have to delete old files, e.g. via a cron job on Linux.
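Since the application is Java, here is a rough sketch of this approach using only the standard library: the URI is hashed into a safe file name, and a prune method does what the cron job above would do. The class name, cache directory, and expiry window are placeholders, not a definitive implementation:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.HexFormat;

public class FileSystemCache {
    private final Path dir;

    public FileSystemCache(Path dir) throws IOException {
        this.dir = Files.createDirectories(dir); // e.g. a path on tmpfs
    }

    // Map a URI to a filesystem-safe name by hashing it.
    private Path pathFor(String uri) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(uri.getBytes(StandardCharsets.UTF_8));
            return dir.resolve(HexFormat.of().formatHex(digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    // Write the XML response bytes under the hashed URI.
    public void put(String uri, byte[] xml) throws IOException {
        Files.write(pathFor(uri), xml);
    }

    // Return the cached bytes, or null on a cache miss.
    public byte[] get(String uri) throws IOException {
        Path p = pathFor(uri);
        return Files.exists(p) ? Files.readAllBytes(p) : null;
    }

    // What the cron job would do: delete entries older than maxAgeDays.
    public void pruneOlderThan(long maxAgeDays) throws IOException {
        Instant cutoff = Instant.now().minus(maxAgeDays, ChronoUnit.DAYS);
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir)) {
            for (Path p : files) {
                if (Files.getLastModifiedTime(p).toInstant().isBefore(cutoff)) {
                    Files.delete(p);
                }
            }
        }
    }
}
```

Hashing the URI sidesteps characters like `/` and `:` that are not valid in file names, at the cost of making the cache directory unreadable to humans; keeping a small index file alongside it would restore that if needed.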

It may seem like an old-school solution, but it can be simpler to implement and less error-prone than many alternatives.
