Question:
I read notes about Lucene being limited to 2 GB per document. Are there any additional limitations on the size of documents that can be indexed in Elasticsearch?
Answer 1:
Lucene internally uses a byte buffer that is addressed with 32-bit integers. By definition this limits the size of a single document, so 2 GB is the theoretical maximum.
In Elasticsearch:
There is a maximum HTTP request size in the Elasticsearch code on GitHub, and it is checked against Integer.MAX_VALUE, or 2^31-1 bytes. So, effectively, 2 GB is also the maximum document size for bulk indexing over HTTP. Note as well that Elasticsearch does not start processing an HTTP request until it has been received in full.
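To make that ceiling concrete, here is a minimal client-side sketch, assuming the request body has already been serialized to a file; the only fixed fact in it is the Integer.MAX_VALUE bound, and the class and method names are hypothetical. In practice the configurable http.max_content_length setting (100 MB by default) will reject a request long before this hard limit is reached.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class PayloadGuard {
        // Signed 32-bit addressing: 2^31 - 1 = 2,147,483,647 bytes, just under 2 GiB.
        private static final long MAX_HTTP_BODY_BYTES = Integer.MAX_VALUE;

        // Returns true if the serialized bulk body fits under the 2 GB HTTP
        // ceiling; otherwise the caller should split it into smaller requests.
        static boolean fitsInOneRequest(Path bodyFile) throws IOException {
            return Files.size(bodyFile) <= MAX_HTTP_BODY_BYTES;
        }
    }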
Good Practices:
- Do not use a very large Java heap if you can help it: set it only as large as necessary (ideally no more than half of the machine's RAM) to hold the overall maximum working set size for your usage of Elasticsearch. This leaves the remaining (hopefully sizable) RAM for the OS to manage for I/O caching.
- On the client side, always use the bulk API, which indexes multiple documents in one request, and experiment with the right number of documents to send with each bulk request. The optimal size depends on many factors, but err in the direction of too few rather than too many documents per request. Use concurrent bulk requests with client-side threads or separate asynchronous requests; see the sketch after this list.
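As a sketch of the bulk pattern mentioned above, here is a minimal example using the Java REST high-level client (package locations vary by Elasticsearch version, and the index name, document shape, and batch size of 500 are illustrative assumptions, not recommendations):

    import org.elasticsearch.action.bulk.BulkRequest;
    import org.elasticsearch.action.bulk.BulkResponse;
    import org.elasticsearch.action.index.IndexRequest;
    import org.elasticsearch.client.RequestOptions;
    import org.elasticsearch.client.RestHighLevelClient;
    import org.elasticsearch.common.xcontent.XContentType;

    import java.io.IOException;
    import java.util.List;

    public class BulkIndexer {
        // Illustrative starting point; tune empirically as advised above.
        private static final int BATCH_SIZE = 500;

        // Indexes jsonDocs into indexName in batches of BATCH_SIZE,
        // rather than one HTTP round trip per document.
        static void indexAll(RestHighLevelClient client, String indexName,
                             List<String> jsonDocs) throws IOException {
            BulkRequest bulk = new BulkRequest();
            for (String doc : jsonDocs) {
                bulk.add(new IndexRequest(indexName).source(doc, XContentType.JSON));
                if (bulk.numberOfActions() >= BATCH_SIZE) {
                    BulkResponse resp = client.bulk(bulk, RequestOptions.DEFAULT);
                    if (resp.hasFailures()) {
                        System.err.println(resp.buildFailureMessage());
                    }
                    bulk = new BulkRequest();
                }
            }
            if (bulk.numberOfActions() > 0) { // flush the final partial batch
                client.bulk(bulk, RequestOptions.DEFAULT);
            }
        }
    }

For higher throughput, run several such loops concurrently from client-side threads, or use the client's BulkProcessor, which handles batching, flushing, and concurrency automatically.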
For further study, refer to these links:
1) Performance considerations for elasticsearch indexing
2) Document maximum size for bulk indexing over HTTP
Tags:
elasticsearch