I noticed that Elasticsearch consumed over 30 GB of disk space overnight. By comparison, the total size of all the logs I wanted to index is only 5 GB... well, not even that really, probably more like 2.5-3 GB. Is there any reason for this, and is there a way to re-configure it? I'm running the ELK stack.
There are a number of reasons why the data inside of Elasticsearch would be much larger than the source data. Generally speaking, Logstash and Lucene are both working to add structure to data that is otherwise relatively unstructured. This carries some overhead.
If you're working with a source of 3 GB and your indexed data is 30 GB, that's a multiple of about 10x over your source data. That's big, but not necessarily unheard of. If you're including the size of replicas in that measurement, then 30 GB could be perfectly reasonable. Based on my own experience and intuition, I might expect something in the 3–5x range relative to source data, depending on the kind of data, and the storage and analysis settings you're using in Elasticsearch.
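For example, under one plausible breakdown (the exact factors here are assumptions): 3 GB of raw logs × a 5x indexing expansion × 2 copies (one primary plus one replica) ≈ 30 GB on disk.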
Here are four different settings you can experiment with when trying to slim down an Elasticsearch index.
The `_source` Field
Elasticsearch keeps a copy of the raw original JSON of each incoming document. It's useful if you ever want to reconstruct the original contents of your index, or for match highlighting in your search results, but it definitely adds up. You may want to create an index template which disables the `_source` field in your index mappings.
Disabling the `_source` field may be the single biggest improvement in disk usage.
Documentation: Elasticsearch _source field
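A minimal sketch of such a template, assuming the pre-5.x template syntax, a default Logstash index pattern of logstash-*, and a made-up template name (adjust all three to your setup):

```json
PUT /_template/slim_source
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "_source": { "enabled": false }
    }
  }
}
```

Keep in mind that without `_source` you lose the ability to see the original document or reindex out of Elasticsearch later, so weigh the savings against that.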
Individual stored fields
Similar to, but separate from, the `_source` field, you can control whether to store the values of a field on a per-field basis. This is pretty straightforward, and mentioned a few times in the Mapping documentation for core types. If you want a very small index, then you should only store the bare minimum of fields that you need returned in your search responses. That could be as little as just the document ID, to correlate with a primary data store.
Documentation: Elasticsearch mappings for core types
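For illustration, a sketch combining per-field storage with the `_source` setting above; the type name logs and the field names are hypothetical, and the syntax again assumes a pre-5.x cluster:

```json
PUT /_template/slim_fields
{
  "template": "logstash-*",
  "mappings": {
    "logs": {
      "_source": { "enabled": false },
      "properties": {
        "message": { "type": "string", "store": true },
        "host":    { "type": "string", "store": false }
      }
    }
  }
}
```

With this mapping, only `message` can be returned in search hits; `host` remains searchable but is not retrievable.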
The `_all` Field
Sometimes you want to find documents that match a given term, and you don't really care which field that term occurs in. For that case, Elasticsearch has a special `_all` field, into which it shoves all the terms in all the fields of your documents. It's convenient, but if your searches are fairly well targeted to specific fields, and you're not trying to loosely match anything and everything anywhere in your index, then you can get away with not using the `_all` field.
Documentation: Elasticsearch _all field
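If that applies to you, a sketch of disabling it, assuming a pre-6.x cluster (where the `_all` field still exists) and the same hypothetical template naming as above:

```json
PUT /_template/disable_all
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "_all": { "enabled": false }
    }
  }
}
```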
Analysis in general
This is back to the subject of Lucene adding structure to your otherwise unstructured data. Any fields which you intend to search against will need to be analyzed. This is the process of breaking a blob of unstructured text into tokens, and analyzing each token to normalize it or expand it into many forms. These tokens are inserted into a dictionary, and mappings between the terms and the documents (and fields) they appear in are also maintained.
This all takes space, and for some fields, you may not care to analyze them. Skipping analysis also saves some CPU time when indexing. Some kinds of analysis can really inflate your total terms, like using an n-gram analyzer with liberal settings, which breaks down your original terms into many smaller ones.
Documentation: Introduction to Analysis and Analyzers
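As a sketch in the pre-5.x mapping syntax, where string fields are analyzed by default, you could mark fields that you only ever filter on exactly (the field name `status` here is hypothetical) as not_analyzed:

```json
PUT /_template/skip_analysis
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "status": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}
```

The values are still indexed as single terms for exact matching and aggregations, but Lucene skips tokenization and normalization for that field.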
More reading
As the previous commenter explained in detail, there are many reasons why the size of log data could increase after indexing into Elasticsearch. The blog post he linked to is now dead because I killed my personal blog, but it now lives on the elastic.co website: https://www.elastic.co/blog/elasticsearch-storage-the-true-story.