Does Logstash have a size limit for each event message?

Posted 2019-02-18 09:14

Question:

I am implementing a monitoring tool for the servers of my company's service. To do that, I am using Logstash. Our applications send their logs via a log4net UDP appender to Logstash (udp input), and then Logstash groks them and sends them to Elasticsearch. When I display my logs in Kibana, I see that some of them are truncated: for big logs, the last part is missing. So my question is: does Logstash have a size limit for each message event it receives? If yes, is it possible to increase it? I need all my logs, with none of them truncated.
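For reference, the pipeline looks roughly like this (the port, grok pattern, and Elasticsearch host below are placeholders, not our real values):

    input {
      udp {
        port => 5960    # placeholder; the port our log4net UdpAppender targets
      }
    }

    filter {
      grok {
        # placeholder pattern; ours matches the log4net layout (1.4.x-era array syntax)
        match => [ "message", "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" ]
      }
    }

    output {
      elasticsearch {
        host => "localhost"    # Logstash 1.4.x syntax; newer versions use hosts => [...]
      }
    }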

Answer 1:

For the UDP case, I think I have found the solution: increase the buffer_size parameter in the udp.rb file.

I cannot test it now, but I will tell you if it works.
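For what it's worth, buffer_size also seems to be exposed as an option on the udp input itself (the 1.4.2 docs link in another answer describes it), so a config change may work instead of patching udp.rb. A minimal sketch, with the port and value as examples only:

    input {
      udp {
        port => 5960            # placeholder port
        buffer_size => 16384    # example value; the default is 8192
      }
    }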



Answer 2:

I have tested this with Logstash 1.4.0 and Logstash 1.3.3. I found that the maximum size of an event is 4095!

So, if your logs are larger than this, you may have to split them into multiple events when you send them to Logstash.



Answer 3:

Logstash's buffer_size property is set to 8192 by default. That's why messages sent over UDP to Logstash are truncated at the 8192nd character.

Try increasing the UDP buffer_size in Logstash.
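One caveat: UDP itself caps a single datagram's payload at 65,507 bytes over IPv4, so raising buffer_size past 65536 should not help; an event larger than that cannot arrive in one datagram regardless of what Logstash is configured to read. For example (the port is a placeholder):

    input {
      udp {
        port => 5960
        buffer_size => 65536    # effectively the ceiling for a single UDP payload
      }
    }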

References:

  • https://github.com/elastic/logstash/issues/1505
  • https://github.com/elastic/logstash/issues/2111
  • http://logstash.net/docs/1.4.2/inputs/udp#buffer_size (may be outdated, check documentation version)


Tags: logstash