I am implementing a monitoring tool for my company's servers using Logstash. Our applications send their logs to Logstash via a log4net UDP appender (udp input); Logstash then groks them and sends them to Elasticsearch. When I display the logs in Kibana, I see that some logs are truncated: for big logs, the last part is missing. So my question is: does Logstash have a size limit for each message/event it receives? If so, is it possible to increase it? I need all of my logs, and none of them truncated.
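For reference, the pipeline is configured roughly like the sketch below; the port, grok pattern, and Elasticsearch host are placeholders rather than our exact settings:

```
input {
  udp {
    # Port the log4net UdpAppender sends to (placeholder value)
    port => 5960
    codec => plain { charset => "UTF-8" }
  }
}

filter {
  grok {
    # Simplified pattern; the real pattern matches the log4net conversion layout
    match => ["message", "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}"]
  }
}

output {
  elasticsearch {
    # Logstash 1.3/1.4-era option name; newer versions use hosts => [...]
    host => "localhost"
  }
}
```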
I have tested this with Logstash 1.4.0 and Logstash 1.3.3, and I found that the maximum size of an event is 4095 characters. So if your logs are larger than that, you may have to split them into multiple events at the time you send them to Logstash.
For the UDP case, I think I have found the solution: increase the buffer_size parameter in the udp.rb file. I cannot test it right now, but I will report back if it works.
Logstash's buffer_size property defaults to 8192, which is why messages sent to Logstash over UDP are truncated after 8192 bytes. Try increasing buffer_size on the Logstash udp input.
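A minimal sketch of what that change looks like on the udp input (the port and the 65536 value are example choices, not prescribed settings; note that a single UDP datagram cannot exceed roughly 65507 bytes of payload regardless of this setting):

```
input {
  udp {
    port => 5960
    # Default buffer_size is 8192 bytes; raise it so larger datagrams
    # are read in full (example value, size it to your largest log event)
    buffer_size => 65536
  }
}
```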