Parse nginx ingress logs in fluentd

Posted 2019-04-08 18:03

I'd like to parse nginx ingress logs using Fluentd in Kubernetes. That was quite easy in Logstash, but I'm confused by the Fluentd syntax.

Right now I have the following rules:

<source>
  type tail
  path /var/log/containers/*.log
  pos_file /var/log/es-containers.log.pos
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  tag kubernetes.*
  format json
  read_from_head true
  keep_time_key true
</source>

<filter kubernetes.**>
  type kubernetes_metadata
</filter>

As a result I get log lines like this one, but they are unparsed:

127.0.0.1 - [127.0.0.1] - user [27/Sep/2016:18:35:23 +0000] "POST /elasticsearch/_msearch?timeout=0&ignore_unavailable=true&preference=1475000747571 HTTP/2.0" 200 37593 "http://localhost/app/kibana" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Centos Chromium/52.0.2743.116 Chrome/52.0.2743.116 Safari/537.36" 951 0.408 10.64.92.20:5601 37377 0.407 200

I'd like to apply filter rules so that I can search by IP address, HTTP method, etc. in Kibana. How can I implement that?

4 Answers
叛逆 · answered 2019-04-08 18:37
# drop Fluentd's own internal events
<match fluent.**>
  @type null
</match>

# tail the nginx ingress controller container logs;
# with "format none" each raw docker json-file line ends up in the "message" field
<source>
  @type tail
  path /var/log/containers/nginx*.log
  pos_file /data/fluentd/pos/fluentd-nginxlog1.log.pos
  tag nginxlogs
  format none
  read_from_head true
</source>

# first pass: unwrap the docker json-file record so the "log" field becomes available
<filter nginxlogs>
  @type parser
  format json
  key_name message
</filter>


# second pass: parse the nginx ingress access line held in the "log" field
<filter nginxlogs>
  @type parser
  format /^(?<host>[^ ]*) (?<domain>[^ ]*) \[(?<x_forwarded_for>[^\]]*)\] (?<server_port>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+[^\"])(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*) "(?<referer>[^\"]*)" "(?<agent>[^\"]*)" (?<request_length>[^ ]*) (?<request_time>[^ ]*) (?:\[(?<proxy_upstream_name>[^\]]*)\] )?(?<upstream_addr>[^ ]*) (?<upstream_response_length>[^ ]*) (?<upstream_response_time>[^ ]*) (?<upstream_status>[^ ]*) \w*$/
  time_format %d/%b/%Y:%H:%M:%S %z
  key_name log
#  types server_port:integer,code:integer,size:integer,request_length:integer,request_time:float,upstream_response_length:integer,upstream_response_time:float,upstream_status:integer
</filter>

# print parsed records to stdout so you can verify the fields
<match nginxlogs>
  @type stdout
</match>
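
If the end goal is searching these fields in Kibana, the stdout match above is just for checking the parse. A rough sketch of shipping the parsed records to Elasticsearch instead (this assumes the fluent-plugin-elasticsearch output plugin is installed; the host and prefix below are placeholders for your cluster):

<match nginxlogs>
  @type elasticsearch
  # host/port are assumptions; point them at your Elasticsearch service
  host elasticsearch-logging
  port 9200
  logstash_format true
  logstash_prefix nginx
</match>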
走好不送 · answered 2019-04-08 18:48

Pipelines are quite different in Logstash and Fluentd, and it took me some time to build a working Kubernetes -> Fluentd -> Elasticsearch -> Kibana setup.

The short answer to my question is to install the fluent-plugin-parser plugin (I wonder why it doesn't ship with the standard package) and put this rule after the kubernetes_metadata filter:

# parse the line stored in the "log" field; reserve_data keeps the existing fields (kubernetes metadata etc.)
<filter kubernetes.var.log.containers.nginx-ingress-controller-**.log>
  type parser
  format /^(?<host>[^ ]*) (?<domain>[^ ]*) \[(?<x_forwarded_for>[^\]]*)\] (?<server_port>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+[^\"])(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")? (?<request_length>[^ ]*) (?<request_time>[^ ]*) (?:\[(?<proxy_upstream_name>[^\]]*)\] )?(?<upstream_addr>[^ ]*) (?<upstream_response_length>[^ ]*) (?<upstream_response_time>[^ ]*) (?<upstream_status>[^ ]*)$/
  time_format %d/%b/%Y:%H:%M:%S %z
  key_name log
  types server_port:integer,code:integer,size:integer,request_length:integer,request_time:float,upstream_response_length:integer,upstream_response_time:float,upstream_status:integer
  reserve_data yes
</filter>
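
For completeness, the plugin has to be baked into the Fluentd image that the DaemonSet runs. A minimal sketch (the base image tag is an assumption; use whatever Fluentd image you already deploy):

# Dockerfile sketch; base image tag is an assumption
FROM fluent/fluentd:v0.12-debian
RUN gem install fluent-plugin-parser --no-document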

A longer answer with lots of examples is here: https://github.com/kayrus/elk-kubernetes/

女痞 · answered 2019-04-08 18:51

That's because you use the json format for parsing, so the nginx line stays a single unparsed string. Try this recipe: http://docs.fluentd.org/articles/recipe-nginx-to-elasticsearch

If you use a custom log format, you might need to write your own regex for in_tail: http://docs.fluentd.org/articles/in_tail
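
For the stock nginx access log format there is even a built-in parser, so a plain in_tail source can look roughly like this (the path and tag below are assumptions):

<source>
  @type tail
  # path and tag are assumptions; adjust to where nginx writes its access log
  path /var/log/nginx/access.log
  pos_file /var/log/fluentd-nginx-access.log.pos
  tag nginx.access
  # built-in parser for nginx's default log format;
  # for a custom format, write your own regexp: format /.../
  format nginx
</source>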

Ridiculous、 · answered 2019-04-08 19:00

You can use the multi-format-parser plugin (https://github.com/repeatedly/fluent-plugin-multi-format-parser), for example on the tail source from the question:

<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/es-containers.log.pos
  tag kubernetes.*
  read_from_head true
  # multi_format is a parser, so it belongs on the input (or in a parser filter), not in a <match>
  format multi_format
  <pattern>
    format json
  </pattern>
  <pattern>
    # put your nginx ingress regexp here
    format regexp...
    time_key timestamp
  </pattern>
  <pattern>
    format none
  </pattern>
</source>
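
On newer Fluentd (v0.14/v1.x) the same patterns go inside a <parse> section of the source instead; roughly (a sketch, reusing the paths from the question):

<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/es-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type multi_format
    <pattern>
      format json
    </pattern>
    <pattern>
      format none
    </pattern>
  </parse>
</source>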

Note: I'm curious what the final config looks like, including the parser filter.
