Logstash unable to filter geoip for nginx


(Anthony Cleaves) #1

I have asked this before in the Kibana forums, but the discussion died with no resolution, so I will start again.

I have just spun up a brand new 5.5 cluster:

  • x2 master nodes (elasticsearch)
  • x2 data nodes (elasticsearch)
  • x1 Logstash node
  • x1 kibana node

This setup uses Search Guard, so I loaded the template into the Elasticsearch master like so:

root@x:/etc/elasticsearch/x# curl -u x:x \
 -XPUT 'https://x:9200/_template/filebeat' -d@/etc/filebeat/filebeat.template.json \
 -E elastic-admin.pem \
 --key test.pem \
 --cacert ca-bundle.pem
{"acknowledged":true}

I did this before I had any index loaded into Kibana. The template is from a fresh install of Filebeat 5.5, so it's exactly what the software puts inside

/etc/filebeat/filebeat.template.json

I then added a geoip filter with

source => "clientip"

to my Logstash config:

filter {
        if [type] == "nginx.access" {
                grok {
                        match => { "message" => "%{HTTPD_COMBINEDLOG}" }
                }
                geoip {
                        source => "clientip"
                }
        }
        if [type] == "nginx.error" {
                grok {
                        match => { "message" => "%{HTTPD20_ERRORLOG}" }
                }
                geoip {
                        source => "clientip"
                }
        }
        if [type] == "nginx" {
                grok {
                        match => { "message" => "%{HTTPD_COMBINEDLOG}" }
                }
                geoip {
                        source => "clientip"
                }
        }
}
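For reference, when no `target` is set, the geoip filter writes its results into a top-level `geoip` field, so the indexed document ends up with a shape roughly like the sketch below (the IP address and coordinates are made up, and the exact representation of `location` can differ between filter versions). For Kibana's map visualisations to work, `geoip.location` must be mapped as `geo_point` in the index template.

```json
{
  "type": "nginx.access",
  "clientip": "203.0.113.10",
  "geoip": {
    "ip": "203.0.113.10",
    "country_name": "Example Country",
    "location": { "lat": 51.5, "lon": -0.1 }
  }
}
```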

I ensured the filebeat config was sane:

 - input_type: log
   paths:
     - /var/log/nginx/*.log
   document_type: nginx.access
   tags: ["nginx_logs"]
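For context, `document_type` in Filebeat 5.x sets the event's `type` field, which is what the `if [type] == "nginx.access"` conditionals in the Logstash filter above match on. A shipped event would look roughly like this (the log line is an invented sample):

```json
{
  "type": "nginx.access",
  "tags": ["nginx_logs"],
  "message": "203.0.113.10 - - [09/Aug/2017:15:51:42 +0000] \"GET / HTTP/1.1\" 200 612 \"-\" \"curl/7.47.0\""
}
```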

Now, when I go into Kibana to create a visualisation for the following logs, I get an error.

Here is a log

Here are the newly available fields in the left-hand panel; for some reason, geoip.location is always listed separately:

[Screenshot from 2017-08-09 15-51-42]

When I try to visualise this data, I see the following error:

No Compatible Fields: The "filebeat-*" index pattern does not contain any of the following field types: geo_point

Any help would be appreciated.


(Anthony Cleaves) #2

On the Elasticsearch nodes I am installing the ingest-geoip plugin; is this still required with version 5.5?


(Anthony Cleaves) #3

Bump


(Anthony Cleaves) #4

I got to the bottom of this in the end.

The filebeat.template.json file defines the following field for nginx:

"nginx.access.geoip.location"

This field was not being populated, so I changed my Filebeat config to:

    - input_type: log
      paths:
        - /var/log/nginx/access.log
      fields:
        log_type: nginx.access
      tags: ["nginx.access"]

And then modified my logstash config to:

filter {
        if [fields][log_type] == "nginx.access" {
                grok {
                        match => { "message" => "%{HTTPD_COMBINEDLOG}" }
                }
                geoip {
                        source => "clientip"
                        target => "[nginx][access][geoip]"
                }
        }
        if [fields][log_type] == "nginx.error" {
                grok {
                        match => { "message" => "%{HTTPD20_ERRORLOG}" }
                }
                geoip {
                        source => "clientip"
                }
        }
        if [fields][log_type] == "nginx" {
                grok {
                        match => { "message" => "%{HTTPD_COMBINEDLOG}" }
                }
                geoip {
                        source => "clientip"
                }
        }
}
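The key change is the `target` option on the access-log geoip filter: it nests the geoip data under `[nginx][access][geoip]`, so the indexed field becomes `nginx.access.geoip.location`, which is the path the stock filebeat.template.json maps as `geo_point`. The template snippet below is a sketch of the relevant mapping (the exact surrounding structure may differ slightly between Filebeat releases):

```json
{
  "mappings": {
    "_default_": {
      "properties": {
        "nginx": {
          "properties": {
            "access": {
              "properties": {
                "geoip": {
                  "properties": {
                    "location": { "type": "geo_point" }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
```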

And I now have a plot on my graph.


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.