Filebeat Nginx module

Hello,

I have a Filebeat instance that sends nginx logs to Logstash with this configuration:

###########################################
# FILEBEAT.YML
# Generated by Ansible
###########################################

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/syslog

filebeat.modules:
- module: nginx
  access:
    var.paths:
      - /var/log/nginx/*_access.log
  error:
    - /var/log/nginx/*_error.log

output.logstash:
  hosts:
    - [host]:5044

In my Logstash configuration, I have this:

input {
  beats {
    port => 5044
  }
}

filter {
  if [source] =~ "_access.log" {
    grok {
      match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
      remove_field => "message"
      tag_on_failure => ["nginx_access_log"]
    }
    mutate {
      rename => { "@timestamp" => "read_timestamp" }
    }
    date {
      match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
      remove_field => "[nginx][access][time]"
    }
    useragent {
      source => "[nginx][access][agent]"
      target => "[nginx][access][user_agent]"
      remove_field => "[nginx][access][agent]"
    }
    geoip {
      source => "[nginx][access][remote_ip]"
      target => "[nginx][access][geoip]"
    }
  }
  else if [source] =~ "_error.log" {
    grok {
      match => { "message" => ["%{DATA:[nginx][error][time]} \[%{DATA:[nginx][error][level]}\] %{NUMBER:[nginx][error][pid]}#%{NUMBER:[nginx][error][tid]}: (\*%{NUMBER:[nginx][error][connection_id]} )?%{GREEDYDATA:[nginx][error][message]}"] }
      remove_field => "message"
      tag_on_failure => ["nginx_error_log"]
    }
    mutate {
      rename => { "@timestamp" => "read_timestamp" }
    }
    date {
      match => [ "[nginx][error][time]", "YYYY/MM/dd H:m:s" ]
      remove_field => "[nginx][error][time]"
    }
    geoip {
      source => "[nginx][error][remote_ip]"
      target => "[nginx][error][geoip]"
    }
  }
  else {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      tag_on_failure => ["syslog"]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["[host]:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
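
As a side note, a pipeline like this can be syntax-checked before (re)loading it. A minimal sketch, assuming Logstash is installed under /usr/share/logstash and the pipeline is saved as /etc/logstash/conf.d/beats.conf (both paths are assumptions, not taken from the post):

# Check the pipeline syntax without starting Logstash
# (paths below are assumed; adjust to your installation)
/usr/share/logstash/bin/logstash --path.settings /etc/logstash \
  --config.test_and_exit -f /etc/logstash/conf.d/beats.conf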

Everything was working fine, except that I couldn't build a map visualization because my nginx.access.geoip.location field wasn't mapped as a geo_point, so I did this:

{
  "filebeat": {
    "order": 0,
    "template": "filebeat-*",
    "settings": {
      "index": {
        "number_of_shards": "2",
        "number_of_replicas": "1"
      }
    },
    "mappings": {
      "my_type": {
        "dynamic": "true",
        "properties": {
          "geoip": {
            "dynamic": true,
            "properties": {
              "location": {
                "type": "geo_point"
              }
            }
          }
        }
      }
    },
    "aliases": {}
  }
}

After that the field mapping was correct, but now nothing sent by Filebeat shows up at all!

Do you have any idea why?

No ideas?

Did you manually install the Filebeat index template?

The index template has a nginx.access.geoip field where location is a geo_point type.
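
If the goal is to use the stock Filebeat template instead of a hand-written one, it can be loaded manually. A minimal sketch, assuming Filebeat 6.x and that [host]:9200 stands in for the Elasticsearch host (needed here because the regular output is Logstash, not Elasticsearch):

# Load the bundled Filebeat index template directly into Elasticsearch.
# The -E overrides apply only to this one command; [host]:9200 is a placeholder.
filebeat setup --template \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["[host]:9200"]'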

I see you have a geoip filter declared for the error log, but there is no remote_ip field in the grok pattern for the error log. I think this can be removed.

Looks like you are missing var.paths in the error log config.
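
For comparison, a minimal sketch of the modules section with var.paths added to the error fileset, reusing the same globs as the original filebeat.yml:

filebeat.modules:
- module: nginx
  access:
    var.paths:
      - /var/log/nginx/*_access.log
  error:
    var.paths:
      - /var/log/nginx/*_error.log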
