Filebeat nginx module not working

Hi Team,

My log pipeline is as follows:

Filebeat 7.6.2 -> Logstash 7.6.2 -> Redis -> Elastic Cloud

I have enabled the Filebeat nginx module and configured the nginx log paths (access and error):

filebeat modules enable nginx
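
For reference, the module config in modules.d/nginx.yml looks roughly like this (the paths are the stock Debian locations, matching the ones I use; adjust if yours differ):

- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log"]
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log"]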

My Logstash configuration is as follows.

input {
  redis {
    data_type => "list"
    key => "filebeat"
    host => "pdsdod-elk-rdsjedis.asdsdsd.0001.euw1.cache.amazonaws.com"
    port => 6379
    db => 1
  }
}
filter {
  if [fileset][module] == "nginx" {
    if [fileset][name] == "access" {
      grok {
        match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
        remove_field => "message"
      }
      mutate {
        add_field => { "read_timestamp" => "%{@timestamp}" }
      }
      date {
        match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
        remove_field => "[nginx][access][time]"
      }
      useragent {
        source => "[nginx][access][agent]"
        target => "[nginx][access][user_agent]"
        remove_field => "[nginx][access][agent]"
      }
      geoip {
        source => "[nginx][access][remote_ip]"
        target => "[nginx][access][geoip]"
      }
    }
    else if [fileset][name] == "error" {
      grok {
        match => { "message" => ["%{DATA:[nginx][error][time]} \[%{DATA:[nginx][error][level]}\] %{NUMBER:[nginx][error][pid]}#%{NUMBER:[nginx][error][tid]}: (\*%{NUMBER:[nginx][error][connection_id]} )?%{GREEDYDATA:[nginx][error][message]}"] }
        remove_field => "message"
      }
      mutate {
        rename => { "@timestamp" => "read_timestamp" }
      }
      date {
        match => [ "[nginx][error][time]", "YYYY/MM/dd H:m:s" ]
        remove_field => "[nginx][error][time]"
      }
    }
  }
}
output {
    elasticsearch {
        hosts => ["https://93xxxxxxxxxxxxxxxxxxx263122.us-east-1.aws.found.io:9243/"]
        user => "elastic"
        password => "xxxxxxxxx"
        index => "filebeat-%{+YYYY.MM.dd}"
        manage_template => false
    }
}

So, as per this Logstash filter that parses the Nginx logs, the events need fields named [fileset] and [nginx], which are missing from the logs.

The entire log line is clumped into the message field, and the dashboards are just blank.
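
One thing I noticed while digging: in Filebeat 7.x the module metadata appears to live under [event][module] and [fileset][name] rather than the 6.x-era [fileset][module], so (assuming that also holds for 7.6.2) the conditional would need to look something like this sketch:

filter {
  # Filebeat 7.x labels module events with [event][module];
  # the [fileset][module] field tested above is the old 6.x layout.
  if [event][module] == "nginx" {
    if [fileset][name] == "access" {
      # ... access-log grok/date/useragent/geoip filters as above ...
    } else if [fileset][name] == "error" {
      # ... error-log grok/date filters as above ...
    }
  }
}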

Please shed some light on how to fix this.

Regards
Karthik.K

Hi Admins,

Can anyone please help me?

Regards
Karthik.K

Could you please share your Filebeat configuration (formatted using the </> button) and possibly some debug logs?
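
To capture debug logs, something along these lines in filebeat.yml should do it (the log file path below is just an example):

logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat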

Hi Noemi,

Here is the filebeat config.

#=========================== Filebeat inputs ==================================
filebeat.inputs:
- type: log
  processors:
  - if:
      equals:
        log.file.path: "/var/log/nginx/access.log"
    then: 
      - add_tags:
          tags: [nginx_access]
  - if:
      equals:
        log.file.path: "/var/log/nginx/error.log"
    then: 
      - add_tags:
          tags: [nginx_error]

  enabled: true
  paths:
    - /var/log/nginx/access.log
    - /var/log/nginx/error.log

#============================= Filebeat modules ===============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml   # Glob pattern for configuration loading
  reload.enabled: true                   # Set to true to enable config reloading
  reload.period: 10s                     # Period on which files under path should be checked for changes
#==================== Elasticsearch template setting ==========================
setup.template:
  enabled: true
  overwrite: false
  name: "filebeat"
  pattern: "filebeat-*"
  fields: "/etc/filebeat/fields.yml"
setup.template.settings:
  index.number_of_shards: 1
#================================ General =====================================
tags: ["mni-prod"]
#============================== Dashboards =====================================
setup.dashboards.enabled: true
setup.dashboards.index: "filebeat-*"
#============================== Kibana =========================================
setup.ilm.enabled: false
setup.kibana:
  space.id: "xxxxx-com"
  host: "https://5719exxxxxxxxxxx778bd8.us-east-1.aws.found.io:9243"
  username: "elastic"
  password: "xxxxxxxxxxxx"
#================================ Outputs =====================================
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  enabled: false
  index: "filebeat-%{+yyyy-MM-dd}"
  hosts: ["https://93xxxxxxxxxxxxxxxxx2.us-east-1.aws.found.io:9243/"]
  username: "elastic"
  password: "xxxxxxxxxx"
#--------------------------Redis-------------------------------------------------
output.redis:
  enabled: true
  hosts: ["piop-elk-re.0001.euw1.cache.amazonaws.com:6379"]
  key: filebeat
  db: 0
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
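
As a sanity check at this stage, the Redis list can be inspected to confirm that events (with the nginx_access/nginx_error tags) are actually arriving, e.g.:

redis-cli -h piop-elk-re.0001.euw1.cache.amazonaws.com -n 0 LLEN filebeat
redis-cli -h piop-elk-re.0001.euw1.cache.amazonaws.com -n 0 LRANGE filebeat 0 0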

Here is my Logstash config to parse the nginx logs:

input {
  redis {
    id => "filebeat_in_pipe"
    data_type => "list"
    key => "filebeat"
    host => "pxxxxxxxxxxxxxxxe.amazonaws.com"
    port => 6379
    db => 0
  }
}
filter {
  if "nginx_access" in [tags] {
    grok {
      match => { "message" => "%{IPORHOST:clientip} (?:-|(%{WORD}.%{WORD})) %{USER:ident} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:useragent} %{QS:real_ip}" }
      remove_field => "message"
    }
    mutate {
      add_field => { "read_timestamp" => "%{@timestamp}" }
      remove_tag => [ "" ]
    }
    date {
      match => [ "timestamp", "dd/MMM/YYYY:H:m:s Z" ]
      remove_field => "timestamp"
    }
    useragent {
      source => "useragent"
      target => "[nginx][access][user_agent]"
      remove_field => ["useragent", "[browser][build]", "[browser][major]", "[browser][minor]", "[browser][os_minor]", "[browser][patch]", "[browser][os]"]
    }
    geoip {
      #database => "/usr/share/logstash/enhancers/maxmind/"
      source => "real_ip"
      target => "[nginx][access][geoip]"
    }
  }
  else if "nginx_error" in [tags] {
    grok {
      match => { "message" => ["%{DATA:[nginx][error][time]} \[%{DATA:[nginx][error][level]}\] %{NUMBER:[nginx][error][pid]}#%{NUMBER:[nginx][error][tid]}: (\*%{NUMBER:[nginx][error][connection_id]} )?%{GREEDYDATA:[nginx][error][message]}"] }
      remove_field => "message"
    }
    mutate {
      rename => { "@timestamp" => "read_timestamp" }
    }
    date {
      match => [ "[nginx][error][time]", "YYYY/MM/dd H:m:s" ]
      remove_field => "[nginx][error][time]"
    }
  }
}
output {
  redis {
    id => "filebeat_out_pipe"
    data_type => "list"
    key => "filebeat"
    host => "pxxxxxxxxxxxxxxxxxx1.euw1.cache.amazonaws.com"
    port => 6379
    db => 1
  }

}
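
(To rule out the grok pattern itself, a throwaway pipeline like the sketch below can be run with bin/logstash -f test.conf — paste a sample access-log line on stdin and inspect the parsed event. The pattern here is a stub; substitute the full one from the nginx_access branch above.)

input { stdin {} }
filter {
  grok {
    # stub pattern -- replace with the full access-log pattern above
    match => { "message" => "%{IPORHOST:clientip} %{GREEDYDATA:rest}" }
  }
}
output { stdout { codec => rubydebug } }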

I have another config which writes from redis to elastic

input {
  redis {
    id => "filebeat_in_pipe"
    data_type => "list"
    key => "filebeat"
    host => "pxxxxxxxxxxxxxxx1.cache.amazonaws.com"
    port => 6379
    db => 1
  }
}
output {
  elasticsearch {
    hosts => ["https://9xxxxxxxxxxxxxxxxxxxxxxxxxxx.us-east-1.aws.found.io:9243/"]
    user => "elastic"
    password => "xxxxxxxxxxxxxx"
    index => "filebeat-%{+YYYY.MM.dd}"
    manage_template => false
  }
  stdout {}
}

Now logs are being piped to Elasticsearch, but the dashboard is showing nothing (no errors and no data when accessing it).
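
A quick count against the index at least shows whether documents are arriving at all (cluster URL and password redacted as above):

curl -s -u elastic:xxxxxxxxxxxxxx "https://9xxxxxxxxxxxxxxxxxxxxxxxxxxx.us-east-1.aws.found.io:9243/filebeat-*/_count?pretty"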

Admins, any idea what went wrong here?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.