NGINX Logs not coming through while Syslogs are

Just wanting some clarification on what could be causing this issue.
I have syslogs coming through just fine, but when it comes to the Nginx logs, nothing is coming through to Elasticsearch.
I believe it might have to do with the order of my pipelines, but I'm not entirely sure.
Elasticsearch, Kibana, and Logstash are v6.3.1
Filebeat is v1.2.3
My current syslogs are being parsed correctly through 10-syslog-filter.conf, but nothing Nginx-related is coming through 11-nginx-filter.conf. Granted, I don't have the filter section set, but I'm not even seeing the index appear in Elasticsearch.

Filebeat.yml

filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
        - /opt/rails/farad/current/log/*.log
      #  - /var/log/*.log
      document_type: syslog

    -
      paths:
        - /var/log/nginx/access.log
      fields:
        nginx: true
      fields_under_root: true
      document_type: nginx
      input_type: log
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["hostname:5044"]
    bulk_max_size: 1024

    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB

10-syslog-filter.conf

filter {
  if [type] == "syslog" {
    mutate {
      gsub => ["message", ".auth_user_code.=>.\d+.", "auth_user_code=XXXX"]
    }
    grok {
      match => { "message" => "%{SYSLOG5424SD:Time}%{SYSLOG5424SD:Application}%{SYSLOG5424SD:$
      remove_field => [ "RemoveMe1", "RemoveMe2", "RemoveMe3" ]
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

11-nginx-filter.conf

input {
  beats {
    port => 5044
    host => ["hostname:5044"]
  }
}

output {
  elasticsearch {
    hosts => localhost
    manage_template => false
    index => "NGINX-%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
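For reference, a sketch of what a working filter and output might look like here. Assumptions: the access log uses nginx's default "combined" format, which the stock `COMBINEDAPACHELOG` grok pattern matches, and Elasticsearch is on `localhost:9200`. Two things worth noting about the config above: Elasticsearch index names must be lowercase, so an index pattern starting with `NGINX-` will be rejected, and the beats input's `host` option is a bind address, not a `host:port` list.

```
filter {
  if [type] == "nginx" {
    grok {
      # stock pattern matching nginx's default "combined" access-log format
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    # index names must be lowercase in Elasticsearch
    index => "nginx-%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```

Also note that without an `if [type] == ...` conditional around the elasticsearch output, every event — syslog and nginx alike — is written to the same index.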

I would expect the layout of the filebeat.yml to look like more like this

filebeat:
  prospectors:
    - type: log
      paths:
        - /first/path

    - type: log
      paths:
        - /second/path

But this might be a version thing.
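For what it's worth, in Filebeat 5.x the equivalent layout uses `input_type` per prospector (and still supports `document_type`) — a sketch, not a drop-in config:

```
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/auth.log
      - /var/log/syslog
    document_type: syslog

  - input_type: log
    paths:
      - /var/log/nginx/access.log
    document_type: nginx
```

The key point in either version is that each prospector is its own list item; a second `paths:` key without a leading `-` merges into (and overrides part of) the previous prospector.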

Still having the same issue.
I ran the `./filebeat -e -c filebeat.yml -d "publish"` command and saw my nginx access logs being mentioned, but they never appear in Kibana/Elasticsearch. I even edited the 10-syslog-filter.conf file so that it would also look for the type nginx and do a match, but it's still not doing anything. To make things a little different, I did update to a newer version of Filebeat: no longer using 1.2.3 on this machine, but 5.4.3.

Whoops, found out that the NGINX logs were going to a different index.
Both syslogs and nginx logs are present on that index. I'm a little confused, but at least I was able to find where they were going.
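For anyone else hunting down where their events ended up, listing the indices directly makes it obvious which ones exist and how many documents each holds (assuming Elasticsearch is reachable on localhost:9200):

```
# list all indices with document counts and sizes
curl 'localhost:9200/_cat/indices?v'
```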

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.