Hi,
I'm currently testing the Filebeat 7.3.1 NGINX module for processing access logs. The logs are inserted into Elasticsearch, but they are not processed by the ingest pipeline: fields like source.ip, url.original, and http.request.method are missing.
I have already verified that the two expected pipelines were created in Elasticsearch:
- filebeat-7.3.1-nginx-access-default
- filebeat-7.3.1-nginx-error-pipeline
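For completeness, this is roughly how I listed them (standard ingest pipeline API; the wildcard is just for convenience):

```
GET _ingest/pipeline/filebeat-7.3.1-nginx-*
```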
Even when I simulate the source document against the access pipeline, everything seems to work fine:

```
POST _ingest/pipeline/filebeat-7.3.1-nginx-access-default/_simulate
{
  "docs" : [
    {
      "_source": {
        ....
        "message": "52.57.62.109 - logstash_writer [05/Sep/2019:10:42:35 +0200] \"POST /_bulk HTTP/1.1\" 200 222 \"-\" \"Manticore 0.6.4\"",
        "@timestamp": "2019-09-05T08:42:36.689Z"
        ....
```
This results in something like:
```
...
"request" : {
  "method" : "POST",
  "referrer" : "-"
},
"version" : "1.1",
"response" : {
  "body" : {
    "bytes" : 222
  }
...
```
But the document that actually ends up in Elasticsearch is missing these fields.
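To confirm it's not just a single bad document, I counted how many events actually carry one of the parsed ECS fields with an exists query (assuming the default filebeat-7.3.1-* index pattern):

```
GET filebeat-7.3.1-*/_search
{
  "size": 0,
  "query": {
    "exists": { "field": "http.request.method" }
  }
}
```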
I've disabled the filebeat.inputs section in filebeat.yml and enabled the module with the following configuration:
```
- module: nginx
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/nginx/*-access.log"]
```
The index is based on the filebeat-7.3.1 index template, which has all the required fields defined.
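For reference, I inspected the template via the legacy template API (filebeat-7.3.1 is the default name the Beat loads):

```
GET _template/filebeat-7.3.1
```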
The original log line looks like this:
52.57.62.109 - logstash_writer [05/Sep/2019:10:42:35 +0200] "POST /_bulk HTTP/1.1" 200 222 "-" "Manticore 0.6.4"
It's probably worth mentioning that I'm sending events via Logstash, so I set up the index templates and pipelines manually by pointing the Beat directly at Elasticsearch once.
Am I missing something here?
Thanks!
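For context, a minimal Logstash elasticsearch output for Filebeat events looks something like this (hosts and index here are placeholders, not my exact config; the pipeline option forwards the pipeline name that Filebeat puts into @metadata):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    pipeline => "%{[@metadata][pipeline]}"
  }
}
```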