Nginx module - WARN Can not index event (status=400)

Hi, I am trying to use the NGINX module. Filebeat, Elasticsearch and Kibana are all running version 6.1.0.
According to the extract from the Filebeat logs below, I am experiencing an indexing issue due to a parsing exception ... Any help on where to investigate would be highly appreciated. Thanks.

2018-03-13T09:58:52+01:00 DBG [publish] Publish event: {
"@timestamp": "2018-03-13T08:58:52.197Z",
"@metadata": {
"beat": "filebeat",
"type": "doc",
"version": "6.1.0",
"pipeline": "filebeat-6.1.0-nginx-access-default"
},
"source": "/opt/application/nginxstatic/logs/access.log",
"offset": 1199579,
"message": "127.0.0.1 - - - [13/Mar/2018:09:58:51 +0100] "GET /nginx_status?consul HTTP/1.1" 301 178 "-" - "0.000 msec" ",
"fileset": {
"name": "access",
"module": "nginx"
},
"prospector": {
"type": "log"
},
"beat": {
"name": "192.168.2.13",
"hostname": "i-0013d126-rp-static-server-15102233541.novalocal",
"version": "6.1.0"
}
}
2018-03-13T09:58:52+01:00 DBG [harvester] End of file reached: /opt/application/nginxstatic/logs/access.log; Backoff now.
2018-03-13T09:58:52+01:00 DBG [elasticsearch] PublishEvents: 2 events have been published to elasticsearch in 5.590765ms.
2018-03-13T09:58:52+01:00 WARN Can not index event (status=400): {"type":"mapper_parsing_exception","reason":"Failed to parse mapping [doc]: Mapping definition for [error] has unsupported parameters: [properties : {code={type=long}, message={norms=false, type=text}, type={ignore_above=1024, type=keyword}}]","caused_by":{"type":"mapper_parsing_exception","reason":"Mapping definition for [error] has unsupported parameters: [properties : {code={type=long}, message={norms=false, type=text}, type={ignore_above=1024, type=keyword}}]"}}
2018-03-13T09:58:52+01:00 WARN Can not index event (status=400): {"type":"mapper_parsing_exception","reason":"Failed to parse mapping [doc]: Mapping definition for [error] has unsupported parameters: [properties : {code={type=long}, message={norms=false, type=text}, type={ignore_above=1024, type=keyword}}]","caused_by":{"type":"mapper_parsing_exception","reason":"Mapping definition for [error] has unsupported parameters: [properties : {code={type=long}, message={norms=false, type=text}, type={ignore_above=1024, type=keyword}}]"}}
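(For context: this mapper_parsing_exception means the existing filebeat-* index already maps error as a plain text field, so Elasticsearch refuses a mapping where error is an object with code/message/type sub-fields. One way to check which template and mapping are actually loaded is a quick curl — a sketch, assuming the Elasticsearch host from the config further down; the index name is assumed from the event timestamp:)

# Sketch: inspect the loaded Filebeat template(s) and the mapping of the
# current daily index; host and index name are assumptions from this thread.
curl -s 'http://elasticsearch.service.webcom:9200/_template/filebeat*?pretty'
curl -s 'http://elasticsearch.service.webcom:9200/filebeat-2018.03.13/_mapping?pretty'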

After downgrading Filebeat to 5.6.0, I do get the event in Kibana (loved it), but with this error message:
error: Provided Grok expressions do not match field value: [127.0.0.1 - - - [13/Mar/2018:11:38:54 +0100] "GET /nginx_status?consul HTTP/1.1" 301 178 "-" - "0.000 msec" ]
Could the root cause be my nginx version (1.6.2)? Elastic warns that the module has only been tested from nginx 1.10 onwards ... ?
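One way to reproduce the grok failure outside Filebeat is to replay the raw line through the module's ingest pipeline with the Simulate Pipeline API — a minimal sketch, assuming the pipeline id from the debug output above and the same Elasticsearch host:

# Sketch: replay one raw access-log line through the module's pipeline.
curl -s -H 'Content-Type: application/json' \
  'http://elasticsearch.service.webcom:9200/_ingest/pipeline/filebeat-6.1.0-nginx-access-default/_simulate?pretty' \
  -d '{"docs":[{"_source":{"message":"127.0.0.1 - - - [13/Mar/2018:09:58:51 +0100] \"GET /nginx_status?consul HTTP/1.1\" 301 178 \"-\" - \"0.000 msec\" "}}]}'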

Hi @ORich,

Can you share your Filebeat settings? I'm wondering if you are using a custom index pattern.

Hello exekias, thanks for your swift reply. I don't think I use a custom index pattern, but you can check the updated content of my filebeat.yml below. For your information, I first upgraded my Elasticsearch & Kibana nodes from 5.5.0 to 6.1.0, applying the Kibana index migration procedure detailed in https://www.elastic.co/guide/en/kibana/6.1/migrating-6.0-index.html
#-------------------------------- Nginx Module -------------------------------
- module: nginx
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/opt/application/nginxstatic/logs/access.log*"]

    # Prospector configuration (advanced). Any prospector configuration option
    # can be added under this section.
    #prospector:

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/opt/application/nginxstatic/logs/error.log*"]

    # Prospector configuration (advanced). Any prospector configuration option
    # can be added under this section.
    #prospector:

#========================= Filebeat global options ============================

# Name of the registry file. If a relative path is used, it is considered
# relative to the data path.
filebeat.registry_file: "/opt/application/filebeat/data/registry"

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to
# group all the transactions sent by a single shipper in the web interface.
# If this option is not defined, the hostname is used.
name: "192.168.2.13"

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Boolean flag to enable or disable the output module.
  enabled: true

  # Array of hosts to connect to.
  # Scheme and port can be left out and will be set to the default (http and 9200)
  # In case you specify an additional path, the scheme is required: http://localhost:9200/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
  hosts: ["elasticsearch.service.webcom:9200"]

  # Set gzip compression level.
  #compression_level: 0

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

  # Dictionary of HTTP parameters to pass within the url with index operations.
  #parameters:
    #param1: value1
    #param2: value2

  # Number of workers per Elasticsearch host.
  #worker: 1

  # Optional index name. The default is "filebeat" plus date
  # and generates [filebeat-]YYYY.MM.DD keys.
  # In case you modify this pattern you must update setup.template.name and
  # setup.template.pattern accordingly.
  #index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"

  # Optional ingest node pipeline. By default no pipeline will be used.
  #pipeline: ""
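Since I keep the default index settings, everything should go to the daily indices. For reference, a 6.1.0 Filebeat with defaults writes to filebeat-6.1.0-YYYY.MM.DD, while plain filebeat-YYYY.MM.DD names are the 5.x default; listing the indices makes it visible which naming scheme created them (a sketch, host taken from the config above):

# Sketch: list all Filebeat indices and their naming scheme.
curl -s 'http://elasticsearch.service.webcom:9200/_cat/indices/filebeat*?v'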

After upgrading nginx to 1.12.2, I am still facing the error: Provided Grok expressions do not match field value ... This is really disappointing, since my setup is fully based on the packaged NGINX module provided by Elastic. The nginx log lines look well formatted. No idea about the root cause.
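(For what it's worth, the trailing "0.000 msec" and the extra "-" in these lines do not appear in nginx's stock combined format, which is what the module's default grok patterns expect, so a customized log_format in the nginx configuration would explain the failure regardless of the nginx version. A quick way to check — the conf path is a guess based on the log paths in this thread:)

# Sketch: find the log_format / access_log directives actually in use;
# the configuration path is an assumption.
grep -rnE 'log_format|access_log' /opt/application/nginx*/conf/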
Here is the JSON format event from Kibana :

{
"_index": "filebeat-2018.03.14",
"_type": "doc",
"_id": "0gL-JGIBZHTmEduVuWTG",
"_version": 1,
"_score": null,
"_source": {
"@timestamp": "2018-03-14T14:51:31.433Z",
"offset": 271,
"beat": {
"hostname": "i-00140ad7-rp-ws-server-15111690381.novalocal",
"name": "192.168.2.20",
"version": "5.6.0"
},
"input_type": "log",
"source": "/opt/application/nginxws/logs/ws.access.log",
"fileset": {
"module": "nginx",
"name": "access"
},
"message": "192.168.2.4 192.168.2.84:8000 - - [14/Mar/2018:15:51:22 +0100] "GET /_wss/.ws?v=5&ns=accounts HTTP/1.1" 101 1547 "-" - "100.537 msec" ",
"type": "log",
"error": "Provided Grok expressions do not match field value: [192.168.2.4 192.168.2.84:8000 - - [14/Mar/2018:15:51:22 +0100] \"GET /_wss/.ws?v=5&ns=accounts HTTP/1.1\" 101 1547 \"-\" - \"100.537 msec\" ]"
},
"fields": {
"@timestamp": [
"2018-03-14T14:51:31.433Z"
]
},
"highlight": {
"beat.name": [
"@kibana-highlighted-field@192.168.2.20@/kibana-highlighted-field@"
]
},
"sort": [
1521039091433
]
}

I performed a full reinstall of Filebeat, Kibana and Elasticsearch from scratch with version 6.1.0.
I still have the indexing error in the Filebeat log file. On the Elasticsearch side, the log shows it is trying to execute a Filebeat pipeline tagged as 5.6.0!

[2018-03-16T11:15:56,861][DEBUG][o.e.a.b.TransportBulkAction] [i-0016e5a0-elk-server-15210429051-MyInstanceES] failed to execute pipeline [filebeat-5.6.0-nginx-access-default] for document [filebeat-2018.03.16/doc/null]
java.lang.IllegalArgumentException: pipeline with id [filebeat-5.6.0-nginx-access-default] does not exist
at org.elasticsearch.ingest.PipelineExecutionService.getPipeline(PipelineExecutionService.java:194) ~[elasticsearch-6.1.0.jar:6.1.0]
at org.elasticsearch.ingest.PipelineExecutionService.access$100(PipelineExecutionService.java:42) ~[elasticsearch-6.1.0.jar:6.1.0]
at org.elasticsearch.ingest.PipelineExecutionService$2.doRun(PipelineExecutionService.java:94) [elasticsearch-6.1.0.jar:6.1.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:637) [elasticsearch-6.1.0.jar:6.1.0]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.1.0.jar:6.1.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]
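Since the pipeline id embeds the version of the Beat that published the event, a request for filebeat-5.6.0-nginx-access-default after a clean 6.1.0 reinstall suggests an old 5.6.0 Filebeat instance is still running and shipping from somewhere. A couple of sanity checks (the binary path is an assumption based on the registry path above):

# Sketch: look for leftover Filebeat processes and confirm the binary version;
# the install path is an assumption.
ps aux | grep '[f]ilebeat'
/opt/application/filebeat/filebeat version

# And verify the 6.1.0 nginx access pipeline was actually loaded:
curl -s 'http://elasticsearch.service.webcom:9200/_ingest/pipeline/filebeat-6.1.0-nginx-access-default?pretty'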
