Error key populated even when explicitly disabled

Hello,

Using filebeat 6.6

I have a "error" field used in my template, it has to come from the log file.
so, in the configuration file, I disabled the error key with the following parameter:
json.add_error_key: false

But I still see a parsing error on the processed messages:

2019-03-15T20:01:25.362Z	WARN	elasticsearch/client.go:523	Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xbf1b1f04fa851850, ext:10376128117, loc:(*time.Location)(0x21bf500)}, Meta:common.MapStr(nil), Fields:common.MapStr{"source":"/var/log/app-traces/trace-sfused-2019-03-15_20h00-31737.log", "start_time":1.552680080228593e+12, "beat":common.MapStr{"name":"s3-ssl-conn-0.localdomain", "hostname":"s3-ssl-conn-0.localdomain", "version":"6.6.2"}, "offset":760102, "num":"10206284213025646938", "error":common.MapStr{"type":"json", "message":"@timestamp not overwritten (parse error on 2019-03-15T20:01:20.228593+0000)"}, "pid":31737, "_host":"s3-ssl-conn-0.localdomain", "document_type":"doc", "trace_type":"ann_int", "instance":"unconfigured", "service":"sfused", "host":common.MapStr{"name":"s3-ssl-conn-0.localdomain"}, "span_id":3683721572698185, "trace_id":3452903457391402, "parent_span_id":905145127718955, "label":"ino", "log":common.MapStr{"file":common.MapStr{"path":"/var/log/app-traces/trace-sfused-2019-03-15_20h00-31737.log"}}}, Private:file.State{Id:"", Finished:false, Fileinfo:(*os.fileStat)(0xc420322750), Source:"/var/log/app-traces/trace-sfused-2019-03-15_20h00-31737.log", Offset:760428, Timestamp:time.Time{wall:0xbf1b1f04e6068db1, ext:10032290801, loc:(*time.Location)(0x21bf500)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x1461cba, Device:0xfd01}}}, Flags:0x1} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [error] of type [boolean]","caused_by":{"type":"i_o_exception","reason":"Current token (START_OBJECT) not of boolean type\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@17ab7e25; line: 1, column: 569]"}}
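
Pulling the conflicting pieces out of that log line and out of my template (below), the rejection seems to come from Filebeat still attaching its JSON parse error as an object under "error", while the mapping declares "error" as a boolean:

What Filebeat added to the event (despite json.add_error_key: false):
"error": {"type": "json", "message": "@timestamp not overwritten (parse error on 2019-03-15T20:01:20.228593+0000)"}

What the template expects for the same field:
"error": {"type": "boolean"}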

Did I misunderstand the parameter?

Here is my current configuration:
filebeat.yml

filebeat.prospectors:
- fields.document_type: doc
  fields_under_root: true
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: false
  input_type: log
  paths:
  - /var/log/app-traces/trace-*.log
  processors:
    - rename:
        fields:
          - from: host
            to: _host
output.elasticsearch:
  hosts:
  - 10.200.3.221
  - 10.200.3.220
  - 10.200.3.187
  - 10.200.1.76
  - 10.200.1.251
  - 10.200.1.89
  index: app-traces-%{+yyyy.MM.dd}
setup.template.enabled: true
setup.template.json.enabled: true
setup.template.json.name: app-traces
setup.template.json.path: /usr/share/app-tracer-tools/traces_mapping_template.json
setup.template.name: app-traces
setup.template.pattern: app-traces*
setup.template.fields: /etc/filebeat/fields.yml
processors:
- rename:
    fields:
      - from: _host
        to: host
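
(In case it is useful: the configuration can be sanity-checked with Filebeat's built-in test commands; the path below assumes the default install location.)

filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml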

Template used:

{
    "mappings": {
        "doc": {
            "properties": {
                "layer": {
                    "type": "keyword"
                }, 
                "ip_addr": {
                    "type": "ip"
                }, 
                "string": {
                    "type": "text"
                }, 
                "service": {
                    "type": "keyword"
                }, 
                "@timestamp": {
                    "type": "date"
                }, 
                "parent_span_id": {
                    "index": "false", 
                    "type": "long"
                }, 
                "trace_type": {
                    "type": "keyword"
                }, 
                "trace_id": {
                    "type": "long"
                }, 
                "label": {
                    "type": "keyword"
                }, 
                "ip_port": {
                    "type": "long"
                }, 
                "instance": {
                    "type": "keyword"
                }, 
                "host": {
                    "type": "keyword"
                }, 
                "num": {
                    "type": "keyword"
                }, 
                "end_time": {
                    "type": "double"
                }, 
                "key": {
                    "type": "keyword"
                }, 
                "error": {
                    "type": "boolean"
                }, 
                "cancelled": {
                    "type": "boolean"
                }, 
                "path": {
                    "type": "text"
                }, 
                "span_id": {
                    "index": "false", 
                    "type": "long"
                }, 
                "start_time": {
                    "type": "double"
                }, 
                "op": {
                    "type": "keyword"
                }
            }
        }
    }, 
    "template": "scality-traces-*", 
    "settings": {
        "index.refresh_interval": "30s"
    }
}
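
(For reference, the same template file can also be pushed to Elasticsearch by hand, bypassing the setup.template.* options; the host and file path below are the ones from my configuration, and if I remember correctly 6.x expects "index_patterns" rather than the older "template" key in the file.)

curl -XPUT -H 'Content-Type: application/json' \
  'http://10.200.3.221:9200/_template/app-traces' \
  -d @/usr/share/app-tracer-tools/traces_mapping_template.json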

The json.* options in filebeat.yml are meant for use if your source log messages are structured as JSON. Is this the case with the log files under /var/log/app-traces/trace-*.log?

setup.template.json.enabled: true
setup.template.json.name: app-traces
setup.template.json.path: /usr/share/app-tracer-tools/traces_mapping_template.json

What are these settings? AFAIK, there are no setup.template.json.* settings. Perhaps you meant to use setup.template.* (i.e. no json in the settings names) instead? Note that there is no setup.template.path setting either.

Finally, instead of specifying your template in the Elasticsearch template JSON format, you will need to specify it in the Filebeat fields.yml format. Look at the default fields.yml provided by Filebeat in /etc/filebeat/fields.yml, and create a new fields.yml using the same structure for your custom fields. Then use the setup.template.fields setting to point to your custom fields.yml file. Filebeat will use it to generate the Elasticsearch template and load it into Elasticsearch for you once you start up Filebeat.
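
To give you an idea, a minimal fields.yml for a few of your custom fields might look something like the sketch below (the key, title and description values here are just placeholders; adjust the field list to match your template):

- key: app-traces
  title: app-traces
  description: Custom trace fields.
  fields:
    - name: service
      type: keyword
    - name: error
      type: boolean
    - name: start_time
      type: double
    - name: trace_id
      type: long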

If you need more help around setting up a custom template, see https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-template.html.

Hello Shaunak,

Yes, the source log file contains JSON lines, like:

{"host":"s3-ssl-conn-0.localdomain","service":"sfused","instance":"unconfigured","pid":19183,"trace_type":"op","trace_id":4941674888193761,"span_id":2558621125371266,"parent_span_id":5120726858687264,"@timestamp":"2019-03-18T19:22:49.256357+0000","start_time":1552936969256.357,"end_time":1552936969258.446,"duration_ms":2.089111,"op":"service","layer":"workers_chord","error":false,"cancelled":false,"tid":19240}

Concerning the setup.template.json.* settings, they are well documented in the current version (6.6); see the bottom of
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-template.html

I don't see the point of rewriting the JSON template as a YAML fields file,
especially since the fields.yml file is used to generate an Elasticsearch template (in JSON) anyway.
Would it really change anything?

Gah, my mistake! I was looking at the other document on template setup, which doesn't mention the setup.template.json.* fields.

I don't see the point of rewriting the JSON template as a YAML fields file,
especially since the fields.yml file is used to generate an Elasticsearch template (in JSON) anyway.
Would it really change anything?

Agreed. You should be able to just continue using your JSON template.

What do you get if you call the following Elasticsearch API?

GET _template/*app-traces*

Note that things may have changed since my last post, as I've been trying a lot of things to solve another issue.

Currently it returns this:

# curl -XGET -s 10.200.3.221:9200/_template/app-traces*?pretty
{
  "app-traces" : {
    "order" : 0,
    "index_patterns" : [
      "app-traces-*"
    ],
    "settings" : {
      "index" : {
        "refresh_interval" : "30s"
      }
    },
    "mappings" : {
      "doc" : {
        "properties" : {
          "trace_id" : {
            "type" : "long"
          },
          "end_time" : {
            "type" : "double"
          },
          "start_time" : {
            "type" : "double"
          },
          "trtimestamp" : {
            "type" : "date"
          },
          "string" : {
            "type" : "text"
          },
          "host" : {
            "properties" : {
              "host" : {
                "type" : "keyword"
              }
            }
          },
          "num" : {
            "type" : "keyword"
          },
          "key" : {
            "type" : "keyword"
          },
          "span_id" : {
            "index" : "false",
            "type" : "long"
          },
          "op" : {
            "type" : "keyword"
          },
          "label" : {
            "type" : "keyword"
          },
          "instance" : {
            "type" : "keyword"
          },
          "service" : {
            "type" : "keyword"
          },
          "parent_span_id" : {
            "type" : "long",
            "index" : "false"
          },
          "ip_port" : {
            "type" : "long"
          },
          "layer" : {
            "type" : "keyword"
          },
          "ip_addr" : {
            "type" : "ip"
          },
          "error" : {
            "type" : "boolean"
          },
          "cancelled" : {
            "type" : "boolean"
          },
          "path" : {
            "type" : "text"
          },
          "duration_ms" : {
            "type" : "long"
          },
          "@timestamp" : {
            "type" : "date"
          },
          "trace_type" : {
            "type" : "keyword"
          }
        }
      }
    },
    "aliases" : { }
  }
}

Hi, I've been able to reproduce your issue. It looks like you've indeed found a bug about the json.add_error_key setting not being honored. I filed the bug on your behalf over here: https://github.com/elastic/beats/issues/11298. Thanks for finding it!
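
Until that bug is fixed, one possible interim workaround (untested, just a sketch) would be to drop the error object that the JSON reader adds, using a conditional drop_fields entry merged into the processors section you already have; the condition only matches when "error" is the parse-error object rather than your boolean field:

processors:
  - drop_fields:
      when:
        has_fields: ['error.message']
      fields: ['error']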

It's really nice that you reproduced it and opened an issue, thank you very much.

Now I only have to solve the real error :wink:
