Hello,
Filebeat does not recognize the date format I have in my logs (I have opened an issue on GitHub about this):
"2019-03-16T12:15:58.420454+0000"
So I tried to specify the format with a template.json file:
{
  "mappings": {
    "doc": {
      "properties": {
        "@timestamp": { "type": "date", "format": "yyyy-MM-dd'T'HH:mm:ss.SSSSSSZ" },
        "_@timestamp": { "type": "date", "format": "yyyy-MM-dd'T'HH:mm:ss.SSSSSSZ" },
        "layer": { "type": "keyword" },
        "ip_addr": { "type": "ip" },
        "string": { "type": "text" },
        "service": { "type": "keyword" },
        "parent_span_id": { "index": "false", "type": "long" },
        "trace_type": { "type": "keyword" },
        "trace_id": { "type": "long" },
        "label": { "type": "keyword" },
        "ip_port": { "type": "long" },
        "instance": { "type": "keyword" },
        "host": {
          "properties": {
            "host": { "ignore_above": 1024, "type": "keyword" }
          }
        },
        "num": { "type": "keyword" },
        "end_time": { "type": "double" },
        "key": { "type": "keyword" },
        "error": { "type": "boolean" },
        "cancelled": { "type": "boolean" },
        "path": { "type": "text" },
        "span_id": { "index": "false", "type": "long" },
        "start_time": { "type": "double" },
        "op": { "type": "keyword" },
        "duration_ms": { "type": "long" }
      }
    }
  },
  "template": "app-traces-*",
  "settings": { "index.refresh_interval": "30s" }
}
but this template has no effect: I still get the following error while the log is being parsed:
2019-03-16T12:16:02.979Z ERROR jsontransform/jsonhelper.go:53 JSON: Won't overwrite @timestamp because of parsing error: parsing time "2019-03-16T12:15:58.420454+0000" as "2006-01-02T15:04:05Z07:00": cannot parse "+0000" as "Z07:00"
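If it helps, the failure is easy to reproduce outside Filebeat with a small standalone Go sketch (my own test program, not Filebeat code): the RFC3339 layout mentioned in the error requires "Z" or a colon in the zone offset, while a colon-less offset layout accepts the same timestamp.

package main

import (
	"fmt"
	"time"
)

func main() {
	ts := "2019-03-16T12:15:58.420454+0000"

	// The layout from the error message (time.RFC3339, "2006-01-02T15:04:05Z07:00")
	// expects "Z" or "+00:00", so the colon-less "+0000" offset cannot be matched.
	if _, err := time.Parse(time.RFC3339, ts); err != nil {
		fmt.Println("RFC3339:", err)
	}

	// A layout with a colon-less numeric offset parses the same string without error.
	if t, err := time.Parse("2006-01-02T15:04:05.999999-0700", ts); err == nil {
		fmt.Println("colon-less offset:", t.UTC())
	}
}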
I also tried renaming the field in the filebeat.yml configuration:
filebeat.prospectors:
- fields.document_type: doc
  fields_under_root: true
  json.keys_under_root: true
  json.overwrite_keys: true
  json.add_error_key: false
  input_type: log
  paths:
    - /var/log/app-traces/trace-*.log
  processors:
    - rename:
        fields:
          - from: "@timestamp"
            to: "_@timestamp"

output.elasticsearch:
  hosts:
    - 10.200.3.221
    - 10.200.3.220
    - 10.200.3.187
    - 10.200.1.76
    - 10.200.1.251
    - 10.200.1.89
  index: app-traces-%{+yyyy.MM.dd}

setup.template.enabled: true
setup.template.json.enabled: true
setup.template.json.name: app-traces
setup.template.json.path: /usr/share/app-tracer-tools/traces_mapping_template.json
setup.template.name: app-traces
setup.template.pattern: app-traces*
setup.template.fields: /etc/filebeat/fields.yml

processors:
  - rename:
      fields:
        - from: "_@timestamp"
          to: "@timestamp"
Here is an example line from the log files I have to parse:
{"host":"s3-ssl-conn-0.localdomain","service":"sfused","instance":"unconfigured","pid":31737,"trace_type":"op","trace_id":1452107967111228,"span_id":8505715073326365,"parent_span_id":210198511458314,"@timestamp":"2019-03-16T12:20:46.699229+0000","start_time":1552738846699.229,"end_time":1552738846705.233,"duration_ms":6.003906,"op":"service","layer":"workers_arc_sub","error":false,"cancelled":false,"tid":32505}
The problem persists.
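As a sanity check (again just a standalone Go sketch, not anything Filebeat runs, and with the sample line shortened), the event decodes as valid JSON and its @timestamp parses fine once a colon-less offset layout is used, so the data itself seems OK:

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

func main() {
	// A shortened version of the sample log line above.
	line := `{"host":"s3-ssl-conn-0.localdomain","service":"sfused","@timestamp":"2019-03-16T12:20:46.699229+0000","op":"service","error":false}`

	// Decode the event as a plain JSON object.
	var event map[string]interface{}
	if err := json.Unmarshal([]byte(line), &event); err != nil {
		panic(err)
	}

	// The timestamp parses once the layout allows a colon-less numeric offset.
	t, err := time.Parse("2006-01-02T15:04:05.999999-0700", event["@timestamp"].(string))
	fmt.Println(t.UTC(), err)
}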
The only way I have found to make it work is to disable the json.keys_under_root setting, but in that case all the fields are prefixed with "json." (e.g. json.@timestamp).
As I do not control the tools that consume the data indexed in Elasticsearch, I cannot change the names of the ingested fields.
Is there something I can do?