Hi @scottdfedorov,
Elasticsearch will not auto-detect ECS fields based on their names. A proper template (or explicit mapping) should be in place at all times to ensure the correct data types are used.
> I was under the impression from the docs that simply using a field named like fields in ECS, without other explicit mapping, would result in the fields being mapped as defined in ECS automatically.
If you're able to point me to the place that gave you this impression in the ECS docs, I would like to adjust this and clarify that section.
Now the proper way to send Filebeat events via Logstash depends on what you're doing.
1- If you're using Filebeat modules, you should follow the Filebeat documentation for this. The Filebeat modules come with extensive and precise templates that must be used, to ensure the correct data types are in place for all fields, and this is true even if you're sending events via Logstash first.
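For reference, the Filebeat documentation's recommended Logstash pipeline looks roughly like the following (hosts and port are placeholders to adjust for your environment); deriving the index name from the Beats metadata keeps events flowing into indices that the Filebeat-installed template matches:

```conf
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    # Index name built from the shipper's metadata, so the
    # filebeat-* index template still applies to these events.
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
  }
}
```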
2- If you're using Filebeat to tail custom logs and you're doing all of the parsing yourself, you should still (IMO) send this to a Filebeat index with the proper templates in place, as the many metadata fields (e.g. `agent.*`, `host.*`) added by the metadata processors need the proper template to be mapped correctly.
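To make that concrete, an event enriched by the metadata processors carries fields along these lines (a trimmed, illustrative event; the values are made up):

```json
{
  "@timestamp": "2020-03-01T12:00:00.000Z",
  "agent": { "type": "filebeat", "version": "7.6.0" },
  "host": { "name": "web-01", "os": { "platform": "ubuntu" } },
  "message": "..."
}
```

Without the Filebeat template, these fields get dynamically mapped (e.g. as `text` with a `keyword` sub-field) instead of the ECS data types, which breaks aggregations and dashboards that expect the ECS mapping.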
As you've noted, multiple template support is a very good way to specify how your custom fields should be mapped, in addition to the template provided by Filebeat. I think it's the simplest approach, so I'd go that route.
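A minimal sketch of that layering, using the legacy template API (the template name and the custom field here are made-up examples): a second template that matches the same `filebeat-*` pattern and only adds your custom fields, with an `order` higher than the Filebeat template's so your settings win on any conflict.

```json
PUT _template/my-custom-fields
{
  "index_patterns": ["filebeat-*"],
  "order": 10,
  "mappings": {
    "properties": {
      "my_app.response_time_ms": { "type": "float" }
    }
  }
}
```

At index creation time Elasticsearch merges all matching templates, so your custom fields get explicit mappings while everything else still comes from the Filebeat template.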
To loop back on point 2 above: if you're doing custom logs only and you really don't want to use the Filebeat template & index, you may try your hand at starting from the sample ECS template we provide here: https://github.com/elastic/ecs/tree/master/generated/elasticsearch. This will involve a lot of trial and error and ongoing maintenance of that custom template, however, so it should be considered a last resort.