But it is an issue for the ingest pipeline.
Filebeat modules use ingest pipelines in Elasticsearch, so they expect the original message to be sent directly to Elasticsearch, unmodified.
When you add Logstash between Filebeat and Elasticsearch, the original event can change, and this can break the ingest pipeline at multiple points.
For example, in the cloudtrail ingest pipeline, you have this processor in the beginning:
- rename:
    field: "message"
    target_field: "event.original"
If the event arriving in Elasticsearch already has a field named event.original, the pipeline will fail at this processor and the remaining processors will not be executed.
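You can reproduce this with the _simulate API (the document values below are made up for illustration). Because the rename processor fails when the target field already exists, the response reports an error for this document instead of a renamed field:

```
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      { "rename": { "field": "message", "target_field": "event.original" } }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "raw log line",
        "event": { "original": "value already added by Logstash" }
      }
    }
  ]
}
```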
Logstash 8+ adds an event.original field by default, which will break many ingest pipelines, so you need to remove that field as mentioned in the previous answer.
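If you do keep Logstash in the path, a minimal sketch of a filter that drops the field before the output stage could look like this (using Logstash's bracket syntax for the nested field):

```
filter {
  mutate {
    remove_field => ["[event][original]"]
  }
}
```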
That said, why are you using Logstash at all? Since you can't change the original message, Logstash just acts as a proxy to Elasticsearch in this case, and it would be better to send the events directly to Elasticsearch.
Also, if you are just starting to collect logs with Elastic Stack I would recommend that you look into using the Elastic Agent and Fleet.
Filebeat modules are no longer being kept up to date and will probably be deprecated in the future. For example, the CloudTrail ingest pipeline in the Elastic Agent integration will not fail if the event.original field already exists.
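For illustration only (this is not necessarily the exact processor the integration uses), one way a pipeline can make the rename tolerant is to guard it with a condition so it is skipped when the target field is already set:

```
- rename:
    field: "message"
    target_field: "event.original"
    ignore_missing: true
    if: "ctx.event?.original == null"
```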