@stephenb Thanks for your reply.
Yes, I ran Filebeat with `setup` before running it.
On the ES side, in Index Management for the filebeat index, I see the `index_failed` counter increase each time I send an event via Filebeat.
I do not see any parsing errors for the JSON object in the Filebeat event output, and the event itself looks fine.
The filebeat.yml is the default config; I only added the ES Cloud ID and user/pass, and all other settings are untouched.
The thing that seems important is that I changed the module's input section to a TCP listener:
```yaml
type: tcp
host: "localhost:9100"
max_message_size: 10MiB
#framing: rfc6587
tags: {{.tags | tojson}}
publisher_pipeline.disable_host: {{ inList .tags "forwarded" }}
processors:
  - decode_json_fields:
      fields:
        - message
      target: json
{{ if eq .keep_original_message true }}
  - rename:
      fields:
        - from: message
          to: event.original
{{ end }}
  - add_fields:
      target: ''
      fields:
        ecs.version: 1.8.0
```
So I receive the JSON on the TCP listener and push it to ES.
Following the point you mentioned, I found that the event is published from Filebeat correctly, but when it reaches ES it hits "index failed" without any parsing errors from the ingest pipeline.
Note: I tested the pipeline with the sample published event in the ES console (add document) and it worked without any issue; the event was converted successfully with no parsing errors. But when the same event goes through Filebeat, I see the index failed count increase instead of the document count.
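For anyone reproducing this, the way I send a test event is just a newline-terminated JSON line to the listener, which is what the TCP input expects with the default (non-rfc6587) framing. A minimal self-contained sketch; here a throwaway in-process socket on an ephemeral port stands in for the real Filebeat listener on localhost:9100, and the event payload is made up:

```python
import json
import socket
import threading

def send_test_event(host: str, port: int) -> None:
    """Send one newline-delimited JSON event, as the TCP input expects
    with the default (non-rfc6587) framing."""
    event = {"level": "info", "msg": "hello from tcp"}  # made-up sample payload
    with socket.create_connection((host, port)) as sock:
        sock.sendall((json.dumps(event) + "\n").encode("utf-8"))

# Stand-in listener so this sketch is runnable on its own; in practice the
# target is the Filebeat TCP input on localhost:9100 instead.
received = []

def fake_listener(server: socket.socket) -> None:
    conn, _ = server.accept()
    with conn:
        received.append(conn.makefile().readline())

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("localhost", 0))  # ephemeral port, avoids clashing with Filebeat
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=fake_listener, args=(server,))
t.start()
send_test_event("localhost", port)
t.join()
print(received[0].strip())  # the raw line that ends up in Filebeat's `message` field
```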
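For clarity, the console test above amounts to running the event through the `_simulate` ingest API. A small sketch that builds the request body; the pipeline name and inner event are placeholders I made up, substitute your own:

```python
import json

# Assumptions: "filebeat-json-pipeline" stands in for the real pipeline name,
# and `inner` stands in for one of the real events sent over TCP.
pipeline = "filebeat-json-pipeline"
inner = {"level": "info", "msg": "hello from tcp"}

# The TCP input delivers the raw line in `message`; decode_json_fields then
# parses it into the `json` target, as in the config above.
body = {"docs": [{"_source": {"message": json.dumps(inner)}}]}

# Paste this body into the Dev Tools console under:
#   POST _ingest/pipeline/<pipeline-name>/_simulate
print(json.dumps(body, indent=2))
```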