Some logs with more JSON fields are not processed (Filebeat 8.3.3)

Hi there,

I am using ECS Logging for Java with Filebeat 8.3.3 (Get started | ECS Logging Java Reference [1.x] | Elastic) and I have a strange problem where some log entries are not processed by the filestream input of Filebeat. When I deactivate the ndjson configuration, all logs are processed and I do see all logs in Kibana (as raw JSON though). So the issue should be related to the ndjson configuration. When I add the following log line it gets processed correctly:

{"@timestamp":"2022-08-07T11:11:41.644Z", "log.level": "INFO", "message":"Request end: Method:GET URI:/rest/someurl Time:12ms Response:200", "ecs.version": "1.2.0","":"srv-core","event.dataset":"srv-core","":"http-nio-14001-exec-48","log.logger":"com.dualoo.core.config.filter.IncomingRequestFilter","path":"/rest/someurl"}

This one is not processed, and I also can't see any error message:

{"@timestamp":"2022-08-07T10:36:41.644Z", "log.level": "INFO", "message":"Request end: Method:GET URI:/rest/someurlTime:12ms Response:200", "ecs.version": "1.2.0","":"srv-core","event.dataset":"srv-core","":"http-nio-14001-exec-48","log.logger":"com.dualoo.core.config.filter.IncomingRequestFilter","path":"/rest/someurl","ip":"some_ip","user":"some_user_id","tenant":"some_tenant_id"}

So it probably has to do with the extra fields in the log entry. But I couldn't find any hint about what I can do to ingest these log entries as well. I also tried changing the ndjson settings, but this didn't help. Are these extra fields maybe somehow reserved?


- type: filestream
  id: filestream-srv-core
  enabled: true
  paths:
    - /var/log/srv-core.log.json
  parsers:
    - ndjson:
        keys_under_root: true
        overwrite_keys: true
        add_error_key: true
        expand_keys: true

Hi @nobeerhere, welcome to the community!

You are close... It is most likely this setting in your ndjson parser:

  keys_under_root: true

In ECS, the user field is a JSON object with subfields (see the user field in the ECS reference), so when your ndjson parser tries to write a concrete value into the user object, it will fail. In fact, you will see that error in the Filebeat logs: the user field's mapping (i.e. the schema) does not match what you are trying to write.
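To make the conflict concrete, here are the two shapes side by side (values taken from the log line above; the exact ECS subfield, user.id here, is an illustration):

```
{"user": {"id": "some_user_id"}}    what the ECS mapping expects: an object with subfields
{"user": "some_user_id"}            what the application log sends: a plain string
```

Elasticsearch cannot store a string at a path that is mapped as an object, so the second document is rejected.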

This is the danger of writing fields directly to the root of the JSON object.
If you change keys_under_root to false it should work. For a proper solution there are a couple of approaches:

  1. Move your fields so they are not under root (i.e. set keys_under_root: false).

  2. The "ECS way": add an ingest pipeline that renames/sets the conflicting fields to ECS-compliant fields (your user field, for example). The ingest pipeline is executed before the data is written, so if the schema then matches, the write goes through.
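A minimal sketch of such a pipeline (the pipeline name and the target field names are assumptions here; pick whichever ECS fields actually fit your data), created via the Dev Tools console:

```
PUT _ingest/pipeline/srv-core-ecs-rename
{
  "description": "Sketch: rename non-ECS root fields to ECS-compliant paths",
  "processors": [
    { "rename": { "field": "ip",     "target_field": "client.ip",     "ignore_missing": true } },
    { "rename": { "field": "tenant", "target_field": "labels.tenant", "ignore_missing": true } },
    { "rename": { "field": "user",   "target_field": "user.id",       "ignore_missing": true } }
  ]
}
```

You would then reference this pipeline from your Filebeat output (or index settings) so it runs before indexing.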

Hope that helps...

Hey Stephen, it now works as it should. I adjusted the pipeline accordingly and renamed the fields. I actually don't know why I did not see these error logs in the Filebeat logs.

Thanks for the hint and have a good day.



Glad you got it working!

The error log messages can be a bit confusing... they were in there somewhere; just search for "concrete", I think.
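For illustration, here is a self-contained sketch of that search. The sample error line and the temp-file path are assumptions (the wording mimics Elasticsearch's "object vs concrete value" mapping error); on a real host you would grep Filebeat's own log instead, e.g. under /var/log/filebeat/.

```shell
# Write one hypothetical mapping-conflict error line to a temp file
# that stands in for the Filebeat log.
printf '%s\n' 'failed to parse field [user]: tried to parse field [user] as object, but found a concrete value' > /tmp/filebeat_sample.log

# Count matching lines, as you would against the real Filebeat log.
grep -c 'concrete value' /tmp/filebeat_sample.log
# prints: 1
```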

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.