Impossible to overwrite "host.ip" when using Custom Logs integration

Hi,

I'm trying to get the correct IP into the host.ip property of my log documents, to be able to use all the built-in links in Kibana, but unfortunately I cannot get it to work.

The setup is quite simple. I'm running Elasticsearch/Kibana v8 and an Elastic Agent in a Docker container with the Custom Logs integration to read log files. These log files use the following format (truncated):

{
  "timestamp":"2022-02-27T04:41:34.112323Z",
  "host":{
    "name":"app",
    "ip":"192.168.16.6"
  }
}

The custom logs integration has the following custom configuration:

json.keys_under_root: true
json.overwrite_keys: true
json.add_error_key: true
json.message_key: message

With this configuration I expect json.overwrite_keys: true to ensure that the host.ip automatically added by the Elastic Agent is overwritten by the host.ip from the log message, but unfortunately that is not the case: the host.ip in the document is still the IP of the Elastic Agent container, not the value from the log file.

I've also tried using the drop_fields and rename processors to rename host.address to host.ip, but that always fails as well.
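For reference, this is roughly what I tried (a sketch based on the Beats drop_fields/rename processor docs, with my field names plugged in):

```yaml
processors:
  # drop the host.ip added by the agent, if present
  - drop_fields:
      fields: ["host.ip"]
      ignore_missing: true
  # then move host.address from the log line into host.ip
  - rename:
      fields:
        - from: "host.address"
          to: "host.ip"
      ignore_missing: true
      fail_on_error: false
```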

Does anyone know if it is even possible to overwrite host.ip when running Filebeat / the Custom Logs integration? Any tips are greatly appreciated.

I've managed to get it to work using an ingest pipeline, but it feels a bit strange, since I'd expect the normal configuration to work. This approach also requires using host.address in the log file instead of host.ip, which is something I want to avoid.

[
  {
    "remove": {
      "field": "host.ip",
      "ignore_missing": true
    }
  },
  {
    "rename": {
      "field": "host.address",
      "target_field": "host.ip",
      "ignore_missing": true
    }
  }
]
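If it helps anyone, the pipeline can be registered from Dev Tools roughly like this (the pipeline name custom-logs-host-ip is just something I made up), wrapping the processor list above in a processors array:

```json
PUT _ingest/pipeline/custom-logs-host-ip
{
  "description": "Replace agent-provided host.ip with host.address from the log line",
  "processors": [
    { "remove": { "field": "host.ip", "ignore_missing": true } },
    { "rename": { "field": "host.address", "target_field": "host.ip", "ignore_missing": true } }
  ]
}
```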

Hi @Twyzz !

I think this is happening because the Agent's host.ip is added at a later step, most probably by the add_host_metadata processor, which is enabled by default. If that's the case, I think a better experience would be for the processor to add host-related metadata only if it is not already there. Having said this, please feel free to open a GitHub issue for this so the team can look into it.

Thank you!

Hi @ChrsMark

Thanks for the fast reply. I guessed the processor was added automatically (I couldn't find any documentation on this), so I tried to disable it, but it kept adding the metadata.

I'm not entirely sure when the JSON is decoded in Filebeat, but my guess is that it happens either before the processors are executed or via a default decode_json_fields processor. It might be that the processors run in the order decode_json_fields > add_host_metadata, and thus add_host_metadata overwrites the data.

I'll play around with it a bit more to see if maybe a manual configuration of the processors (add_host_metadata > decode_json_fields) might work, and I'll open an issue if I feel something is off with the processors.
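For the record, a manual decode_json_fields processor would look something like this (a sketch based on the processor docs; message is assumed to be the field holding the raw JSON line):

```yaml
processors:
  - decode_json_fields:
      fields: ["message"]
      target: ""            # decode keys at the document root
      overwrite_keys: true  # let values from the log line win
      add_error_key: true
```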

This is the processor config I used earlier to try to disable the host processor:

  - add_host_metadata:
      netinfo.enabled: false

Hi again @ChrsMark

I managed to find a configuration that I missed before while digging through the filebeat.yml configuration in the GitHub repo.

The host information is automatically added unless you tag the document with "forwarded", as seen here:

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded

So the solution is simply to add the add_tags processor, as seen below:

  - add_tags:
      tags: [forwarded]

The host metadata is then no longer added, and the values from the log file are kept as expected.
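So, to summarize, the full custom configuration for the Custom Logs integration ends up looking roughly like this (the json.* options from my original setup, plus the tag that suppresses add_host_metadata):

```yaml
json.keys_under_root: true
json.overwrite_keys: true
json.add_error_key: true
json.message_key: message

processors:
  # "forwarded" makes the default add_host_metadata condition skip this event
  - add_tags:
      tags: [forwarded]
```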

My current guess is that these default processors are executed after all of your own processors, since you cannot modify the information added by add_host_metadata through the rename/drop_fields processors; that is also most likely why json.overwrite_keys has no effect.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.