Filebeat JSON to Elasticsearch, error processing pipeline

Hi,

I'm sending a JSON log file with Filebeat to an Elasticsearch cluster. When I try to parse the IP field with the geoip processor, it shows the following error in the ingestError field:

field [client] not present as part of path [client.ip]

This is an example log line:

{"client.ip":"8.8.8.8","email.from":"user@email.com"}

This is the input defined in Filebeat:

- type: log
  paths:
  - /var/log/myservice.log
  encoding: plain
  ignore_older: 24h
  pipeline: mypipeline
  index: myindex-write
  json:
    add_error_key: true
    keys_under_root: true

And finally, this is the processor in the pipeline:

{
  "geoip": {
    "field": "client.ip",
    "properties": [
      "country_iso_code"
    ],
    "ignore_failure": true
  }
}
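
For reference, the error can be reproduced outside Filebeat with the _simulate API. A minimal sketch (I've dropped ignore_failure here so the failure actually shows up in the response):

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "geoip": {
          "field": "client.ip",
          "properties": [
            "country_iso_code"
          ]
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "client.ip": "8.8.8.8",
        "email.from": "user@email.com"
      }
    }
  ]
}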

It's strange, because the same configuration works with regular logs but not with raw JSON logs. With a regular log, if I grok the field "client.ip" in the same pipeline, it works fine. Maybe it's something related to the way Filebeat sends the JSON message?

Does somebody know a way to parse the client.ip field in an ingest pipeline when the event comes in as JSON?

Perhaps take a look at this:

https://www.elastic.co/guide/en/elasticsearch/reference/current/dot-expand-processor.html

"client.ip" is not a "valid" name / json construction with respect to elasticsearch.

{"client.ip":"8.8.8.8","email.from":"user@email.com"}

Valid JSON (from Elasticsearch's point of view) should look like this:

{"client" : {"ip":"8.8.8.8"} ,"email" : {"from":"user@email.com"}}

The grok in your regular logs is creating the correct nested JSON.
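
For illustration, an ingest grok processor like this (a made-up pattern, not your actual one) writes to the nested path:

{
  "grok": {
    "field": "message",
    "patterns": ["%{IP:client.ip} %{GREEDYDATA:rest}"]
  }
}

As far as I know, the dotted capture name client.ip is treated as a path here, so grok builds {"client": {"ip": "8.8.8.8"}} for you, which is why the geoip lookup on client.ip succeeds in that pipeline.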

So you might need to use the dot_expander processor I referenced above, placed before your geoip processor.
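
A minimal sketch of what that could look like, assuming your pipeline really is named mypipeline as in your Filebeat input (each dotted field you need would get its own dot_expander):

PUT _ingest/pipeline/mypipeline
{
  "processors": [
    {
      "dot_expander": {
        "field": "client.ip"
      }
    },
    {
      "geoip": {
        "field": "client.ip",
        "properties": [
          "country_iso_code"
        ],
        "ignore_failure": true
      }
    }
  ]
}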

EDIT: This can seem a bit confusing, because once the document contains valid nested JSON you can reference the field as client.ip, but that dotted notation is not the correct way to create it from a JSON document.

That's exactly what I need. Thanks a lot for the explanation :smiley:
