Cannot index event when using output pipeline definition

Hi,

I am using Filebeat/Elasticsearch/Kibana 7.10.0.
My Filebeat runs on Kubernetes.

When I activate my ingest pipeline in my Filebeat output config, indexing fails with errors on the client side. But if I test an event against the pipeline definition in Kibana, it is processed as expected.

output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${FILEBEAT_USERNAME}
  password: ${FILEBEAT_PASSWORD}
  pipelines:
    - pipeline: gloo
      when.equals:
        kubernetes.namespace: gloo

The Error:

Cannot index event publisher.Event{...}..."caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:874"}
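For reference, the test in Kibana runs one event through the simulate API, roughly like this (the sample document below is a made-up placeholder, not one of my real events):

POST _ingest/pipeline/gloo/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "{\"logger\":\"gateway\",\"level\":\"info\"}",
        "kubernetes": {
          "namespace": "gloo"
        }
      }
    }
  ]
}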

Hey @subsonnic, welcome to discuss :slight_smile:

If the pipeline works but the event cannot be indexed, there may be a problem with the mapping. The pipeline may be trying to store some value in a field with an incompatible datatype. For example, it could be that the event includes an object in a field that is expected to be a string.
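One way to check is to look at how the field is currently mapped, for example (assuming the usual filebeat-* index pattern):

GET filebeat-*/_mapping/field/message

If message comes back mapped as text (the Filebeat default), the pipeline cannot store a parsed object in it, which would match the START_OBJECT error above.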

Does the error show the specific field producing this failure?

Not directly. But what I am trying to do is replace the message field with the structured fields. Maybe that is the problem: there is no separate index just for gloo, and other event logs that are not processed store the message value as it is (a plain Filebeat string), so I cannot change the mapping type for the field.

[
  {
    "json": {
      "field": "message",
      "if": "ctx.kubernetes.namespace == \"gloo\""
    }
  }
]

I changed it to

{
  "json": {
    "field": "message",
    "target_field": "gloo",
    "if": "ctx.kubernetes.namespace==\"gloo\""
  }
}
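As a side note: if any event reaches this pipeline without kubernetes metadata, the condition itself fails with a null pointer error. A null-safe variant of the same processor (just a sketch) would be:

{
  "json": {
    "field": "message",
    "target_field": "gloo",
    "if": "ctx.kubernetes?.namespace == 'gloo'"
  }
}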

After the change, I see another error:

{"type":"illegal_argument_exception","reason":"com.fasterxml.jackson.core.JsonParseException: Unexpected character ('.' (code 46)): Expected space separating root-level values...}

But the test run in the pipeline simulator still works.

This seems to indicate that there is something that doesn't parse as JSON.

Is it possible that some of the log lines of your gloo service are not JSON? Or that Filebeat is splitting some of its JSON across multiple lines?

Do you have an example of a document that works in the pipeline simulator but doesn't seem to work when sent by Filebeat?
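If it's hard to catch such a document, one option (a sketch using the json processor's standard on_failure handling; the error field name is made up) is to tag failing events instead of rejecting them:

{
  "json": {
    "field": "message",
    "target_field": "gloo",
    "if": "ctx.kubernetes?.namespace == 'gloo'",
    "on_failure": [
      {
        "set": {
          "field": "gloo_parse_error",
          "value": "{{ _ingest.on_failure_message }}"
        }
      }
    ]
  }
}

Documents that fail to parse would then still be indexed with the raw message plus the error, so you can find them with a simple search.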

You can also try to do this JSON parsing on the Filebeat side, using the decode_json_fields processor.
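A minimal sketch of that approach in the Filebeat configuration, mirroring the condition and target field from your ingest pipeline:

processors:
  # Parse the JSON in "message" into the "gloo" object, but only for
  # events coming from the gloo namespace.
  - decode_json_fields:
      when.equals:
        kubernetes.namespace: gloo
      fields: ["message"]
      target: "gloo"
      max_depth: 1
      add_error_key: true  # mark events whose message is not valid JSON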

Ah, thanks. That was indeed possible. Now it is working, but only for a subset of pods inside the namespace; sometimes events are logged by a 3rd-party pod not maintained by gloo itself. I will build more separate and more specific pipelines.
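For example, such a split could be expressed as a small routing pipeline that delegates to per-service pipelines with the pipeline processor (the pipeline names and the container name below are illustrative, not from my actual setup):

{
  "description": "Route gloo namespace logs to per-service pipelines",
  "processors": [
    {
      "pipeline": {
        "name": "gloo-gateway",
        "if": "ctx.kubernetes?.container?.name == 'gateway'"
      }
    },
    {
      "pipeline": {
        "name": "gloo-thirdparty",
        "if": "ctx.kubernetes?.container?.name != 'gateway'"
      }
    }
  ]
}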
