I am using filebeat/elasticsearch/kibana 7.10.0.
My filebeat runs on Kubernetes.
When I activate my ingest pipeline in my filebeat output config, it runs into errors on the client side. But if I test a single event against the pipeline definition in Kibana, it is processed as expected.
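For context, I enable the pipeline in the Elasticsearch output section of my filebeat.yml roughly like this (the host and pipeline name are just examples, not my exact values):

```yaml
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  # Route every event through the ingest pipeline before it is indexed
  pipeline: "gloo-logs"
```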
If the pipeline works but the event cannot be indexed, there may be a problem with the mapping. The pipeline may be trying to store a value in a field with an incompatible datatype. For example, the processed event could contain an object in a field that the mapping expects to be a string.
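You can reproduce that kind of conflict directly in the Kibana Dev Tools console; the index and field names below are only an example:

```json
PUT conflict-demo
{
  "mappings": {
    "properties": {
      "message": { "type": "text" }
    }
  }
}

# Indexing an object into the text field is rejected with a mapper_parsing_exception
POST conflict-demo/_doc
{
  "message": { "level": "info", "msg": "structured payload" }
}
```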
Does the error show the specific field producing this failure?
Not directly. But what I am trying to do is replace the message field with the structured fields. Maybe that is the problem: it is not a separate index only for gloo, and there are other event logs in it that are not processed and store the message value as it is (a plain string from filebeat), so I cannot change the mapping type for that field.
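To make the conflict concrete, a pipeline that replaces message with the parsed object would look roughly like this (the json processor and pipeline name here are only an illustration, not my exact definition):

```json
PUT _ingest/pipeline/gloo-logs
{
  "description": "Replace the plain message string with the parsed JSON object",
  "processors": [
    {
      "json": {
        "field": "message",
        "target_field": "message"
      }
    }
  ]
}
```

With the other, unprocessed logs keeping message as a string in the same index, this is exactly the object-vs-string situation described above.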
Ah, thanks. That really was the problem. Now it is working for a subset of pods inside the namespace. Sometimes events are logged by a 3rd-party pod that is not maintained by gloo itself. I will build more separate and more specific pipelines.
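One way I will probably scope this is conditional pipeline selection in the filebeat output, so only the gloo pods' logs go through the pipeline (the label name and values are only an example):

```yaml
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  pipelines:
    # Only events from pods labelled as gloo go through the gloo pipeline
    - pipeline: "gloo-logs"
      when.equals:
        kubernetes.labels.app: "gloo"
    # Events that match no rule are indexed without a pipeline
```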