Please also share the complete custom ingest pipeline, not just the grok pattern.
Also, since you are using an integration, the default pipeline runs first. Are you sure that error message is not coming from the default pipeline rather than your custom one?
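You can check which pipelines are attached to the backing index from Dev Tools. This is just a sketch; the data stream name below (`logs-haproxy.log-default`) is an assumption based on the usual integration naming, so substitute your own:

```
GET logs-haproxy.log-default/_settings?filter_path=*.settings.index.default_pipeline,*.settings.index.final_pipeline
```

That shows the `default_pipeline` and `final_pipeline` settings, so you can see exactly which pipeline names run on ingest.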
Did you check whether the `event.original` field is still present in the document?
Are you sure your pipeline is not working? Are the expected fields there?
You can add a simple set processor to your custom pipeline to confirm that it actually runs.
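A minimal example of that debug trick: a set processor that writes a marker field. The field name `pipeline_debug` and its value are just placeholders I made up; use whatever you like:

```
PUT _ingest/pipeline/my-custom-pipeline
{
  "processors": [
    {
      "set": {
        "field": "pipeline_debug",
        "value": "custom pipeline ran"
      }
    }
  ]
}
```

If new documents show up in Discover with `pipeline_debug` set, your custom pipeline is being executed; if not, the problem is upstream of it.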
Screenshots are very hard to debug; actual text results are much better.
You can also go to Kibana > Stack Management > Ingest Pipelines, pick your custom pipeline, edit it, and test it against an existing document with the pipeline tester. You will need the index name and `_id`, which you can get from Discover.
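If you prefer Dev Tools over the Kibana UI, you can do the equivalent with the simulate API. Note that `_simulate` needs the document `_source` inline rather than fetching it by `_id` (the UI does the fetching for you); the pipeline name and sample message below are assumptions, so paste in your own:

```
POST _ingest/pipeline/my-custom-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "<paste a raw log line from Discover here>"
      }
    }
  ]
}
```

The response shows the document after each processor runs, which makes grok failures easy to spot.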
What was the issue and the resolution? Explaining what the problem was and how you solved it can help other users, as a good member of the community.
Someone had configured the log format incorrectly compared with the original haproxy.cfg. I was able to promptly back up haproxy.cfg and reconfigure it so the log fields match what Elasticsearch expects.