Hey there, I configured Fleet in the console, installed/enrolled an agent on my server, and configured an integration. Moments later I got logs and Kibana was happy.
I'm an idiot and accidentally deleted the ingest pipeline in Kibana, and the logs are now going unstructured into logs-elastic_agent.filebeat.
I read that Fleet automatically installs the ingest pipelines when it first deploys a module to an agent, so I removed and re-added the agent; however, I don't think it is re-installing the pipeline.
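For reference, a quick way to check whether the Fleet-installed pipelines actually came back after re-enrolling (a minimal sketch; host and credentials are placeholders for my setup):

```python
import requests

ES = "http://localhost:9200"               # placeholder: your cluster URL
AUTH = ("elastic", "changeme")             # placeholder credentials

# GET /_ingest/pipeline returns a dict of pipeline name -> definition
resp = requests.get(f"{ES}/_ingest/pipeline", auth=AUTH)
resp.raise_for_status()

# Fleet-managed pipelines follow the pattern logs-<package>.<dataset>-<version>
for name in sorted(resp.json()):
    if name.startswith("logs-"):
        print(name)
```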
I looked for the pipeline JSON, but in the official Elastic GitHub repositories I can only find the YAML versions.
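One workaround sketch, assuming you can fetch the module's YAML pipeline definition from the GitHub repo: load the YAML and PUT it back as JSON via the ingest pipeline API (the file name and pipeline name below are illustrative, not the real ones):

```python
import requests
import yaml                                # pip install pyyaml

ES = "http://localhost:9200"               # placeholder: your cluster URL
AUTH = ("elastic", "changeme")             # placeholder credentials
PIPELINE = "logs-panw.panos-1.0.0"         # hypothetical: use the exact name you deleted

# default.yml is the pipeline definition downloaded from the GitHub repo
with open("default.yml") as f:
    body = yaml.safe_load(f)               # YAML -> Python dict

# requests serializes the dict to JSON, which is what the API expects
resp = requests.put(f"{ES}/_ingest/pipeline/{PIPELINE}", json=body, auth=AUTH)
print(resp.status_code, resp.json())
```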
Hmmmm... Something besides the obvious seems a bit wrong; it should be failing with a missing-pipeline error, I would think (unless we added some safety logic for that).
What version are you on?
Do you have the exact name of the pipeline that you deleted?
You could try installing the agent on another box?
I updated your subject line; perhaps an agent specialist will chime in.
Fleet probably thinks that integration is correctly installed...
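If that's the case, one thing worth trying (hedged: this is from my recollection of the 7.x Fleet package API, so verify the endpoint against your Kibana version) is to force-reinstall the package so its assets, ingest pipelines included, get recreated:

```python
import requests

KIBANA = "http://localhost:5601"           # placeholder: your Kibana URL
AUTH = ("elastic", "changeme")             # placeholder credentials
PKG = "panw-1.1.1"                         # hypothetical <package>-<version> key

# Force-reinstalling the package should recreate its assets even if
# Fleet believes the package is already installed.
resp = requests.post(
    f"{KIBANA}/api/fleet/epm/packages/{PKG}",
    json={"force": True},
    headers={"kbn-xsrf": "true"},          # Kibana APIs require this header
    auth=AUTH,
)
print(resp.status_code, resp.text)
```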
I found this in /opt/Elastic/Agent/data/elastic-agent-1428d5/logs/default/filebeat-json.log; I am unsure if it is relevant.
{"log.level":"debug","@timestamp":"2021-08-07T08:27:16.214+1000","log.logger":"processors","log.origin":{"file.name":"processing/processors.go","file.line":128},"message":"Fail to apply processor client{add_index_pattern=logs-panw.panos-default
Also, that does NOT look like a good error... Did you touch any of the agent-side ingest/processors? Although, OTOH, that is probably because the index pattern is already there... so maybe not so bad... hard to tell.
Hi @stephenb, thanks for your help today. I upgraded the cluster to 7.14.0, reinstalled everything, and it IS getting data into Elasticsearch. There is now a Grok error, but I'll raise a new ticket for that.
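For the follow-up Grok issue, a minimal sketch of how to reproduce it with the simulate API (pipeline name and sample document are placeholders):

```python
import requests

ES = "http://localhost:9200"               # placeholder: your cluster URL
AUTH = ("elastic", "changeme")             # placeholder credentials
PIPELINE = "logs-panw.panos-1.0.0"         # hypothetical pipeline name

# POST /_ingest/pipeline/<id>/_simulate runs the pipeline against sample
# docs and reports per-processor failures, which makes Grok errors easy
# to pin down.
resp = requests.post(
    f"{ES}/_ingest/pipeline/{PIPELINE}/_simulate",
    json={"docs": [{"_source": {"message": "<paste a raw log line here>"}}]},
    auth=AUTH,
)
print(resp.json())
```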