I've set up the Elastic Stack (7.8.0, now 7.8.1) in Kubernetes, with Filebeat on my nodes. Every time an index rolls over I get a lot of broken stuff, and every time I try to customize anything it breaks... It's so much that I suspect I'm doing something totally wrong...
How do I set up Filebeat from scratch? Do I really have to un-provision Filebeat from all my nodes (a mix of VMs in ESXi, Raspberry Pis, syslog receivers and such) so they stop sending logs, just so I can run filebeat setup against a "clean" Elastic system? Otherwise some node always creates a broken "default" index as soon as it sends data (for example, while I'm trying to delete all indices).
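For reference, this is roughly the clean-slate sequence I mean: stop all shippers first, then run setup once from a single machine. A sketch only; the hosts here (elastic.example, kibana.example) are placeholders for your own endpoints.

```shell
# Run once, from one admin machine, while no Filebeat is shipping,
# so nothing can auto-create a default index mid-setup.
# Load the index template, ILM policy and write alias:
filebeat setup --index-management \
  -E 'output.elasticsearch.hosts=["http://elastic.example:9200"]'

# Load the Kibana dashboards (needs Kibana reachable):
filebeat setup --dashboards \
  -E 'setup.kibana.host="http://kibana.example:5601"'
```

Only after both commands succeed would the Filebeat services on the nodes be started again.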
Every time an index rolls over to the "warm" phase, my visualizations start complaining that text fields are not indexable (or something like that), on 2 shards out of 3.
When I upgraded Filebeat/ES from 7.8.0 to 7.8.1, I started getting errors like "[esaggs] > "field" is a required parameter" and "Saved "field" parameter is now invalid". Also, the index lifecycle policy doesn't seem to be applied to new indices...
What is it that is totally wrong in my process?
1. Install Elastic with the Helm chart on k3s, exposing ports with a LoadBalancer
2. Roll out Filebeat services with Ansible on all my nodes
3. At some point, run filebeat setup on the nodes (as you can tell, I can't control WHEN this runs, because Filebeat is always running on some node here or there)
4. Adjust the Filebeat ILM policy (2 days until the warm phase)
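For step 4, the ILM change I'm making is essentially this (a sketch, assuming the default policy name "filebeat" and the default hot-phase rollover settings; I'm only adding the warm phase at 2 days):

```
PUT _ilm/policy/filebeat
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb", "max_age": "30d" }
        }
      },
      "warm": {
        "min_age": "2d",
        "actions": {}
      }
    }
  }
}
```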
Again: it works if I start from scratch (shut down all Filebeats, remove all indices, run filebeat setup)... until the index rolls over and a new one is created. The newly created index doesn't seem to have the same settings as the old one.
Yes, I mean ILM. Sorry, the previous errors are gone because of experimenting and removing/adding indices. The new errors don't show up in any logs; instead I only get the Kibana "popup error" on the "[Filebeat System] SSH login attempts ECS" dashboard, which only says:
Saved "field" parameter is now invalid. Please select a new field.
I suspect this has something to do with Filebeat starting to send logs before I have run "filebeat setup". But I don't know how to avoid this without removing all my Filebeat nodes, and it will happen again every time Filebeat is upgraded (because the upgrade starts a new filebeat-7.8.x... index).
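One thing I'm considering (a sketch, assuming Filebeat 7.x config options; verify against the docs for your version): stop the shipping-only nodes from loading templates at all, and keep setup as a one-off admin task.

```yaml
# filebeat.yml on shipping-only nodes
# Don't (re)load the index template when connecting to Elasticsearch;
# `filebeat setup` is run once from a single admin machine instead.
setup.template.enabled: false

# Skip the per-node check that the ILM policy/alias exists
# (also avoids needing those privileges on every node):
setup.ilm.check_exists: false
```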
I had to remove all Elastic data and recreate the whole stack to get rid of the broken "template_1", which apparently stopped all the other templates from being applied. Now it is working!
No, it didn't work. The only places the template ever showed up were in _cat/templates and in the error message. Trying to delete it gave an error. The Elasticsearch storage must have gotten corrupted somehow (not surprising, as I don't have the most reliable storage backend).
Just a 404 Not Found, I think. Impossible to delete, but it still existed! There were no errors in the Elasticsearch logs. The issue is resolved now, though, and ES, Kibana and Filebeat are all working beautifully.
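For anyone hitting the same thing: these are the requests I was using to inspect and (try to) delete it, via the legacy template API in 7.x:

```
GET _cat/templates?v
GET _template/template_1
DELETE _template/template_1
```

In my case the GET and DELETE both returned 404 even though _cat/templates still listed it, which is why I ended up wiping the data instead.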