I moved from ELK v6.8.16 to ELK v7.13.2 and found the following issues:
- In v6.8.16, when I removed a namespace, Kibana showed the corresponding event, but this does not happen in v7.13.2.
- Navigation in the Kibana web UI is faster in v7.13.2 than in v6.8.16, but searches take longer in v7.13.2.
I'm a newbie to the ELK stack. Please advise. Thanks a lot.
Welcome to our community!
It's not clear to me what you mean in your first point. Can you elaborate a little more?
As for your second point, how are you measuring this?
Thanks for your reply, warkolm.
Point 1: I issued the command "kubectl delete -f ns ibsl", and the event was recorded, sent to, and displayed on Elasticsearch/Kibana v6.8.16.
But I couldn't see the event in v7.13.2.
For point 2, both ELK stacks were cloned from the same VM instance, so they have the same hardware spec: 8-core CPU, 24 GB RAM, Ubuntu 16.04. When I click "Discover" and then "Refresh" in Kibana, ELK stack v6.8.16 takes 1 second to show the results of the last 15 minutes, while ELK stack v7.13.2 takes 3 seconds to do the same thing.
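To take the Kibana UI out of the comparison, the same 15-minute query can be sent straight to each cluster and the server-side latency read from the `took` field (milliseconds) in the `_search` response. This is a sketch; the host `localhost:9200` and the index pattern `filebeat-*` are assumptions and need a running cluster:

```shell
# Query the last 15 minutes directly against Elasticsearch.
# "took" in the response is the server-side query time in ms,
# independent of any Kibana rendering overhead.
curl -s "localhost:9200/filebeat-*/_search" \
  -H 'Content-Type: application/json' \
  -d '{
    "query": {
      "range": { "@timestamp": { "gte": "now-15m" } }
    }
  }' | grep '"took"'
```

Running this against both clusters separates Elasticsearch query time from Kibana page-load time, which answers "how are you measuring this?" more precisely than a stopwatch on the Discover page.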
Just to be clear, you saw it in 6.8, but after upgrading to 7.13 it's not showing?
Does the index it was in still exist?
No, I have both stacks up and running. Both are new installations, with no migration. I found this when I compared the two. Thanks.
Ok, well if you didn't copy the indices over, or send the existing data to the new cluster, then that's not surprising.
But I tested it separately:
k8s cluster 1 is sending logs to v6.8
k8s cluster 2 is sending logs to v7.13
That was a new kubectl command I issued.
The event was logged on v6.8 but not on v7.13.
I'm wondering if the event was filtered out by Logstash, Filebeat, or some other component. Is there any way I can configure it to accept all logs from k8s?
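One place to check for such filtering: if Filebeat with Kubernetes autodiscover is in the shipping pipeline, its config may contain `drop_event` or `exclude_lines` processors that silently discard lines. A minimal sketch of a config that forwards everything (the output host and the `hints.enabled` choice are assumptions about a typical setup, not the poster's actual config):

```
# filebeat.yml -- minimal Kubernetes autodiscover sketch.
# No drop_event / exclude_lines processors are defined,
# so nothing collected from the pods is filtered out here.
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
```

Comparing the processors sections of the v6.8 and v7.13 shipper configs side by side would show whether the missing event is being dropped before it ever reaches Elasticsearch.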
And you issued that command on both Kubernetes clusters? Did you confirm that the second cluster actually created that event in its logging output?
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.