How to stop and remove Filebeat from Kubernetes (EKS)?

Hello,
I installed Filebeat from this link:

and it's working.
Now I'd like to remove it from the EKS cluster.
How can I stop it and remove it?
I couldn't find anything about this in the documentation.
Thanks

Did any of the answers here help - https://www.reddit.com/r/elasticsearch/comments/kpneeb/how_to_uninstall_elk_filebeat_from_kubernetes/?

Let's see.

It's been a while, but take a look at this... use at your own risk, and do NOT try it on production first!

# Cleanup
### List the filebeat pods
kubectl get pods -n kube-system --no-headers=true | awk '/filebeat/{print $1}'
### CAREFUL, THE FOLLOWING DELETES
# Delete the filebeat pods
kubectl get pods -n kube-system --no-headers=true | awk '/filebeat/{print $1}' | xargs kubectl delete -n kube-system pod
# Delete the DaemonSet, its config, the RBAC objects and the service account
kubectl --namespace=kube-system delete ds/filebeat
kubectl --namespace=kube-system delete configmap/filebeat-config
kubectl --namespace=kube-system delete clusterrolebinding.rbac.authorization.k8s.io/filebeat
kubectl --namespace=kube-system delete clusterrole.rbac.authorization.k8s.io/filebeat
kubectl --namespace=kube-system delete serviceaccount/filebeat

# Delete the setup job and its config, if present
kubectl --namespace=kube-system delete configmap/filebeat-setup-config
kubectl --namespace=kube-system delete job/filebeat-setup
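
If you want to double-check that everything is gone, something like the following should come back empty (it just greps for "filebeat", so it's only a rough check):
### Verify nothing filebeat-related is left
kubectl get ds,pods,configmap,serviceaccount -n kube-system | grep filebeat
kubectl get clusterrole,clusterrolebinding | grep filebeat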

In fact the solution is much simpler if you used a yml file to install it. You just do:
kubectl delete -f filebeat.yml
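If you want to preview what that would remove before actually deleting anything, a dry run should work on a reasonably recent kubectl:
# Show what would be deleted without actually deleting anything
kubectl delete -f filebeat.yml --dry-run=client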
Thanks!

Does that clear out all the configmaps etc?

It clears everything, and then I can install it again.
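For completeness, reinstalling afterwards is just the apply side of the same manifest (the filename is whatever you installed from):
# Reinstall filebeat from the same manifest after the cleanup
kubectl apply -f filebeat.yml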

@stephenb
You know what, I'm not sure, because ever since I used the delete I keep getting:

2021-01-05T10:52:03.776Z        INFO    instance/beat.go:392    filebeat stopped.
2021-01-05T10:52:03.776Z        ERROR   instance/beat.go:956    Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).

That means it is not completely cleaned up... Try my steps and see what happens.
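That lock error usually points at a leftover data directory from a previous filebeat instance. As a rough check on a node — assuming the hostPath data directory from the Elastic reference manifest, /var/lib/filebeat-data; adjust to whatever your manifest actually mounts:
# Look for a stale lock file in filebeat's data path on the node
ls -l /var/lib/filebeat-data/
# If filebeat.lock is there and no filebeat pod is running on this node, removing it clears the error
sudo rm /var/lib/filebeat-data/filebeat.lock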

This command gives me nothing, so I guess it did erase everything.
But installing it in a complex system is not straightforward and has a lot of problems;
I'm still struggling to make it work.

That command would have just listed the filebeat pods that are present.
If nothing returns, then there are no filebeat pods.
It looks like the data path is not cleaned up. Did you put the data on a persistent disk?
I have not looked at this in a while; apologies for not being able to directly answer.
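If the DaemonSet is still installed, one way to see where its data actually lives is to look at the volumes it mounts (assuming it is named filebeat and runs in kube-system):
# Print the volumes (including any hostPath data directory) used by the filebeat DaemonSet
kubectl -n kube-system get ds filebeat -o jsonpath='{.spec.template.spec.volumes}'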

@stephenb
I checked, and they are written to the disk on the hosting nodes under /var/containers/*.
So you suggest to:
kubectl delete -f filebeat.yml
and then SSH to the nodes and rm -rf everything under /var/containers/* ?

But I noticed that when I delete filebeat, the filebeat log is also deleted from the node disk,
and when filebeat is created again the log is created again, so I don't think that's the problem.

@stephenb
This is what helped in the end; see my last reply.

I would be careful with that, as you may delete other containers' data; I would only delete the files related to filebeat. If/when I get a chance I will check, but I did not have this problem on a vanilla GKE (Google) environment.
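For illustration only, a more targeted cleanup would touch just filebeat's own state directory rather than the whole container log area (the path below assumes the default hostPath from the reference manifest — verify on your nodes before removing anything):
# Remove only filebeat's registry/data directory, not other containers' files
sudo rm -rf /var/lib/filebeat-data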

Thanks for the update.

I didn't do this... (I almost did).
The link I sent was the solution.
By the way...
Is there a way to just restart Elasticsearch and all its processes (beats) without uninstalling?
Just doing a simple restart in EKS?
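A restart is normally possible without uninstalling anything. For example, assuming the filebeat DaemonSet lives in kube-system and carries the labels from the reference manifest (adjust names/labels if yours differ):
# Roll the filebeat pods without touching the install (kubectl 1.15+)
kubectl -n kube-system rollout restart daemonset/filebeat
# Or delete the pods and let the DaemonSet recreate them
kubectl -n kube-system delete pods -l k8s-app=filebeat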

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.