I'd like to know how to wipe all data from an ELK stack (6.4.0) I'm running on Ubuntu Server 18.04, without tossing everything and starting over from scratch. Which files are involved? In particular, when I relaunch Kibana, I don't want to see any vestiges of what was in there previously.
Make sure you're not sending new data into the cluster first. Then, you can do it either by issuing a call to the API:
curl -XDELETE localhost:9200/_all
Or by stopping the elasticsearch service and then deleting everything under its data directory.
If you're running Security with X-Pack, this will render your cluster inaccessible, since you're deleting the security index along with everything else.
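A minimal sketch of the file-deletion approach, assuming the default Debian/Ubuntu package layout (`/var/lib/elasticsearch` is the default `path.data` for the apt-installed package; verify yours in `/etc/elasticsearch/elasticsearch.yml` before deleting anything):

```shell
# Stop Elasticsearch so nothing is writing to the data files.
sudo systemctl stop elasticsearch

# Remove the on-disk index data. The nodes/*/indices layout is the
# default for 6.x; adjust if your path.data points elsewhere.
sudo rm -rf /var/lib/elasticsearch/nodes/*/indices/*

# Bring the cluster back up empty.
sudo systemctl start elasticsearch
```

If you prefer the API route instead, the `curl -XDELETE localhost:9200/_all` call above does the same job without stopping the service.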
Great, this sounds easily done: shut down Filebeat, then use the curl command. I'll take this opportunity to play with Elasticsearch along the way, finding its data directory and so on, since part of this trip is to get my arms all the way around all of this. (Oh, and I'm not using security or X-Pack features yet.)
Ah, hence "indices": Elasticsearch's data lives under /var/lib/elasticsearch/nodes, and each node directory contains a subdirectory named indices. Now I understand the first answer.
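To confirm the wipe took, you can ask the cluster what indices remain (assuming Elasticsearch is back up on the default HTTP port):

```shell
# List all remaining indices with headers; after a full wipe this
# should come back empty. Note that Kibana recreates its .kibana
# index automatically the next time it starts.
curl 'localhost:9200/_cat/indices?v'
```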