I am using a filebeat-* index with some fields in Elasticsearch. I want to remove all the data from Elasticsearch while keeping the index itself — that is, delete the documents without deleting the index name or its available fields.
Some people say that
curl -XDELETE 0:9200/filebeat-*
could be useful, but I suppose that would delete the index entirely. How can I solve this problem?
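(For reference: deleting only the documents while keeping the indices, their mappings, and settings is possible with Elasticsearch's `_delete_by_query` API. A sketch, assuming ES is reachable at `localhost:9200` — adjust the host and index pattern for your cluster:)

```shell
# Delete every document matching filebeat-* while leaving the
# indices themselves (names, mappings, settings) untouched.
# Assumes Elasticsearch is listening on localhost:9200.
curl -XPOST "localhost:9200/filebeat-*/_delete_by_query?conflicts=proceed" \
  -H 'Content-Type: application/json' \
  -d '{ "query": { "match_all": {} } }'
```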
Actually, I am using Zeek to extract files from PCAPs offline. The extracted data is visualized in Kibana on top of Elasticsearch. I have one index named 'filebeat'.
When I make content-related changes in my own Zeek setup, I cannot see those changes in Elastic, because the old data was already ingested and visualized. That's why I want to completely delete the extracted data in Elastic. I want to delete all the data without deleting the index name, so that filebeat has no problem after the data is deleted. @warkolm
@SFD13 you can safely delete the index if you are re-ingesting the whole dump of data into ES again. When Filebeat pushes the logs, ES will check whether the index is present, and if not, it will create a new index for you.
The only issue would be if your application relies on a specific index name, in which case the index name can be set in the Filebeat configuration.
PS: there is no point in deleting all the data while keeping the index. ES only creates an index when the very first document/event/log for that index is ingested and stored.
Deleting an index and restarting Filebeat are two separate things. Deleting an index in Elasticsearch has no impact whatsoever on Filebeat; the only thing affected is your logs, which would be deleted and no longer available in Kibana or any other client.
When you start Filebeat, it runs normally: reading the logs, enriching them with your processors (if any), and then sending them to Elasticsearch. By default, Filebeat sets the value of ctx._index, which tells ES in which index a particular document/event/log must be stored. Note that Filebeat itself does not store the log or document in the index; it is Elasticsearch that writes and stores it.
So essentially, when you delete the index, it is removed from ES. When you spin up Filebeat again, it will send logs to ES to be stored in the ctx._index index. Elasticsearch checks whether the index specified in the ctx._index field exists: if it does, the document is added to that index; if not, ES creates a new index with that name (the value of ctx._index) and stores the document in it.
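The delete-then-recreate flow described above can be observed from the command line. A sketch, assuming ES on `localhost:9200` and Filebeat running as a systemd service (the service name may differ on your system):

```shell
# Remove the existing filebeat indices entirely.
curl -XDELETE "localhost:9200/filebeat-*"

# Restart Filebeat so it begins shipping logs again.
sudo systemctl restart filebeat

# After a moment, the index reappears: Elasticsearch auto-creates it
# on the first document that Filebeat sends.
curl "localhost:9200/_cat/indices/filebeat-*?v"
```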
sudo filebeat setup essentially configures the index template (index settings and property mappings), ILM policy, etc. in ES, which are used when a new index is created from that template. It runs against ES when Filebeat is started, but it won't change anything if the templates and policies themselves have not been modified. This command also doesn't affect the reading and sending of logs, although it does define how documents are stored and how indices are managed.
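To verify what `filebeat setup` has installed, you can inspect the templates and ILM policies in the cluster. A sketch, assuming ES on `localhost:9200`; exact template names vary with the Filebeat version:

```shell
# List index templates whose names start with "filebeat"
# (these carry the index settings and property mappings).
curl "localhost:9200/_cat/templates/filebeat*?v"

# List all ILM policies; the default Filebeat policy is among them.
curl "localhost:9200/_ilm/policy?pretty"
```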