We are preparing a pipeline responsible for capturing the 'current state' of the data in our source table.
Suppose the source table contains 10 records. We would like to reflect those 10 records in Elasticsearch, using Logstash to pull them every 10 minutes. During the day the number of records might change, for example when someone is deleted. Once Logstash runs and pulls the remaining 9 records, we would like that reflected in Elasticsearch as an index with 9 documents. We don't need the old documents, since we only want the 'current state'. We've been thinking about a mechanism that truncates/deletes the index before the new data is pushed, but I'm not sure how to achieve that using only Logstash and Elasticsearch while making sure the data is always present in the index.
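For context, this is roughly the pipeline we have so far (a minimal sketch; the table name, column names, and index name are placeholders, and connection settings are omitted). It uses the Logstash JDBC input on a schedule and the Elasticsearch output with `document_id` set to the primary key, so re-pulled rows overwrite existing documents instead of duplicating them:

```conf
input {
  jdbc {
    # jdbc_connection_string, jdbc_user, jdbc_driver_* omitted here
    # cron-style schedule: run every 10 minutes
    schedule => "*/10 * * * *"
    statement => "SELECT id, name FROM source_table"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "current-state"
    # primary key as document id -> upsert instead of append
    document_id => "%{id}"
  }
}
```

This handles inserts and updates, but rows deleted from the source table just stay in the index forever, which is exactly the part we don't know how to solve.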
Is this achievable in an automatic manner?