This is a pretty classic data synchronisation problem, and the strategies aren't particularly specific to Logstash/Elasticsearch.
Do you have access to a transaction log where you can detect a row being deleted from the SQL DB? If so, that will be your best bet.
Another option is to tag each document with a "last seen" timestamp and periodically go back and "collect" the documents that haven't been seen recently. This could be done in a separate Logstash pipeline with an elasticsearch input and an elasticsearch output, as sketched below.
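Something like the following might work. It's a minimal sketch, assuming an index called `myindex` and a `last_seen` date field that your import pipeline refreshes on every run (both names are made up here), using the standard elasticsearch input and output plugins:

```
input {
  elasticsearch {
    hosts    => ["http://localhost:9200"]
    index    => "myindex"                  # hypothetical index name
    # find documents whose last_seen hasn't been refreshed recently
    query    => '{ "query": { "range": { "last_seen": { "lt": "now-2h" } } } }'
    docinfo  => true                       # expose each hit's _index/_id on the event
    docinfo_target => "[@metadata][doc]"
    schedule => "0 * * * *"                # run the sweep hourly
  }
}

output {
  elasticsearch {
    hosts       => ["http://localhost:9200"]
    index       => "%{[@metadata][doc][_index]}"
    document_id => "%{[@metadata][doc][_id]}"
    action      => "delete"                # delete rather than index
  }
}
```

The `docinfo` settings carry each hit's `_index` and `_id` through to the output, so the delete targets exactly the stale documents the query found.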
Depending on the size of your dataset and how frequently you synchronise, you may be best off simply reading the data into a new index each time you import (see the sketch below).
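If you go that route, one common companion pattern (my suggestion, not part of the original question) is to import into a fresh, timestamped index and then atomically repoint an alias at it with the `_aliases` API, so searches never see a half-built index. The index names here are hypothetical:

```
POST /_aliases
{
  "actions": [
    { "remove": { "index": "myindex-2024-06-01", "alias": "myindex" } },
    { "add":    { "index": "myindex-2024-06-02", "alias": "myindex" } }
  ]
}
```

The remove/add pair is applied atomically, and deleting the old index afterwards disposes of all the stale documents in one go.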