I'll address Logstash's message delivery concerns here. Elasticsearch is out of scope for this thread, but you can find our current progress on that side here: https://www.elastic.co/guide/en/elasticsearch/resiliency/current/index.html
If you have specific questions about Elasticsearch, we can address them in the Elasticsearch category on Discuss.
@warkolm is correct. We are working on making LS resilient to message loss caused by crashes. Today, LS does not offer delivery guarantees such as at-least-once or at-most-once. The software's core philosophy is to never intentionally lose messages, so we try hard to fix bugs which result in crashes. For this reason, the internal buffer between the pipeline stages is capped at 20 events. So at most, there are 40 events in flight (in memory) in LS that can be lost on a hard crash. Using a message broker between the LS stages (shippers and indexers) can mitigate this message loss. For example, Apache Kafka provides a way to replay messages whose offsets were not yet committed to Zookeeper, via the LS Kafka input. LS 1.5 natively supports Kafka, so this is an option.
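As a rough sketch of that setup, an indexer's pipeline config could consume from Kafka and write to Elasticsearch. The broker addresses, topic name, and group ID below are placeholders, and the exact option names depend on your kafka input plugin version; check the plugin docs for your release:

```
# Hypothetical indexer config: replayable input via Kafka.
# Offsets are committed to Zookeeper, so events not yet
# committed are re-consumed after a crash.
input {
  kafka {
    zk_connect => "localhost:2181"   # your Zookeeper ensemble
    topic_id   => "logstash"         # topic the shippers write to
    group_id   => "logstash-indexer" # consumer group for offset tracking
  }
}

output {
  elasticsearch {
    host => "localhost"
  }
}
```

The shipper side would use a kafka output with the matching topic, so the broker, not LS memory, holds the in-flight backlog.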
For 2.0, we are working on persisting these in-flight messages to disk so they can be recovered after a hard crash.
You can track our work on this using the resiliency label: https://github.com/elastic/logstash/labels/resiliency