@luiz.santos we have seen logs in Logstash saying that the bulk request failed and is being retried, but the actual reason wasn't clear from those logs. I have read about the version conflict exception, but we never saw any version conflict exception in the logs.
We observed that memory utilization was at 100% on both data nodes during the period when we lost data.
We had set `retry_on_conflict` to 5 via the Logstash ES output plugin.
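For reference, the relevant part of our Logstash output config looks roughly like this (host and index names below are placeholders, not our actual values; `retry_on_conflict` is a documented option of the `elasticsearch` output plugin):

```
output {
  elasticsearch {
    hosts => ["http://es-data-1:9200"]    # placeholder host
    index => "app-logs-%{+YYYY.MM.dd}"    # placeholder index pattern
    retry_on_conflict => 5                # retry updates up to 5 times on version conflicts
  }
}
```

Note that `retry_on_conflict` only applies to update actions, so it would not explain drops on plain index/create requests.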
The bulk request queue size was increased from 50 to 200; the queue would still sometimes hit the 200 limit.
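The queue bump was done in `elasticsearch.yml` on the data nodes; for ES 5.x the setting is (sketch, assuming the 5.x `thread_pool` naming):

```
# elasticsearch.yml (ES 5.x) -- bulk thread pool queue raised from the default of 50
thread_pool.bulk.queue_size: 200
```

A queue that regularly fills up even at 200 usually means the nodes can't keep up with the indexing rate, and overflowing requests get rejected with `EsRejectedExecutionException`.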
We upgraded the ES version from 5.2.0 to 5.5.x due to an issue with the circuit breaker.
The last time this data loss occurred, we upgraded the number of cores and the memory (we were running a low-spec configuration, and a hardware upgrade was due anyway). Since the hardware upgrade (it's been a week), there has been no data loss.