I'm using Elasticsearch 2.2 inside a Docker container, in a 3-node setup with each node on a separate machine. I'm using Elasticsearch both as my main data store and, of course, as a search engine.
I have run into the following problem multiple times:
I bulk index a huge number of documents with a BulkProcessor via the Java API. Sometimes, for machine maintenance reasons, I have to stop the running Elasticsearch container (docker stop elasticsearch).
This operation does not always complete in a reasonable amount of time (I'm talking hours here), so the sysadmin force-kills the process in order to proceed. Usually it's the master node that does not respond to the stop call, while the other 2 nodes stop correctly.
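For reference, this is roughly the stop sequence involved. A minimal sketch, assuming the container is named `elasticsearch` and the cluster's HTTP port 9200 is reachable on the host (both are assumptions on my part); the synced-flush call and the `-t` stop timeout are things one could add to make the shutdown more graceful:

```shell
# Sketch only: flush before stopping, then allow a longer stop timeout.
# Assumes a container named "elasticsearch" and port 9200 published on localhost.

# Ask Elasticsearch for a synced flush so idle shards are marked clean
# (the _flush/synced endpoint exists in the 2.x REST API).
curl -XPOST 'http://localhost:9200/_flush/synced'

# Give the container longer than Docker's default 10 s grace period
# before SIGKILL is sent (-t is the stop timeout in seconds).
docker stop -t 300 elasticsearch
```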
When I start the Elasticsearch image again, data is lost: not only the data being injected that may not yet have been flushed from the translog, but even existing data that I indexed days ago.
Any clue why this is happening? I could understand losing the data still in the translog, but not existing data that was indexed days or weeks ago.