Running Windows Server 2008 R2 with 2 nodes @ version 0.19.1 (JRE6u32).
We have only been given 2 GB of memory for this virtual server (which is
ridiculous, I know). We are now at about 270k documents, and the customer
has reported that since Friday our record counts have dropped without user
intervention. Everything is stored in a single index and separated in our
web application interface by "provider name". One provider set had 48
records and is now reporting 36; another has just been reported to be
missing 95 records. So I know it is not specific to a dataset, but rather
affects the index as a whole.
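One way to rule out the web application layer is to query the index directly over the ES HTTP API and compare the counts it reports against the RDBMS. A minimal sketch (the index name "myindex" and the field name "provider_name" are placeholders for whatever your mapping actually uses; note that in the 0.19.x line the `_count` body is the bare query, with no "query" wrapper):

```shell
# Total documents in the index, straight from ES
curl -s 'http://localhost:9200/myindex/_count?pretty'

# Documents for a single provider (field/index names are assumptions --
# substitute your real mapping)
curl -s 'http://localhost:9200/myindex/_count?pretty' -d '{
  "term" : { "provider_name" : "some-provider" }
}'
```

If these counts match the web app but not the RDBMS, the documents really are gone from the index rather than being filtered out somewhere upstream.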
I am curious how many documents our search nodes can hold and how JVM heap
space factors into that. Presumably this would also scale differently with
document size? We are always getting more data (10-50k documents monthly)
and I am having trouble keeping Elasticsearch stable as demand for document
storage grows. I am at the point now where I am willing to push for paid
support, but I am not sure that is entirely necessary yet.
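Given only 2 GB on the host, the JVM heap must be well under that, and a node hitting an OutOfMemoryError can drop shards; if a shard had no replica, its documents disappear with it. A couple of quick checks worth running (paths as in the 0.19.x API; this is a sketch, not a definitive diagnosis):

```shell
# "red" status means at least one primary shard is unassigned,
# i.e. some portion of the index is simply missing
curl -s 'http://localhost:9200/_cluster/health?pretty'

# Per-node JVM stats, including heap used vs. heap committed
curl -s 'http://localhost:9200/_cluster/nodes/stats?jvm=true&pretty'
```

It is also worth grepping the ES logs on both nodes for OutOfMemoryError or shard-failure messages around Friday, when the counts first dropped.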
We are currently checking the back-end RDBMS that maintains all of the
original documents and user data for activity that might explain the record
loss, but it looks like no user intervention caused these documents to
disappear. We have been running ES for about a year and a half now and have
never seen this before. Any ideas?
Thanks in advance,