Customer reports documents 'disappearing' from a 2-node index?!

Running Windows Server 2008 R2 with 2 nodes on Elasticsearch 0.19.1 (JRE 6u32).

We have only been given 2GB of memory for this virtual server (which is
ridiculous, I know). We are now at about 270k documents, and the customer
has reported that since Friday our record counts have dropped without user
intervention. Everything is stored in a single index and separated on our
web application interface by "provider name". One provider set had 48
records and is now reporting 36; another has just been reported to be
missing 95 records. So I know it is not specific to one dataset, but rather
affects the index as a whole.

I am curious how many documents our search nodes can hold and how JVM heap
space contributes to that limit. Presumably this also scales differently
based on document size? We are always getting more data (10-50k documents
monthly), and I am having trouble keeping Elasticsearch stable as demand
for document storage grows. I am at the point now where I am willing to
push for paid support, but I am not sure that is entirely necessary yet.
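
For context, heap usage and document counts can be polled over the REST
API to watch how they track each other; a minimal sketch, assuming the
0.19-era endpoints and a placeholder index name ("myindex"):

    # per-node JVM heap stats (0.19-era endpoint)
    curl -s 'http://localhost:9200/_cluster/nodes/stats?pretty=true'

    # document count for the index ("myindex" is a placeholder)
    curl -s 'http://localhost:9200/myindex/_count?pretty=true'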

We are currently checking the back-end RDBMS that maintains all of the
original documents and user data for activity that might explain the record
loss, but it looks like no user intervention caused these documents to
disappear. We have been running ES for about a year and a half now and have
not seen this before. Any ideas?

Thanks in advance,
Charles

--

default-mapping.json:

{
  "default": {
    "dynamic_templates": [{
      "index_nonanalyzed": {
        "match": "*",
        "mapping": {
          "index": "not_analyzed"
        }
      }
    }]
  }
}

--

Log files on the production system from Aug 17 reported that the JVM was
out of memory. We are using the Tanuki service wrapper, so I had the
production support staff look at the maximum memory limit in the JVM
wrapper configuration. It was commented out (#). I assume this allowed the
JVM heap to grow unchecked, or at least until the OS constrained it.

The RDBMS showed all records had in fact been indexed at some point, so I
assumed they were still in the /data folder and had perhaps been lost in
RAM. After setting the JVM limit to 512MB and restarting the nodes, all
records are back. (phew!)
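
For anyone following along, the change was roughly the following; a sketch
of the relevant Tanuki wrapper.conf lines (the stock
elasticsearch-servicewrapper config may route these through ES_MIN_MEM /
ES_MAX_MEM variables instead, so check your own file):

    # Initial Java heap size (in MB)
    wrapper.java.initmemory=512

    # Maximum Java heap size (in MB) (this is the line that was commented out for us)
    wrapper.java.maxmemory=512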

We do not have mlockall enabled, and I am not sure how effective mlock
would be running under VMware anyway. Moving forward, if anyone has
recommendations on managing growth better, I would like to hear them.
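
For reference, the setting I mean is bootstrap.mlockall in
elasticsearch.yml. A sketch of enabling it (note that mlockall is a POSIX
call, so I would not expect it to take effect on our Windows boxes in this
version, and the min/max heap should be pinned to the same value so the
locked region does not need to grow):

    # elasticsearch.yml
    bootstrap.mlockall: true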

Thanks,
Charles

--