Downward spiral of free memory and too many ES requests

Maybe it's because of the way flush_size currently works. You've set
it to 11000, which means it will send new logs to ES only when the
11000-log queue fills up, then waits for it to fill up again, and so
on.

So if you're testing and sending, say, a burst of 20000 logs, you'll
see only 11000 of them (after an index refresh) and you'll be missing
9000 until you add 2000 more logs to trigger a new flush from
elasticsearch_http.
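
For reference, a flush_size like that would typically sit in the
elasticsearch_http output block of the Logstash config. This is just a
sketch (the host value is a placeholder):

```
output {
  elasticsearch_http {
    host => "localhost"
    flush_size => 11000
  }
}
```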

Spot on! You are right. It is exactly the flush_size setting. So the
queue always waits until flush_size events have accumulated before it
sends that batch to Elasticsearch.
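
The batching behavior described above can be sketched in a few lines
of Python (illustrative only, not Logstash's actual implementation):

```python
# Sketch of flush_size-style batching: events are buffered and only
# sent once the buffer reaches flush_size; a trailing partial batch
# stays queued until enough new events arrive to fill it.

def batch_flushes(num_events, flush_size):
    """Return (events_sent, events_still_queued) after a burst."""
    full_batches = num_events // flush_size
    sent = full_batches * flush_size
    return sent, num_events - sent

# The scenario from this thread: a burst of 20000 logs, flush_size 11000.
sent, queued = batch_flushes(20000, 11000)
print(sent, queued)  # 11000 sent; 9000 wait until 2000 more logs arrive
```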

ES v0.90 stable was released 2 days ago. I'm trying it now and you were
right again. It shows much lower (i.e. bounded) memory consumption
than v0.20.

Thanks and Regards,

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.