Downward spiral of free memory and too many ES requests

(This might be an ES-centric issue, so I thought I'd ask here first. I've also
seen threads about the same issue, but none of the original posters followed up.)

Issue:

  1. Free RAM keeps decreasing a few minutes after log indexing starts.
  2. Consequently, the message
     '"Too many active ES requests, blocking now.", :inflight_requests=>15000, :max_inflight_requests=>15000, :level=>:info'
     pops up in the logstash log file.
     According to Jordan in this thread
     (https://groups.google.com/forum/?fromgroups=#!searchin/logstash-users/too$20many$20es$20requests/logstash-users/zNCSD90auxY/UL0NGm3xX4cJ)
     this is an ES bottleneck: it can't handle the data flow, so all storage
     and indexing of logs start to fail.

Background:
Mine is a test environment for Logstash (1.1.9) with a standalone ES server
0.20.6 set up from the .deb package. No ES cluster; just Logstash and ES
running on the same Debian Squeeze server. Logstash is tailing a text file
'mail.log', into which a zcat pipeline writes postfix logs. Logstash picks up
the logs from the file and ships them to ES. This is how:

zcat mail.log.2013-03-20.gz | pv -L1M -pr > mail.log

So ES is hit with 1 MB of log data per second, which amounts to roughly
5,600 lines of postfix log entries.

My ES configs relevant to the issue:

ES_HEAP_SIZE=3500m
MAX_OPEN_FILES=65535
MAX_LOCKED_MEMORY=unlimited
bootstrap.mlockall: true

My server resources:

RAM total: 8G
CPU: 4 cores
Java from Oracle: Java(TM) SE Runtime Environment (build 1.7.0_17-b02),
Java HotSpot(TM) 64-Bit Server VM (build 23.7-b01, mixed mode)

When the two servers - Logstash and ES - are started, they take up
approximately 1G and 3.8G of memory respectively. Once logs start pouring in,
Logstash's memory consumption stays almost constant throughout, but ES's
keeps increasing. When free memory drops below 1G, the message I quoted in
(2) starts popping up in the log file.

Question:

  1. Is this due to insufficient server resources? Or is ES capable of
     handling this amount of data flow with these server resources, and I
     just need to tweak the ES configs further?

Thank you.


Any hint, guys?

Basically, you may ignore all the details above if you wish. It is the
growing memory consumption of ES I'm worried about. It should be easily
doable to process 1 MB (approx. 5,600 log entries) per second with this ES
configuration and nearly steady memory consumption, but that is just not
happening.

Re,


Hello,

I suggest you try to tweak your setup for more indexing throughput. I'd use
the elasticsearch_http output with the bulk API, as the elasticsearch output
currently doesn't use bulk indexing. By default, the bulk size for
elasticsearch_http (http://logstash.net/docs/1.1.10/outputs/elasticsearch_http)
is 100, but for your setup I'd start with 1000.
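
In the Logstash config that would be roughly something like this (just a
sketch - the host is whatever your ES server's address is, and flush_size is
the bulk size mentioned above):

output {
  elasticsearch_http {
    host => "10.0.4.24"      # address of your ES server (placeholder)
    flush_size => 1000       # events per bulk request; the default is 100
  }
}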

Using elasticsearch_http will also let you upgrade your ES to the latest
version; I'd suggest you try 0.90, because it should be faster and your
indices will be smaller.

If that still doesn't help, take a look at your refresh_interval value
(http://www.elasticsearch.org/guide/reference/api/admin-indices-update-settings/)
and try to increase it if your use case permits.
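
For example, to raise it to 30 seconds on one of your daily indices (the
index name and the value are just examples), something like:

curl -XPUT 'http://localhost:9200/logstash-2013.04.18/_settings' -d '
{
  "index" : { "refresh_interval" : "30s" }
}'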

I think that's pretty much all the major tweaking you can do, so if you
still have issues, I think you'll have to do what Jordan suggested - throw
more hardware at it. But it's not clear to me what your indexing requirement
is. You say you have one Logstash instance per rack pushing 250 logs per
second. How many racks do you have? With your hardware and proper
configuration I'd expect you to be able to index 10-30k docs/s with the bulk
API. It depends a lot on how big your logs are, how you want to analyze
them, and how often you refresh, but I'm just throwing out some rough
numbers so you'll know better where you stand (e.g. if you need 50k docs/s,
you'll probably need more hardware, no matter the configuration tweaks).

Speaking of tweaks: your indexing throughput also depends on your existing
index size, so you want to make sure that you rotate indices often enough.
Because of merging, the bigger your index is, the slower it will be to add
new documents. You don't want a huge number of indices either, though,
because your searches will hit many of them and you'll get slow response
times.

Best regards,
Radu

http://sematext.com/ -- Elasticsearch -- Solr -- Lucene


I'd use the elasticsearch_http with the bulk API, as currently the
elasticsearch output doesn't use bulk indexing. By default, the bulk
size for elasticsearch_http
(http://logstash.net/docs/1.1.10/outputs/elasticsearch_http) is 100,
but for your setup I'd start with 1000.

The elasticsearch_http plugin was marked 'beta' in Logstash, which is why I
refrained from testing it for a future production setup. But I guess you're
right - I should test with this plugin.
If that still doesn't help, take a look at your refresh_interval value
(http://www.elasticsearch.org/guide/reference/api/admin-indices-update-settings/)
and try to increase it if your use case permits.

This applies only if the input mode to ES is via elasticsearch_http, right?

Speaking of tweaks: your indexing throughput also depends on your
existing index size, so you want to make sure that you rotate them
often enough. Because of merging, the bigger your index is, the slower
it will be to add new documents.

I assume you're referring to this call (assuming 'twitter' is the name of an index):

curl -XPOST 'http://localhost:9200/twitter/_optimize'

I guess writing a script that fires this call and setting it up with cron
would do the trick, right? But how often does an index need to be
'optimized'? In my case ES creates indices with the default names based on
the current date, so all data transferred to ES ends up under a single
index, so to speak.

Re,




On Mon, Apr 22, 2013 at 2:55 PM, sub ksubins321@gmail.com wrote:


If that still doesn't help, take a look at your refresh_interval value
(http://www.elasticsearch.org/guide/reference/api/admin-indices-update-settings/)
and try to increase it if your use case permits.

This applies only if the input mode to ES is via elasticsearch_http, right?

Actually, it should apply to both. But I would expect a bigger impact when
you index in bulk, because indexing documents one by one incurs a lot of
transport overhead, which is unrelated to the refresh interval.

Speaking of tweaks: your indexing throughput also depends on your
existing index size, so you want to make sure that you rotate them often
enough. Because of merging, the bigger your index is, the slower it will be
to add new documents.

I assume you're referring to this call (assuming 'twitter' is the name of an index):

curl -XPOST 'http://localhost:9200/twitter/_optimize'

I guess writing a script that fires this call and setting it up with cron
would do the trick, right? But how often does an index need to be
'optimized'? In my case ES creates indices with the default names based on
the current date, so all data transferred to ES ends up under a single
index, so to speak.


Ah, no, I wasn't talking about optimize. Actually, optimizing an index
that's constantly changing isn't usually a good idea, because you'll
invalidate caches, and then new indexing will invalidate them once again.
Let the merge policy take care of that.

I was talking about the time-based indices that Logstash creates by default,
like having one index per day. The total number of indices you have should
be a balance between how big you allow indices to grow and how many indices
your regular searches hit.

For example, if you have a lot of traffic (and you seem to have), you only
keep logs for 2 days, and 90% of searches are in the last hour, it might
make sense to have hourly indices. On the other hand, if you keep logs for 3
months and searches usually look into the last week, you might want weekly
indices, and so on.
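
With Logstash that's mostly a matter of changing the index pattern in the
output. A rough sketch for hourly indices (the pattern uses Joda-Time format
keys, so double-check it against your Logstash version):

output {
  elasticsearch_http {
    host  => "10.0.4.24"
    index => "logstash-%{+YYYY.MM.dd.HH}"   # hourly; the default is daily: logstash-%{+YYYY.MM.dd}
  }
}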

It makes sense to optimize "old" indices (i.e. indices that no longer
change). That saves disk space and makes searches on such indices faster
(once the caches warm up again).


On Mon, Apr 22, 2013 at 5:25 PM, sub ksubins321@gmail.com wrote:


If that still doesn't help, take a look at your refresh_interval value and
try to increase it if your use-case permits.

This applies only if the input mode to ES is via elasticsearch_http, right?

No, this is a general setting.

Speaking of tweaks: your indexing throughput also depends on your existing
index size, so you want to make sure that you rotate them often enough.
Because of merging, the bigger your index is, the slower it will be to add
new documents.

I assume you're referring to this call (assuming 'twitter' is the name of an index):

curl -XPOST 'http://localhost:9200/twitter/_optimize'

I guess writing a script that fires this call and setting it up with cron
would do the trick, right? But how often does an index need to be
'optimized'? In my case ES creates indices with the default names based on
the current date, so all data transferred to ES ends up under a single
index, so to speak.

No, he was not talking about optimize. He is just saying that you should
avoid letting indices grow very large.

Anyway, optimize is still something you should do. You don't optimize the
current index; you optimize the indices you know you won't write to anymore,
for example yesterday's index. Optimizing an index decreases its number of
segments (depending on the options you pass while optimizing). This helps
make your searches faster and reduces the number of open file descriptors.
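
For example, something along these lines could be run from cron against
yesterday's index (the index name is only an example; max_num_segments=1
merges it down to a single segment):

curl -XPOST 'http://localhost:9200/logstash-2013.04.21/_optimize?max_num_segments=1'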


--
Regards,
Abhijeet Rastogi (shadyabhi)
http://blog.abhijeetr.com


Thanks, guys. With your suggestions and some reading from Aaron's blog
(http://untergeek.com/2012/11/05/my-current-templatemapping/) I'm now using
elasticsearch_http (http://logstash.net/docs/1.1.10/outputs/elasticsearch_http)
with a custom template to index data into ES from Logstash.
The ES memory usage issue seems to have subsided; it now sits at a constant
3.2G.

I'm running into a new issue now. As a test run I indexed some data from a
text file containing 290,554 log entries at a rate of approx. 1,200 logs/sec.
A random search query showed that some logs were missing, as if lost in
transit from Logstash to ES. What could it be?

The elasticsearch_http config in logstash is:

output {
  elasticsearch_http {
    host => "10.0.4.24"
    flush_size => 11000
    type => "postfix"
  }
}

My custom template:

curl -XPUT http://localhost:9200/_template/loggerstash -d '
{
  "template" : "logstash-*",
  "settings" : {
    "number_of_shards" : 1,
    "index.cache.field.type" : "soft",
    "index.refresh_interval" : "3s",
    "index.store.compress.stored" : true
  }
}'

I'm guessing it's something flush_size related? Or its combination with
index.refresh_interval?
The refresh_interval was 30s earlier; I reduced it to 3s, but logs are still
missing.

Re,


How are you searching your docs? Is it using Kibana? Analyzers can do
unexpected things if you don't know what you are doing.

For example, I once had trouble where I couldn't search for "from=<>" because
I was using the standard analyzer on the @message field.
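
You can see what the analyzer does to a string with the _analyze API; for
example (any running node will do):

curl -XGET 'http://localhost:9200/_analyze?analyzer=standard' -d 'from=<>'

This should return only the token "from" - the "=<>" part is stripped out,
which is why a literal search for "from=<>" finds nothing.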

Another case could be where a type has already been defined for a field and
Logstash sends an incompatible type to ES. ES is schema-less, but only up to
the point where it decides what type a field is the first time it encounters
it. Check your logs.

And no, it can't be related to flush_size.


--
Regards,
Abhijeet Rastogi (shadyabhi)
http://blog.abhijeetr.com


How are you searching your docs? Is it using Kibana? Analyzers can do
unexpected things if you don't know what you are doing.

For example, I once had trouble where I couldn't search for "from=<>" because
I was using the standard analyzer on the @message field.

Yes, I'm using Kibana. But I do not have any analyzers configured at all.

Another case could be where a type has already been defined for a field and
Logstash sends an incompatible type to ES. ES is schema-less, but only up to
the point where it decides what type a field is the first time it encounters
it. Check your logs.

Sorry, I'm not sure I follow you on this one. Both the Logstash and ES logs
show no signs of errors.

Re,


On Mon, Apr 29, 2013 at 1:44 PM, sub ksubins321@gmail.com wrote:

How are you searching your docs? Is it using Kibana? Analyzers can do
unexpected things if you don't know what you are doing.

For example, I once had trouble where I couldn't search for "from=<>" because
I was using the standard analyzer on the @message field.

Yes, I'm using Kibana. But I do not have any analyzers configured at all.

When you don't have anything configured, @message most probably uses the
standard analyzer. Use the elasticsearch-inquisitor plugin
(https://github.com/polyfractal/elasticsearch-inquisitor) to see how a piece
of text gets analyzed and check whether that's the case here.

Another case could be where a type has already been defined for a field and
Logstash sends an incompatible type to ES. ES is schema-less, but only up to
the point where it decides what type a field is the first time it encounters
it. Check your logs.

Sorry, I'm not sure I follow you on this one. Both the Logstash and ES logs
show no signs of errors.

What I meant was: suppose you send ES a field named char_count with the
value 10. ES will see this field, guess it is a "long", and save it
accordingly. From then on ES expects all further values of char_count to be
numbers; if a value can't be parsed as a "long", ES will fail to save that
document in the index. I was guessing that if you had defined custom
mappings, something like this could have happened.
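
A quick way to see this behaviour (just an illustration against a throwaway
index, not your data):

# first document: ES guesses that char_count is a long
curl -XPOST 'http://localhost:9200/mapping-test/doc' -d '{"char_count": 10}'

# second document: a value that cannot be parsed as a number is rejected with a mapping error
curl -XPOST 'http://localhost:9200/mapping-test/doc' -d '{"char_count": "ten"}'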

Re,

--
Regards,
Abhijeet Rastogi (shadyabhi)
http://blog.abhijeetr.com


Maybe it's because of the way flush_size currently works. You've set it to
11000, which means it will send new logs to ES only when the 11000-log queue
fills up, then wait for it to fill up again, and so on.

So if you're testing and sending, say, a burst of 20000 logs, you'll see
only 11000 of them (after an index refresh) and you'll be missing 9000 until
you add 2000 more logs to trigger a new flush from elasticsearch_http.

Best regards,
Radu


--
http://sematext.com/ -- Elasticsearch -- Solr -- Lucene


Maybe it's because of the way flush_size currently works. You've set it to
11000, which means it will send new logs to ES only when the 11000-log queue
fills up, then wait for it to fill up again, and so on.

So if you're testing and sending, say, a burst of 20000 logs, you'll see
only 11000 of them (after an index refresh) and you'll be missing 9000 until
you add 2000 more logs to trigger a new flush from elasticsearch_http.

Spot on! You are right - it is exactly the flush_size setting. The queue
always waits until flush_size entries have accumulated before it sends that
batch to Elasticsearch.

ES v0.90 stable was released 2 days ago. I'm trying it now, and you were
right again: its memory consumption is much better behaved (i.e. bounded)
than v0.20's.

Thanks and Regards,
