Elasticsearch memory usage

Hello,

I have been using Elasticsearch on an Ubuntu server for a year now, and
everything was going great. I had an index of 150,000,000 entries of domain
names, and I ran small queries on it, just filtering by one term, no sorting,
no wildcards, nothing. Now we have moved servers: I have a CentOS 6 server
with 32GB of RAM running Elasticsearch, but now we have 2 indices of about
150 million entries each, 32 shards, still running the same queries on them;
nothing changed in the queries. But since we went online with the new server,
I have to restart Elasticsearch every 2 hours before the OOM killer kills it.
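
Roughly, every query looks like this single term filter (a sketch only; the
index and field names below are placeholders, not our real mapping):

curl -XGET "http://localhost:9200/domains/_search" -d '{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": { "term": { "tld": "com" } }
    }
  }
}'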

What's happening is that Elasticsearch uses memory until it reaches about
50%, then gradually drops back to about 30%, then gradually climbs again and
never goes back down.

I have tried all the solutions I found on the net; I am a developer, not a
server admin.

I have these settings in my service wrapper configuration:

set.default.ES_HOME=/home/elasticsearch
set.default.ES_HEAP_SIZE=8192
set.default.MAX_OPEN_FILES=65535
set.default.MAX_LOCKED_MEMORY=10240
set.default.CONF_DIR=/home/elasticsearch/conf
set.default.WORK_DIR=/home/elasticsearch/tmp
set.default.DIRECT_SIZE=4g

Java Additional Parameters

wrapper.java.additional.1=-Delasticsearch-service
wrapper.java.additional.2=-Des.path.home=%ES_HOME%
wrapper.java.additional.3=-Xss256k
wrapper.java.additional.4=-XX:+UseParNewGC
wrapper.java.additional.5=-XX:+UseConcMarkSweepGC
wrapper.java.additional.6=-XX:CMSInitiatingOccupancyFraction=75
wrapper.java.additional.7=-XX:+UseCMSInitiatingOccupancyOnly
wrapper.java.additional.8=-XX:+HeapDumpOnOutOfMemoryError
wrapper.java.additional.9=-Djava.awt.headless=true
wrapper.java.additional.10=-XX:MinHeapFreeRatio=40
wrapper.java.additional.11=-XX:MaxHeapFreeRatio=70
wrapper.java.additional.12=-XX:CMSInitiatingOccupancyFraction=75
wrapper.java.additional.13=-XX:+UseCMSInitiatingOccupancyOnly
wrapper.java.additional.15=-XX:MaxDirectMemorySize=4g

Initial Java Heap Size (in MB)

wrapper.java.initmemory=%ES_HEAP_SIZE%

And these in elasticsearch.yml
ES_MIN_MEM: 5g
ES_MAX_MEM: 5g
#index.store.type=mmapfs
index.cache.field.type: soft
index.cache.field.max_size: 10000
index.cache.field.expire: 10m
index.term_index_interval: 256
index.term_index_divisor: 5

Java version:
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

Elasticsearch version
"version" : {
"number" : "1.0.0",
"build_hash" : "a46900e9c72c0a623d71b54016357d5f94c8ea32",
"build_timestamp" : "2014-02-12T16:18:34Z",
"build_snapshot" : false,
"lucene_version" : "4.6"
}

Using the Elastica PHP client.

I have tried playing with the values, up and down, to try to make it work, but
nothing changes.

Any help would be highly appreciated.


You wrote that the OOM killer killed the ES process. With 32g of RAM (plus
the swap size), the process must be very big, much more than you configured.
Can you give more info about the live size of the process after ~2 hours? Are
there more application processes on the box?
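
If it is easier, something like this should show it (just a sketch, assuming
a standard CentOS 6 syslog setup):

# resident (RSS) and virtual size of the Elasticsearch JVM
ps -o pid,rss,vsz,cmd -C java

# OOM killer events are logged by the kernel on CentOS 6
grep -iE "out of memory|killed process" /var/log/messages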

Jörg


Hello Jörg

Thanks for the reply. Our swap size is 2g. I don't know at what % the process
is being killed, as the first time it happened I wasn't around, and since then
I have never let it happen again because the website is online. After 2 hours
of running, memory usage is definitely up to 60%; when I am around and testing
config changes, I restart each time it reaches 70% (every 2h/2h30). When I am
not around, I set a cron job to restart Elasticsearch every 2 hours. The
server has Apache and MySQL running on it too.


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600


Here's the top output after ~1 hour of running:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
780 root 20 0 317g 14g 7.1g S 492.9 46.4 157:50.89 java


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600


Now the process went back down to 25% usage; from here on it will climb again
and won't stop going up.

Sorry for spamming.


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600


Can you gist up the output of these two commands?

curl -XGET "http://localhost:9200/_nodes/stats"

curl -XGET "http://localhost:9200/_nodes"

Those are my first-stop APIs for determining where memory is being
allocated.
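
If it is easier to capture, something like this writes them straight to files
you can gist (a sketch; the jvm,indices filter just keeps the stats output
focused on the memory-related parts):

curl -XGET "http://localhost:9200/_nodes/stats/jvm,indices?pretty" > nodes_stats.json
curl -XGET "http://localhost:9200/_nodes?pretty" > nodes_info.json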

By the way, these settings don't do anything anymore (they were deprecated
and removed):

index.cache.field.type: soft
index.term_index_interval: 256
index.term_index_divisor: 5

index.cache.field.max_size: 10000

max_size was replaced by indices.fielddata.cache.size, which accepts a
value like "10gb" or "30%".

And this one is just a bad setting in general (it causes a lot of GC thrashing):

index.cache.field.expire: 10m
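
If you do want a cap on fielddata, the rough modern equivalent is something
like this in elasticsearch.yml (a sketch; the 30% figure is only an
illustration, size it to your heap):

# replaces index.cache.field.max_size; the value is relative to the heap
indices.fielddata.cache.size: 30%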


Hello Zachary,

Thanks for your reply and the pointer to the settings.

Here is the output of the commands you requested:

curl -XGET "http://localhost:9200/_nodes/stats"
curl -XGET "http://localhost:9200/_nodes"

Elastic Search stats · GitHub


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600


From the gist, it all looks fine. There is no reason for the OOM killer to
kick in. Your system is idle and there is plenty of room for everything.

Just to quote you:

"What's happening is that elasticsearch starts using memory till 50% then
it goes back down to about 30% gradually then starts to go up again
gradually and never goes back down."

What you see is the ES JVM process giving back memory to the OS, which is no
reason to worry about with regard to process killing. It is just undesirable
behaviour, and it is all a matter of configuring the heap size correctly.

You should check whether your ES starts from the service wrapper or from the
bin folder, and adjust the heap size parameters accordingly. I recommend using
only the ES_HEAP_SIZE parameter. Set it to at most 50% of RAM (as you did),
but do not set different values in other places, and do not use the MIN or MAX
variants. ES_HEAP_SIZE does the right thing for you.

With bootstrap.mlockall you can lock the ES JVM process into main memory;
this helps a lot with performance and fast GC, as it reduces swapping. You can
test whether this setting invokes the OOM killer too, as it increases the
pressure on main memory (but, as said, there is plenty of room on your
machine).
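
Concretely, that would look something like this (a sketch only; the point is
that the heap is set in exactly one place, and the ES_MIN_MEM / ES_MAX_MEM
lines disappear from elasticsearch.yml):

# service wrapper configuration: the only place the heap size is set
set.default.ES_HEAP_SIZE=8192

# elasticsearch.yml: no heap settings here, just the memory lock
bootstrap.mlockall: true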

Jörg


Jörg, the issue is that after the JVM gives memory back to the OS, usage
starts going up again and never comes back down until the process is killed;
currently memory usage is up to 66% and still climbing. Heap size is currently
set to 8gb, which is 1/4 of the memory I have. I tried it at 16, at 12, and
now at 8, but I am still facing the issue; lowering it further will make the
website undesirably slow. I'll try mlockall now and see what happens, but
looking at Bigdesk only 18.6mb of swap is used.

I'll let you know what happens with mlockall on.
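
For the record, this is what I am changing (a sketch of the plan; I am
assuming the wrapper's MAX_LOCKED_MEMORY value is passed through to ulimit -l):

# elasticsearch.yml
bootstrap.mlockall: true

# service wrapper configuration
set.default.MAX_LOCKED_MEMORY=unlimited

# after restarting, this should report "mlockall" : true under process
curl -XGET "http://localhost:9200/_nodes/process?pretty"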


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 4:38 PM, joergprante@gmail.com <
joergprante@gmail.com> wrote:

From the gist, it alls looks very well. There is no reason for the OOM
killer to kick in. Your system is idle and there is much room for
everything.

Just to quote you:

"What's happening is that elasticsearch starts using memory till 50% then
it goes back down to about 30% gradually then starts to go up again
gradually and never goes back down."

What you see is ES JVM process giving back memory to the OS, which is no
reason to worry about in regard to process killing. It is just undesirable
behaviour, and it is all a matter of correct configuration of the heap size.

You should check if your ES starts from service wrapper or from the bin
folder, and adjust the parameters for heap size. I recommend only to use
ES_HEAP_SIZE parameter. Set this to max. 50% RAM (as you did). But do not
use different values at other places, or use MIN or MAX. ES_HEAP_SIZE is
doing the right thing for you.

With bootstrap mlockall, you can lock the ES JVM process into main memory,
this helps much regarding to performance and fast GC, as it reduces
swapping. You can test if this setting will invoke the OOM killer too, as
it increases the pressure on main memory (but, as said, there is plenty
room in your machine).

Jörg

On Thu, Mar 13, 2014 at 3:13 PM, Hicham Mallah mallah.hicham@gmail.comwrote:

Hello Zachary,

Thanks for your reply and the pointer to the settings.

Here are the output of the commands you requested:

curl -XGET "http://localhost:9200/_nodes/stats"
curl -XGET "http://localhost:9200/_nodes"

Elastic Search stats · GitHub


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 3:57 PM, Zachary Tong zacharyjtong@gmail.comwrote:

Can you gist up the output of these two commands?

curl -XGET "http://localhost:9200/_nodes/stats"

curl -XGET "http://localhost:9200/_nodes"

Those are my first-stop APIs for determining where memory is being
allocated.

By the way, these settings don't do anything anymore (they were
depreciated and removed):

index.cache.field.type: soft
index.term_index_interval: 256
index.term_index_divisor: 5

index.cache.field.max_size: 10000

max_size was replaced with indices.fielddata.cache.size and accepts
a value like "10gb" or "30%"

And this is just bad settings in general (causes a lot of GC thrashing):

index.cache.field.expire: 10m

On Thursday, March 13, 2014 8:42:54 AM UTC-4, Hicham Mallah wrote:

Now the process went back down to 25% usage, from now on it will go
back up, and won't stop going up.

Sorry for spamming


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 2:37 PM, Hicham Mallah mallah...@gmail.comwrote:

Here's the top after ~1 hour running:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
780 root 20 0 317g 14g 7.1g S 492.9 46.4 157:50.89 java


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 2:36 PM, Hicham Mallah mallah...@gmail.comwrote:

Hello Jörg

Thanks for the reply, our swap size is 2g. I don't know at what % the
process is being killed as the first time it happened I wasn't around, and
then I never let that happen again as the website is online. After 2 hours
of running the memory in sure is going up to 60%, I am restarting each time
when it arrives at 70% (2h/2h30) when I am around and testing config
changes. When I am not around, I am setting a cron job to restart the
server every 2 hours. Server has apache and mysql running on it too.


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600


Hello again,

Setting bootstrap.mlockall to true seems to have slowed the memory growth
down, so instead of elasticsearch being killed after ~2 hours it gets killed
after ~3 hours. What I find weird is why the process releases memory back to
the OS once but never does it again, and why it is not respecting the
DIRECT_SIZE setting either.

Thanks for the help


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600
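Two quick checks that may help here. Whether mlockall actually took effect
shows up in the node info API (if it reports false, the locked-memory limit
usually needs raising, e.g. MAX_LOCKED_MEMORY=unlimited in the wrapper config
or ulimit -l unlimited for a bin start):

curl -XGET "http://localhost:9200/_nodes/process?pretty"
# look for "mlockall" : true in the response

As for DIRECT_SIZE: it only ends up as the -XX:MaxDirectMemorySize=4g flag
already listed in the wrapper config, and that flag caps java.nio direct
buffers only, not the heap, thread stacks, or memory-mapped index files, so
the overall process size can legitimately grow past it.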

On Thu, Mar 13, 2014 at 4:45 PM, Hicham Mallah mallah.hicham@gmail.comwrote:

Jörg, the issue is that after the JVM gives memory back to the OS, usage
starts going up again and never comes back down until the process is killed;
currently memory usage is up to 66% and still climbing. The heap size is
currently set to 8gb, which is 1/4 of the memory I have. I tried it at 16,
12, and now 8, but I am still facing the issue, and lowering it further will
make the website undesirably slow. I'll try mlockall now and see what
happens, but looking at Bigdesk only 18.6mb of swap is used.

I'll let you know what happens with mlockall on.


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 4:38 PM, joergprante@gmail.com <
joergprante@gmail.com> wrote:

From the gist, it all looks fine. There is no reason for the OOM killer to
kick in: your system is mostly idle and there is plenty of room for
everything.

Just to quote you:

"What's happening is that elasticsearch starts using memory till 50% then
it goes back down to about 30% gradually then starts to go up again
gradually and never goes back down."

What you see is the ES JVM process giving back memory to the OS, which is no
reason to worry about with regard to the process being killed. It is just
undesirable behaviour, and it is all a matter of configuring the heap size
correctly.

You should check whether your ES starts from the service wrapper or from the
bin folder, and adjust the heap size parameters there. I recommend using only
the ES_HEAP_SIZE parameter. Set it to at most 50% of RAM (as you did), but do
not set different values in other places, and do not use MIN or MAX;
ES_HEAP_SIZE does the right thing for you.

With bootstrap mlockall you can lock the ES JVM process into main memory;
this helps a lot with performance and fast GC, as it reduces swapping. You
can test whether this setting invokes the OOM killer too, as it increases
the pressure on main memory (but, as said, there is plenty of room on your
machine).
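For a 32 GB box that boils down to something like this (a sketch of the
usual places, not your exact files):

# service wrapper config: set the heap in exactly one place
set.default.ES_HEAP_SIZE=8192        # MB; anything up to ~16384 (50% of RAM)

# elasticsearch.yml: drop ES_MIN_MEM / ES_MAX_MEM entirely and add
bootstrap.mlockall: true

# allow the process to lock that much memory, e.g.
# set.default.MAX_LOCKED_MEMORY=unlimited   (wrapper)
# ulimit -l unlimited                       (when starting from bin)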

Jörg

On Thu, Mar 13, 2014 at 3:13 PM, Hicham Mallah mallah.hicham@gmail.comwrote:

Hello Zachary,

Thanks for your reply and the pointer to the settings.

Here is the output of the commands you requested:

curl -XGET "http://localhost:9200/_nodes/stats"
curl -XGET "http://localhost:9200/_nodes"

Elastic Search stats · GitHub


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 3:57 PM, Zachary Tong zacharyjtong@gmail.comwrote:

Can you gist up the output of these two commands?

curl -XGET "http://localhost:9200/_nodes/stats"

curl -XGET "http://localhost:9200/_nodes"

Those are my first-stop APIs for determining where memory is being
allocated.

By the way, these settings don't do anything anymore (they were
deprecated and removed):

index.cache.field.type: soft
index.term_index_interval: 256
index.term_index_divisor: 5

index.cache.field.max_size: 10000

max_size was replaced with indices.fielddata.cache.size and accepts
a value like "10gb" or "30%"

And this one is just a bad setting in general (it causes a lot of GC thrashing):

index.cache.field.expire: 10m
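In elasticsearch.yml the 1.0-era equivalent of the removed field cache limit
would be something like this (a sketch; size it for the box):

indices.fielddata.cache.size: 30%    # or an absolute value such as "4gb"
# and delete the index.cache.field.* and index.term_index_* lines entirely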

On Thursday, March 13, 2014 8:42:54 AM UTC-4, Hicham Mallah wrote:

Now the process went back down to 25% usage, from now on it will go
back up, and won't stop going up.

Sorry for spamming


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600


If I start elasticsearch from the bin folder instead of through the wrapper,
I get these exceptions after about 2 minutes:

Exception in thread "elasticsearch[Adam X][generic][T#5]" java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.util.fst.BytesStore.<init>(BytesStore.java:62)
        at org.apache.lucene.util.fst.FST.<init>(FST.java:366)
        at org.apache.lucene.util.fst.FST.<init>(FST.java:301)
        at org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader.<init>(BlockTreeTermsReader.java:481)
        at org.apache.lucene.codecs.BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:175)
        at org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:437)
        at org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat$BloomFilteredFieldsProducer.<init>(BloomFilterPostingsFormat.java:131)
        at org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat.fieldsProducer(BloomFilterPostingsFormat.java:102)
        at org.elasticsearch.index.codec.postingsformat.Elasticsearch090PostingsFormat.fieldsProducer(Elasticsearch090PostingsFormat.java:79)
        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:195)
        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:244)
        at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:115)
        at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:95)
        at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:141)
        at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:235)
        at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:100)
        at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:382)
        at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:111)
        at org.apache.lucene.search.XSearcherManager.<init>(XSearcherManager.java:94)
        at org.elasticsearch.index.engine.internal.InternalEngine.buildSearchManager(InternalEngine.java:1462)
        at org.elasticsearch.index.engine.internal.InternalEngine.start(InternalEngine.java:279)
        at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryPrepareForTranslog(InternalIndexShard.java:706)
        at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:201)
        at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:189)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600
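That heap-space OOM during shard recovery is consistent with the bin script
not seeing the wrapper's heap setting: ES_HEAP_SIZE=8192 lives in the service
wrapper config, so a plain bin start falls back to the much smaller default
heap (1g max in 1.0) and dies while loading the indices. A sketch of starting
from the bin folder with the same heap, assuming the stock bin/elasticsearch,
which reads ES_HEAP_SIZE from the environment:

export ES_HEAP_SIZE=8g
./bin/elasticsearch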


There might be massive bloom cache loading for the Lucene codec. My
suggestion is to disable it. Try starting the ES nodes with

index:
  codec:
    bloom:
      load: false
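The same setting as a single flat key in elasticsearch.yml (both spellings
are accepted):

index.codec.bloom.load: false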

The bloom cache does not fit perfectly with the symptoms you described; that
suggestion is based just on the exception you sent.

Jörg


I added index.codec.bloom.load: false to elasticsearch.yml; it doesn't seem
to have changed anything.

Memory is at 63% after 2 and a half hours of uptime.

Watching Bigdesk, everything seems normal:

Memory:
Committed: 7.8gb
Used: 4.5gb

The used value is going up and down normally, so the heap is being cleaned,
no?

So the heap seems to be working as expected and I can't find anything wrong.
Could it be Oracle Java; should I try OpenJDK instead?!

Really thankful to you guys for trying to help me


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600
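Since the heap itself looks healthy in Bigdesk, the next thing worth checking
is whether the growth is inside the JVM (heap, fielddata, filter cache) or
outside it. The same stats API mentioned earlier can break that down per
node; a sketch limited to the relevant sections:

curl -XGET "http://localhost:9200/_nodes/stats/jvm,indices?pretty"
# compare jvm.mem.heap_used / heap_committed with
# indices.fielddata.memory_size and indices.filter_cache.memory_size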

On Thu, Mar 13, 2014 at 7:23 PM, joergprante@gmail.com <
joergprante@gmail.com> wrote:

There might be massive bloom cache loading for the Lucene codec. My
suggestion is to disable it. Try start ES nodes with

index:
codec:
bloom:
load: false

Bloom cache does not seem to fit perfectly into the diagnostics as you
described, that is just from the exception you sent.

Jörg

On Thu, Mar 13, 2014 at 6:01 PM, Hicham Mallah mallah.hicham@gmail.comwrote:

If I start elasticsearch from the bin folder not using the wrapper, I get
these exceptions after about 2 mins:

Exception in thread "elasticsearch[Adam X][generic][T#5]"
java.lang.OutOfMemoryError: Java heap space
at
org.apache.lucene.util.fst.BytesStore.(BytesStore.java:62)
at org.apache.lucene.util.fst.FST.(FST.java:366)
at org.apache.lucene.util.fst.FST.(FST.java:301)
at
org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader.(BlockTreeTermsReader.java:481)
at
org.apache.lucene.codecs.BlockTreeTermsReader.(BlockTreeTermsReader.java:175)
at
org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:437)
at
org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat$BloomFilteredFieldsProducer.(BloomFilterPostingsFormat.java:131)
at
org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat.fieldsProducer(BloomFilterPostingsFormat.java:102)
at
org.elasticsearch.index.codec.postingsformat.Elasticsearch090PostingsFormat.fieldsProducer(Elasticsearch090PostingsFormat.java:79)
at
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.(PerFieldPostingsFormat.java:195)
at
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:244)
at
org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:115)
at
org.apache.lucene.index.SegmentReader.(SegmentReader.java:95)
at
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:141)
at
org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:235)
at
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:100)
at
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:382)
at
org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:111)
at
org.apache.lucene.search.XSearcherManager.(XSearcherManager.java:94)
at
org.elasticsearch.index.engine.internal.InternalEngine.buildSearchManager(InternalEngine.java:1462)
at
org.elasticsearch.index.engine.internal.InternalEngine.start(InternalEngine.java:279)
at
org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryPrepareForTranslog(InternalIndexShard.java:706)
at
org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:201)
at
org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:189)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 6:47 PM, Hicham Mallah mallah.hicham@gmail.comwrote:

Hello again,

setting bootstrap.mlockall to true seems to have made memory usage
slower, so like at the place of elasticsearch being killed after ~2 hours
it will be killed after ~3 hours. What I see weird, is why is the process
releasing memory one back to the OS but not doing it again? And why is it
not abiding by this DIRECT_SIZE setting too.

Thanks for the help


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 4:45 PM, Hicham Mallah mallah.hicham@gmail.comwrote:

Jorg the issue is after the JVM giving back memory to the OS, it starts
going up again, and never gives back memory till its killed, currently
memory usage is up to 66% and still going up. HEAP size is currently set to
8gb which is 1/4 the amount of memory I have. I tried it at 16, 12, now at
8 but still facing the issue, lowering it more will cause undesirable speed
on the website. I'll try mlockall now, and see what happens, but looking at
Bigdesk on 18.6mb of swap is used.

I'll let you know what happens with mlockall on.


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 4:38 PM, joergprante@gmail.com <
joergprante@gmail.com> wrote:

From the gist, it alls looks very well. There is no reason for the OOM
killer to kick in. Your system is idle and there is much room for
everything.

Just to quote you:

"What's happening is that elasticsearch starts using memory till 50%
then it goes back down to about 30% gradually then starts to go up again
gradually and never goes back down."

What you see is ES JVM process giving back memory to the OS, which is
no reason to worry about in regard to process killing. It is just
undesirable behaviour, and it is all a matter of correct configuration of
the heap size.

You should check if your ES starts from service wrapper or from the
bin folder, and adjust the parameters for heap size. I recommend only to
use ES_HEAP_SIZE parameter. Set this to max. 50% RAM (as you did). But do
not use different values at other places, or use MIN or MAX. ES_HEAP_SIZE
is doing the right thing for you.

With bootstrap mlockall, you can lock the ES JVM process into main
memory, this helps much regarding to performance and fast GC, as it reduces
swapping. You can test if this setting will invoke the OOM killer too, as
it increases the pressure on main memory (but, as said, there is plenty
room in your machine).

Jörg

On Thu, Mar 13, 2014 at 3:13 PM, Hicham Mallah <
mallah.hicham@gmail.com> wrote:

Hello Zachary,

Thanks for your reply and the pointer to the settings.

Here are the output of the commands you requested:

curl -XGET "http://localhost:9200/_nodes/stats"
curl -XGET "http://localhost:9200/_nodes"

Elastic Search stats · GitHub


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 3:57 PM, Zachary Tong <zacharyjtong@gmail.com

wrote:

Can you gist up the output of these two commands?

curl -XGET "http://localhost:9200/_nodes/stats"

curl -XGET "http://localhost:9200/_nodes"

Those are my first-stop APIs for determining where memory is being
allocated.

By the way, these settings don't do anything anymore (they were
depreciated and removed):

index.cache.field.type: soft
index.term_index_interval: 256
index.term_index_divisor: 5

index.cache.field.max_size: 10000

max_size was replaced with indices.fielddata.cache.size and
accepts a value like "10gb" or "30%"

And this is just bad settings in general (causes a lot of GC
thrashing):

index.cache.field.expire: 10m

On Thursday, March 13, 2014 8:42:54 AM UTC-4, Hicham Mallah wrote:

Now the process went back down to 25% usage, from now on it will go
back up, and won't stop going up.

Sorry for spamming


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 2:37 PM, Hicham Mallah <mallah...@gmail.com

wrote:

Here's the top after ~1 hour running:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
780 root 20 0 317g 14g 7.1g S 492.9 46.4 157:50.89 java


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 2:36 PM, Hicham Mallah <
mallah...@gmail.com> wrote:

Hello Jörg

Thanks for the reply, our swap size is 2g. I don't know at what %
the process is being killed as the first time it happened I wasn't around,
and then I never let that happen again as the website is online. After 2
hours of running the memory in sure is going up to 60%, I am restarting
each time when it arrives at 70% (2h/2h30) when I am around and testing
config changes. When I am not around, I am setting a cron job to restart
the server every 2 hours. Server has apache and mysql running on it too.


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600


Yeah, your heap looks fine. I'm inclined to believe that the JVM itself is
crashing, as you suggest. There is at least one known fatal bug in recent
versions of the JVM which directly impacts Lucene/Elasticsearch:

https://bugs.openjdk.java.net/browse/JDK-8024830
https://issues.apache.org/jira/browse/LUCENE-5212

The currently recommended version for ES is Java 1.7.0_u25. Try
downgrading to that and see if it helps. Sorry, I should have noticed your
JVM version earlier and made the suggestion...totally slipped by me!

-Zach
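A quick way to confirm what a node actually runs after the downgrade (just a
sketch assuming a standard install; the _nodes output gisted earlier should
also report the JVM per node):

java -version
# expect something like: java version "1.7.0_25"
curl -XGET "http://localhost:9200/_nodes/jvm?pretty"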

On Thursday, March 13, 2014 4:41:25 PM UTC-4, Hicham Mallah wrote:

Added index.codec.bloom.load: false to the elasticsearch.yml; it doesn't
seem to have changed anything.

It is at 63% after two and a half hours of uptime.

Watching things in Bigdesk, everything seems to be normal:

Memory:
Committed: 7.8gb
Used: 4.5gb

The used value is going up and down normally, so the heap is being cleaned, no?

So it is working as expected and I can't find anything. Could it be Oracle
Java? Should I try using OpenJDK instead?!

Really thankful for you guys trying to help me


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 7:23 PM, joerg...@gmail.com <joerg...@gmail.com> wrote:

There might be massive bloom cache loading for the Lucene codec. My
suggestion is to disable it. Try starting the ES nodes with

index:
  codec:
    bloom:
      load: false

The bloom cache does not fit perfectly with the diagnostics you described;
the suggestion is just based on the exception you sent.

Jörg

On Thu, Mar 13, 2014 at 6:01 PM, Hicham Mallah <mallah...@gmail.com> wrote:

If I start elasticsearch from the bin folder not using the wrapper, I
get these exceptions after about 2 mins:

Exception in thread "elasticsearch[Adam X][generic][T#5]" java.lang.OutOfMemoryError: Java heap space
    at org.apache.lucene.util.fst.BytesStore.<init>(BytesStore.java:62)
    at org.apache.lucene.util.fst.FST.<init>(FST.java:366)
    at org.apache.lucene.util.fst.FST.<init>(FST.java:301)
    at org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader.<init>(BlockTreeTermsReader.java:481)
    at org.apache.lucene.codecs.BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:175)
    at org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:437)
    at org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat$BloomFilteredFieldsProducer.<init>(BloomFilterPostingsFormat.java:131)
    at org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat.fieldsProducer(BloomFilterPostingsFormat.java:102)
    at org.elasticsearch.index.codec.postingsformat.Elasticsearch090PostingsFormat.fieldsProducer(Elasticsearch090PostingsFormat.java:79)
    at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:195)
    at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:244)
    at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:115)
    at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:95)
    at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:141)
    at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:235)
    at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:100)
    at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:382)
    at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:111)
    at org.apache.lucene.search.XSearcherManager.<init>(XSearcherManager.java:94)
    at org.elasticsearch.index.engine.internal.InternalEngine.buildSearchManager(InternalEngine.java:1462)
    at org.elasticsearch.index.engine.internal.InternalEngine.start(InternalEngine.java:279)
    at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryPrepareForTranslog(InternalIndexShard.java:706)
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:201)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:189)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 6:47 PM, Hicham Mallah <mallah...@gmail.com> wrote:

Hello again,

Setting bootstrap.mlockall to true seems to have slowed the memory growth,
so instead of elasticsearch being killed after ~2 hours it will be killed
after ~3 hours. What I find weird is why the process releases memory back
to the OS once but never does it again. And why is it not abiding by the
DIRECT_SIZE setting either?

Thanks for the help


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 4:45 PM, Hicham Mallah <mallah...@gmail.com> wrote:

Jörg, the issue is that after the JVM gives memory back to the OS, usage
starts going up again and it never gives memory back until it's killed;
currently memory usage is up to 66% and still going up. Heap size is
currently set to 8gb, which is 1/4 of the memory I have. I tried it at 16,
12, and now 8, but I'm still facing the issue, and lowering it more will
cause undesirable slowness on the website. I'll try mlockall now and see
what happens, but looking at Bigdesk only 18.6mb of swap is used.

I'll let you know what happens with mlockall on.


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 4:38 PM, joerg...@gmail.com <joerg...@gmail.com> wrote:

From the gist, it all looks fine. There is no reason for the OOM killer to
kick in. Your system is idle and there is plenty of room for everything.

Just to quote you:

"What's happening is that elasticsearch starts using memory till 50%
then it goes back down to about 30% gradually then starts to go up again
gradually and never goes back down."

What you see is the ES JVM process giving memory back to the OS, which is
no reason to worry about with regard to process killing. It is just
undesirable behaviour, and it is all a matter of correctly configuring the
heap size.

You should check whether your ES starts from the service wrapper or from
the bin folder, and adjust the heap size parameters accordingly. I
recommend using only the ES_HEAP_SIZE parameter. Set it to at most 50% of
RAM (as you did), but do not set different values in other places, and do
not use MIN or MAX. ES_HEAP_SIZE does the right thing for you.

With bootstrap mlockall, you can lock the ES JVM process into main memory;
this helps a lot with performance and fast GC, as it reduces swapping. You
can test whether this setting invokes the OOM killer too, as it increases
the pressure on main memory (but, as said, there is plenty of room on your
machine).

Jörg
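As a concrete sketch of that advice (illustrative values, assuming the
service wrapper setup quoted earlier; note that mlockall also needs the
memlock limit raised, e.g. via ulimit -l unlimited, or the lock will fail
with a warning):

# service wrapper config: set the heap in one place only
set.default.ES_HEAP_SIZE=8192
# assumption: the wrapper passes this value to ulimit -l
set.default.MAX_LOCKED_MEMORY=unlimited

# elasticsearch.yml
bootstrap.mlockall: true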

On Thu, Mar 13, 2014 at 3:13 PM, Hicham Mallah <mallah...@gmail.com> wrote:

Hello Zachary,

Thanks for your reply and the pointer to the settings.

Here are the outputs of the commands you requested:

curl -XGET "http://localhost:9200/_nodes/stats"
curl -XGET "http://localhost:9200/_nodes"

Elastic Search stats · GitHub


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600


Also, can you confirm that it was actually the OOM killer nuking the
process? That will help us determine whether it was the OOM killer or a JVM
crash.

The OOM killer will log something through dmesg/syslog/etc., showing it
killing 'java' and the pid (and some other info, I think).
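On CentOS 6, something like the following should show it (log locations
vary by distro; this is just a sketch):

dmesg | grep -i -E "killed process|out of memory"
grep -i "out of memory" /var/log/messages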


Hello again,

Sorry for the late reply. You're right, I don't think it is the OOM killer;
I'll be downgrading my JVM to see what happens.

Will let you know how it goes.

Thanks.


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600


Hello again,

Sorry for the late reply. You're right, I don't think it is the OOM killer;
I'll be downgrading my JVM to see what happens.

Will let you know how it goes.

Thanks.


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600

On Fri, Mar 14, 2014 at 2:10 AM, Zachary Tong zacharyjtong@gmail.com wrote:

Also, can you confirm that it was actually the OOM killer nuking the
process? That will help us to determine if it was OOM killer or a JVM
crash.

OOM killer will log something through dmesg/syslog/etc, showing the OOM
killer killing 'java' and the pid (and some other info I think)
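
For example (log locations vary by distro), something like this shows whether
the kernel OOM killer fired:

# kernel ring buffer; OOM killer entries name the victim process and pid
dmesg | grep -iE 'out of memory|killed process'
# on CentOS the same lines usually also land in /var/log/messages
sudo grep -iE 'out of memory|killed process' /var/log/messages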

On Thursday, March 13, 2014 7:54:55 PM UTC-4, Zachary Tong wrote:

Yeah, your heap looks fine. I'm inclined to believe that the JVM itself
is crashing, as you suggest. There is at least one known fatal bug in
recent versions of the JVM which directly impacts Lucene/Elasticsearch:

[LUCENE-5212] java 7u40 causes sigsegv and corrupt term vectors - ASF JIRA

The currently recommended version for ES is Java 1.7.0_u25. Try
downgrading to that and see if it helps. Sorry, I should have noticed your
JVM version earlier and made the suggestion...totally slipped by me!

-Zach

On Thursday, March 13, 2014 4:41:25 PM UTC-4, Hicham Mallah wrote:

Added index.codec.bloom.load: false to the elasticsearch.yml, doesn't
seem to have changed anything.

It is at 63% after two and a half hours of uptime.

Watching stuff on Bigdesk everything seems to be normal:

Memory:
Committed: 7.8gb
Used: 4.5gb

The used value is going up and down normally, so the heap is being cleaned, no?

So it is working as expected and I can't find anything. Could it be Oracle
Java? Should I try using OpenJDK instead?!

Really thankful for you guys trying to help me


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 7:23 PM, joerg...@gmail.com <joerg...@gmail.com> wrote:

There might be massive bloom cache loading for the Lucene codec. My
suggestion is to disable it. Try starting ES nodes with

index:
  codec:
    bloom:
      load: false
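
(The nested form above is the same as the single line
index.codec.bloom.load: false in elasticsearch.yml.)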

The bloom cache does not fit perfectly with the diagnostics as you
described them; that suggestion is just based on the exception you sent.

Jörg

On Thu, Mar 13, 2014 at 6:01 PM, Hicham Mallah <mallah...@gmail.com> wrote:

If I start elasticsearch from the bin folder not using the wrapper, I
get these exceptions after about 2 mins:

Exception in thread "elasticsearch[Adam X][generic][T#5]"
java.lang.OutOfMemoryError: Java heap space
    at org.apache.lucene.util.fst.BytesStore.<init>(BytesStore.java:62)
    at org.apache.lucene.util.fst.FST.<init>(FST.java:366)
    at org.apache.lucene.util.fst.FST.<init>(FST.java:301)
    at org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader.<init>(BlockTreeTermsReader.java:481)
    at org.apache.lucene.codecs.BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:175)
    at org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:437)
    at org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat$BloomFilteredFieldsProducer.<init>(BloomFilterPostingsFormat.java:131)
    at org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat.fieldsProducer(BloomFilterPostingsFormat.java:102)
    at org.elasticsearch.index.codec.postingsformat.Elasticsearch090PostingsFormat.fieldsProducer(Elasticsearch090PostingsFormat.java:79)
    at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:195)
    at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:244)
    at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:115)
    at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:95)
    at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:141)
    at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:235)
    at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:100)
    at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:382)
    at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:111)
    at org.apache.lucene.search.XSearcherManager.<init>(XSearcherManager.java:94)
    at org.elasticsearch.index.engine.internal.InternalEngine.buildSearchManager(InternalEngine.java:1462)
    at org.elasticsearch.index.engine.internal.InternalEngine.start(InternalEngine.java:279)
    at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryPrepareForTranslog(InternalIndexShard.java:706)
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:201)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:189)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 6:47 PM, Hicham Mallah <mallah...@gmail.com> wrote:

Hello again,

setting bootstrap.mlockall to true seems to have slowed the memory growth:
instead of elasticsearch being killed after ~2 hours, it now gets killed after
~3 hours. What I find weird is that the process releases memory back to the OS
once but never does it again. And why is it not abiding by the DIRECT_SIZE
setting either?

Thanks for the help


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 4:45 PM, Hicham Mallah <mallah...@gmail.com> wrote:

Jörg, the issue is that after the JVM gives memory back to the OS, usage
starts going up again and never comes back down until the process is killed;
currently memory usage is up to 66% and still rising. Heap size is currently
set to 8gb, which is 1/4 of the memory I have. I tried it at 16, 12, and now 8
but am still facing the issue; lowering it further will make the website
undesirably slow. I'll try mlockall now and see what happens, but looking at
Bigdesk only 18.6mb of swap is used.

I'll let you know what happens with mlockall on.


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 4:38 PM, joerg...@gmail.com <
joerg...@gmail.com> wrote:

From the gist, it all looks very good. There is no reason for the
OOM killer to kick in. Your system is idle and there is plenty of room for
everything.

Just to quote you:

"What's happening is that elasticsearch starts using memory till
50% then it goes back down to about 30% gradually then starts to go up
again gradually and never goes back down."

What you see is ES JVM process giving back memory to the OS, which
is no reason to worry about in regard to process killing. It is just
undesirable behaviour, and it is all a matter of correct configuration of
the heap size.

You should check whether your ES starts from the service wrapper or from the
bin folder, and adjust the heap size parameters there. I recommend using only
the ES_HEAP_SIZE parameter. Set it to at most 50% of RAM (as you did), but do
not set different values in other places, and do not use the MIN or MAX
variants. ES_HEAP_SIZE does the right thing for you.
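
For example (the values are only illustrative), that means setting the heap in
exactly one place:

ES_HEAP_SIZE=16g bin/elasticsearch      # when starting from the bin folder
set.default.ES_HEAP_SIZE=16384          # when starting via the service wrapper (value in MB)

and dropping the ES_MIN_MEM / ES_MAX_MEM lines from elasticsearch.yml; they are
environment variables for the startup script, not elasticsearch.yml settings,
so they do nothing there.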

With bootstrap mlockall, you can lock the ES JVM process into main memory;
this helps a lot with performance and fast GC, as it reduces swapping. You can
also test whether this setting invokes the OOM killer, as it increases the
pressure on main memory (but, as said, there is plenty of room on your
machine).
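
For example, a minimal sketch (exact steps depend on how ES is installed): put

bootstrap.mlockall: true

into elasticsearch.yml, make sure the ES user is allowed to lock that much
memory (ulimit -l unlimited, or MAX_LOCKED_MEMORY=unlimited in the service
wrapper config), and then verify it took effect with

curl -XGET "http://localhost:9200/_nodes/process?pretty"

which should report an mlockall flag per node.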

Jörg

On Thu, Mar 13, 2014 at 3:13 PM, Hicham Mallah <mallah...@gmail.com> wrote:

Hello Zachary,

Thanks for your reply and the pointer to the settings.

Here is the output of the commands you requested:

curl -XGET "http://localhost:9200/_nodes/stats"
curl -XGET "http://localhost:9200/_nodes"

Elastic Search stats · GitHub


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 3:57 PM, Zachary Tong <zachar...@gmail.com> wrote:

Can you gist up the output of these two commands?

curl -XGET "http://localhost:9200/_nodes/stats"

curl -XGET "http://localhost:9200/_nodes"

Those are my first-stop APIs for determining where memory is
being allocated.

By the way, these settings don't do anything anymore (they were
deprecated and removed):

index.cache.field.type: soft
index.term_index_interval: 256
index.term_index_divisor: 5

index.cache.field.max_size: 10000

max_size was replaced with indices.fielddata.cache.size and
accepts a value like "10gb" or "30%"
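
For example, the rough equivalent today would be a single line in
elasticsearch.yml (size it to your heap):

indices.fielddata.cache.size: 30%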

And this setting is just bad in general (it causes a lot of GC
thrashing):

index.cache.field.expire: 10m

On Thursday, March 13, 2014 8:42:54 AM UTC-4, Hicham Mallah wrote:

Now the process went back down to 25% usage; from here on it will go back up
and won't stop going up.

Sorry for spamming


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 2:37 PM, Hicham Mallah <
mallah...@gmail.com> wrote:

Here's the top after ~1 hour running:

  PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
  780 root      20   0  317g  14g 7.1g S 492.9 46.4 157:50.89 java


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 2:36 PM, Hicham Mallah <
mallah...@gmail.com> wrote:

Hello Jörg

Thanks for the reply, our swap size is 2g. I don't know at what % the process
gets killed, as the first time it happened I wasn't around, and I never let it
happen again since the website is live. After 2 hours of running, memory usage
has certainly gone up to 60%; I restart each time it reaches 70% (2h/2h30)
when I am around and testing config changes. When I am not around, I set up a
cron job to restart the server every 2 hours. The server also has apache and
mysql running on it.


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600


Downgraded java, but that didn't solve my issue:

java version "1.7.0_25"
Java(TM) SE Runtime Environment (build 1.7.0_25-b13)
Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)

I tried to keep it up without restarting, to see what would happen and whether
there would be an .hprof file to check. It went up to 71% memory, then the
whole server became very VERY slow and wasn't responding anymore, so I had to
restart the whole server...

I am out of ideas!


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600


We should narrow down the possible causes.

Are there any more unusual incidents in the ES logs?

What about memory leaks? Do you use plugins?

How do you use ES caches/filters? Can you see anything after taking a heap
memory profile?
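
For example, a sketch with the standard JDK tools (the pid and output path are
placeholders):

jps -l                                                   # find the elasticsearch pid
jmap -dump:live,format=b,file=/tmp/es-heap.hprof <pid>   # dump only live objects

The resulting .hprof can then be opened in Eclipse MAT or VisualVM to see
which objects retain the heap.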

Jörg

On Fri, Mar 14, 2014 at 3:32 PM, Hicham Mallah mallah.hicham@gmail.comwrote:

Downgraded java, that didn't solve my issue,

java version "1.7.0_25"
Java(TM) SE Runtime Environment (build 1.7.0_25-b13)
Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)

Tried to keep it up without restarting it to see what will happen, and to
see if there will be and .hprof file to check. It went up till 71% memory,
then the whole server was very VERY slow and wasn't responding anymore, so
I had to restart the whole server...

I am out of ideas!


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600

On Fri, Mar 14, 2014 at 10:47 AM, Hicham Mallah mallah.hicham@gmail.comwrote:

Hello again,

Sorry for the late reply, you're right I don't think it is the OOM
killer, I'll be downgrading my JVM to see what will happen.

Will let you know how it goes.

Thanks.


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600

On Fri, Mar 14, 2014 at 2:10 AM, Zachary Tong zacharyjtong@gmail.comwrote:

Also, can you confirm that it was actually the OOM killer nuking the
process? That will help us to determine if it was OOM killer or a JVM
crash.

OOM killer will log something through dmesg/syslog/etc, showing the OOM
killer killing 'java' and the pid (and some other info I think)

On Thursday, March 13, 2014 7:54:55 PM UTC-4, Zachary Tong wrote:

Yeah, your heap looks fine. I'm inclined to believe that the JVM
itself is crashing, as you suggest. There is at least one known fatal bug
in recent versions of the JVM which directly impacts Lucene/Elasticsearch:

Loading...
[LUCENE-5212] java 7u40 causes sigsegv and corrupt term vectors - ASF JIRA

The currently recommended version for ES is Java 1.7.0_u25. Try
downgrading to that and see if it helps. Sorry, I should have noticed your
JVM version earlier and made the suggestion...totally slipped by me!

-Zach

On Thursday, March 13, 2014 4:41:25 PM UTC-4, Hicham Mallah wrote:

Added index.codec.bloom.load: false to the elasticsearch.yml, doesn't
seem to have changed anything.

It is at 63% after 2 hours and a half up time.

Watching stuff on Bigdesk everything seems to be normal:

Memory:
Committed: 7.8gb
Used: 4.5gb

The used is going up and down normally, so heap is being cleaned no?

So it is working as expected, can't find anything, could it be Oracle
Java, should I try using OpenJDK at the place?!

Really thankful for you guys trying to help me


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 7:23 PM, joerg...@gmail.com <
joerg...@gmail.com> wrote:

There might be massive bloom cache loading for the Lucene codec. My
suggestion is to disable it. Try start ES nodes with

index:
codec:
bloom:
load: false

Bloom cache does not seem to fit perfectly into the diagnostics as
you described, that is just from the exception you sent.

Jörg

On Thu, Mar 13, 2014 at 6:01 PM, Hicham Mallah mallah...@gmail.comwrote:

If I start elasticsearch from the bin folder not using the wrapper,
I get these exceptions after about 2 mins:

Exception in thread "elasticsearch[Adam X][generic][T#5]"
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.fst.BytesStore.(BytesStore.
java:62)
at org.apache.lucene.util.fst.FST.(FST.java:366)
at org.apache.lucene.util.fst.FST.(FST.java:301)
at org.apache.lucene.codecs.BlockTreeTermsReader$
FieldReader.(BlockTreeTermsReader.java:481)
at org.apache.lucene.codecs.BlockTreeTermsReader.(
BlockTreeTermsReader.java:175)
at org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.
fieldsProducer(Lucene41PostingsFormat.java:437)
at org.elasticsearch.index.codec.postingsformat.
BloomFilterPostingsFormat$BloomFilteredFieldsProducer.(
BloomFilterPostingsFormat.java:131)
at org.elasticsearch.index.codec.postingsformat.
BloomFilterPostingsFormat.fieldsProducer(BloomFilterPostingsFormat.
java:102)
at org.elasticsearch.index.codec.postingsformat.
Elasticsearch090PostingsFormat.fieldsProducer(
Elasticsearch090PostingsFormat.java:79)
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$
FieldsReader.(PerFieldPostingsFormat.java:195)
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.
fieldsProducer(PerFieldPostingsFormat.java:244)
at org.apache.lucene.index.SegmentCoreReaders.(
SegmentCoreReaders.java:115)
at org.apache.lucene.index.SegmentReader.(
SegmentReader.java:95)
at org.apache.lucene.index.ReadersAndUpdates.getReader(
ReadersAndUpdates.java:141)
at org.apache.lucene.index.ReadersAndUpdates.
getReadOnlyClone(ReadersAndUpdates.java:235)
at org.apache.lucene.index.StandardDirectoryReader.open(
StandardDirectoryReader.java:100)
at org.apache.lucene.index.IndexWriter.getReader(
IndexWriter.java:382)
at org.apache.lucene.index.DirectoryReader.open(
DirectoryReader.java:111)
at org.apache.lucene.search.XSearcherManager.(
XSearcherManager.java:94)
at org.elasticsearch.index.engine.internal.InternalEngine.
buildSearchManager(InternalEngine.java:1462)
at org.elasticsearch.index.engine.internal.
InternalEngine.start(InternalEngine.java:279)
at org.elasticsearch.index.shard.service.InternalIndexShard.
performRecoveryPrepareForTranslog(InternalIndexShard.java:706)
at org.elasticsearch.index.gateway.local.
LocalIndexShardGateway.recover(LocalIndexShardGateway.java:201)
at org.elasticsearch.index.gateway.
IndexShardGatewayService$1.run(IndexShardGatewayService.java:189)
at java.util.concurrent.ThreadPoolExecutor.runWorker(
ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(
ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 6:47 PM, Hicham Mallah mallah...@gmail.comwrote:

Hello again,

setting bootstrap.mlockall to true seems to have made memory usage
slower, so like at the place of elasticsearch being killed after ~2 hours
it will be killed after ~3 hours. What I see weird, is why is the process
releasing memory one back to the OS but not doing it again? And why is it
not abiding by this DIRECT_SIZE setting too.

Thanks for the help


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 4:45 PM, Hicham Mallah <mallah...@gmail.com

wrote:

Jorg the issue is after the JVM giving back memory to the OS, it
starts going up again, and never gives back memory till its killed,
currently memory usage is up to 66% and still going up. HEAP size is
currently set to 8gb which is 1/4 the amount of memory I have. I tried it
at 16, 12, now at 8 but still facing the issue, lowering it more will cause
undesirable speed on the website. I'll try mlockall now, and see what
happens, but looking at Bigdesk on 18.6mb of swap is used.

I'll let you know what happens with mlockall on.


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 4:38 PM, joerg...@gmail.com <
joerg...@gmail.com> wrote:

From the gist, it alls looks very well. There is no reason for
the OOM killer to kick in. Your system is idle and there is much room for
everything.

Just to quote you:

"What's happening is that elasticsearch starts using memory till
50% then it goes back down to about 30% gradually then starts to go up
again gradually and never goes back down."

What you see is ES JVM process giving back memory to the OS,
which is no reason to worry about in regard to process killing. It is just
undesirable behaviour, and it is all a matter of correct configuration of
the heap size.

You should check if your ES starts from service wrapper or from
the bin folder, and adjust the parameters for heap size. I recommend only
to use ES_HEAP_SIZE parameter. Set this to max. 50% RAM (as you did). But
do not use different values at other places, or use MIN or MAX.
ES_HEAP_SIZE is doing the right thing for you.

With bootstrap mlockall, you can lock the ES JVM process into
main memory, this helps much regarding to performance and fast GC, as it
reduces swapping. You can test if this setting will invoke the OOM killer
too, as it increases the pressure on main memory (but, as said, there is
plenty room in your machine).

Jörg

On Thu, Mar 13, 2014 at 3:13 PM, Hicham Mallah <
mallah...@gmail.com> wrote:

Hello Zachary,

Thanks for your reply and the pointer to the settings.

Here are the output of the commands you requested:

curl -XGET "http://localhost:9200/_nodes/stats"
curl -XGET "http://localhost:9200/_nodes"

Elastic Search stats · GitHub


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 3:57 PM, Zachary Tong <
zachar...@gmail.com> wrote:

Can you gist up the output of these two commands?

curl -XGET "http://localhost:9200/_nodes/stats"

curl -XGET "http://localhost:9200/_nodes"

Those are my first-stop APIs for determining where memory is
being allocated.

By the way, these settings don't do anything anymore (they were
depreciated and removed):

index.cache.field.type: soft
index.term_index_interval: 256
index.term_index_divisor: 5

index.cache.field.max_size: 10000

max_size was replaced with indices.fielddata.cache.size and
accepts a value like "10gb" or "30%"

And this is just bad settings in general (causes a lot of GC
thrashing):

index.cache.field.expire: 10m

On Thursday, March 13, 2014 8:42:54 AM UTC-4, Hicham Mallah
wrote:

Now the process went back down to 25% usage, from now on it
will go back up, and won't stop going up.

Sorry for spamming


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 2:37 PM, Hicham Mallah <
mallah...@gmail.com> wrote:

Here's the top after ~1 hour running:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+
COMMAND
780 root 20 0 317g 14g 7.1g S 492.9 46.4 157:50.89
java


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 2:36 PM, Hicham Mallah <
mallah...@gmail.com> wrote:

Hello Jörg

Thanks for the reply, our swap size is 2g. I don't know at
what % the process is being killed as the first time it happened I wasn't
around, and then I never let that happen again as the website is online.
After 2 hours of running the memory in sure is going up to 60%, I am
restarting each time when it arrives at 70% (2h/2h30) when I am around and
testing config changes. When I am not around, I am setting a cron job to
restart the server every 2 hours. Server has apache and mysql running on it
too.


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 2:22 PM, joerg...@gmail.com <
joerg...@gmail.com> wrote:

You wrote, the OOM killer killed the ES process. With 32g
(and the swap size), the process must be very big. much more than you
configured. Can you give more info about the live size of the process,
after ~2 hours? Are there more application processes on the box?

Jörg

On Thu, Mar 13, 2014 at 12:46 PM, Hicham Mallah <
mallah...@gmail.com> wrote:

Hello,

I have been using elasticsearch on a ubuntu server for a
year now, and everything was going great. I had an index of 150,000,000
entries of domain names, running small queries on it, just filtering by 1
term no sorting no wildcard nothing. Now we moved servers, I have now a
CentOS 6 server, 32GB ram and running elasticserach but now we have 2
indices, of about 150 million entries each 32 shards, still running the
same queries on them nothing changed in the queries. But since we went
online with the new server, I have to restart elasticsearch every 2 hours
before OOM killer kills it.

What's happening is that elasticsearch starts using memory
till 50% then it goes back down to about 30% gradually then starts to go up
again gradually and never goes back down.

I have tried all the solutions I found on the net, I am a
developer not a server admin.

I have these setting in my service wrapper configuration

set.default.ES_HOME=/home/elasticsearch
set.default.ES_HEAP_SIZE=8192
set.default.MAX_OPEN_FILES=65535
set.default.MAX_LOCKED_MEMORY=10240
set.default.CONF_DIR=/home/elasticsearch/conf
set.default.WORK_DIR=/home/elasticsearch/tmp
set.default.DIRECT_SIZE=4g

Java Additional Parameters

wrapper.java.additional.1=-Delasticsearch-service
wrapper.java.additional.2=-Des.path.home=%ES_HOME%
wrapper.java.additional.3=-Xss256k
wrapper.java.additional.4=-XX:+UseParNewGC
wrapper.java.additional.5=-XX:+UseConcMarkSweepGC
wrapper.java.additional.6=-XX:
CMSInitiatingOccupancyFraction=75
wrapper.java.additional.7=-XX:+UseCMSInitiatingOccupancyOnly

wrapper.java.additional.8=-XX:+HeapDumpOnOutOfMemoryError
wrapper.java.additional.9=-Djava.awt.headless=true
wrapper.java.additional.10=-XX:MinHeapFreeRatio=40
wrapper.java.additional.11=-XX:MaxHeapFreeRatio=70
wrapper.java.additional.12=-XX:
CMSInitiatingOccupancyFraction=75
wrapper.java.additional.13=-XX:+
UseCMSInitiatingOccupancyOnly
wrapper.java.additional.15=-XX:MaxDirectMemorySize=4g

Initial Java Heap Size (in MB)

wrapper.java.initmemory=%ES_HEAP_SIZE%

And these in elasticsearch.yml
ES_MIN_MEM: 5g
ES_MAX_MEM: 5g
#index.store.type=mmapfs
index.cache.field.type: soft
index.cache.field.max_size: 10000
index.cache.field.expire: 10m
index.term_index_interval: 256
index.term_index_divisor: 5

*java version: *
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed
mode)

Elasticsearch version
"version" : {
"number" : "1.0.0",
"build_hash" : "a46900e9c72c0a623d71b54016357d5f94c8ea32",

"build_timestamp" : "2014-02-12T16:18:34Z",
"build_snapshot" : false,
"lucene_version" : "4.6"

}

Using elastica PHP

I have tried playing with values up and down to try to
make it work, but nothing is changing.

Please any help would be highly appreciated.


Hello team - I see the recommendation here in this thread to use JDK 1.7
update 25. However, on the website
http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/current/requirements.html,
update 51 is recommended.

Can you advise as to which version should be used?

-Amit.

On Sat, Mar 15, 2014 at 3:38 AM, joergprante@gmail.com <
joergprante@gmail.com> wrote:

We should narrow down the possible causes.

Do you see anything else unusual in the ES logs?

What about memory leaks? Do you use plugins?

How do you use the ES caches/filters? Can you see anything after taking a
heap memory profile?
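
For reference, one way to take such a heap profile for offline analysis,
assuming the JDK tools are on the PATH and <es-pid> is the Elasticsearch
process id:

jmap -dump:live,format=b,file=/tmp/es-heap.hprof <es-pid>

The resulting .hprof file can then be opened in a heap analyzer such as
Eclipse MAT or jvisualvm.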

Jörg

On Fri, Mar 14, 2014 at 3:32 PM, Hicham Mallah mallah.hicham@gmail.com wrote:

Downgraded Java, but that didn't solve my issue:

java version "1.7.0_25"
Java(TM) SE Runtime Environment (build 1.7.0_25-b13)
Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)

I tried to keep it up without restarting, to see what would happen and
whether there would be an .hprof file to check. It went up to 71% memory,
then the whole server became very, VERY slow and stopped responding, so I
had to restart the whole server...

I am out of ideas!


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600

On Fri, Mar 14, 2014 at 10:47 AM, Hicham Mallah mallah.hicham@gmail.com wrote:

Hello again,

Sorry for the late reply. You're right, I don't think it is the OOM
killer; I'll be downgrading my JVM to see what happens.

Will let you know how it goes.

Thanks.


Sincerely:
Hicham Mallah
Software Developer
mallah.hicham@gmail.com
00961 700 49 600

On Fri, Mar 14, 2014 at 2:10 AM, Zachary Tong zacharyjtong@gmail.com wrote:

Also, can you confirm that it was actually the OOM killer nuking the
process? That will help us determine whether it was the OOM killer or a JVM
crash.

The OOM killer will log something through dmesg/syslog/etc., showing it
killing 'java' along with the pid (and some other info, I think).
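
For example, on a CentOS 6 box the kernel log can be checked for OOM-killer
activity with something like:

dmesg | grep -iE 'killed process|out of memory'
grep -i oom-killer /var/log/messages

A JVM crash, by contrast, usually leaves an hs_err_pid<pid>.log file in the
directory Elasticsearch was started from, rather than a kernel log entry.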

On Thursday, March 13, 2014 7:54:55 PM UTC-4, Zachary Tong wrote:

Yeah, your heap looks fine. I'm inclined to believe that the JVM
itself is crashing, as you suggest. There is at least one known fatal bug
in recent versions of the JVM which directly impacts Lucene/Elasticsearch:

[LUCENE-5212] java 7u40 causes sigsegv and corrupt term vectors - ASF JIRA
https://issues.apache.org/jira/browse/LUCENE-5212

The currently recommended version for ES is Java 1.7.0_u25. Try
downgrading to that and see if it helps. Sorry, I should have noticed your
JVM version earlier and made the suggestion...totally slipped by me!

-Zach

On Thursday, March 13, 2014 4:41:25 PM UTC-4, Hicham Mallah wrote:

I added index.codec.bloom.load: false to elasticsearch.yml; it doesn't
seem to have changed anything.

It is at 63% after two and a half hours of uptime.

Watching it in Bigdesk, everything seems to be normal:

Memory:
Committed: 7.8gb
Used: 4.5gb

The used value is going up and down normally, so the heap is being cleaned, no?

So it seems to be working as expected and I can't find anything. Could it
be Oracle Java? Should I try using OpenJDK instead?!

Really thankful to you guys for trying to help me.


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 7:23 PM, joerg...@gmail.com <
joerg...@gmail.com> wrote:

There might be massive bloom cache loading for the Lucene codec. My
suggestion is to disable it. Try starting the ES nodes with

index:
  codec:
    bloom:
      load: false
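
The same setting can also be written as a single flat key in
elasticsearch.yml:

index.codec.bloom.load: false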

The bloom cache does not fit perfectly with the diagnostics as you
described them; that is just going by the exception you sent.

Jörg

On Thu, Mar 13, 2014 at 6:01 PM, Hicham Mallah mallah...@gmail.com wrote:

If I start Elasticsearch from the bin folder, not using the wrapper,
I get these exceptions after about 2 minutes:

Exception in thread "elasticsearch[Adam X][generic][T#5]" java.lang.OutOfMemoryError: Java heap space
    at org.apache.lucene.util.fst.BytesStore.<init>(BytesStore.java:62)
    at org.apache.lucene.util.fst.FST.<init>(FST.java:366)
    at org.apache.lucene.util.fst.FST.<init>(FST.java:301)
    at org.apache.lucene.codecs.BlockTreeTermsReader$FieldReader.<init>(BlockTreeTermsReader.java:481)
    at org.apache.lucene.codecs.BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:175)
    at org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:437)
    at org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat$BloomFilteredFieldsProducer.<init>(BloomFilterPostingsFormat.java:131)
    at org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat.fieldsProducer(BloomFilterPostingsFormat.java:102)
    at org.elasticsearch.index.codec.postingsformat.Elasticsearch090PostingsFormat.fieldsProducer(Elasticsearch090PostingsFormat.java:79)
    at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:195)
    at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:244)
    at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:115)
    at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:95)
    at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:141)
    at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:235)
    at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:100)
    at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:382)
    at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:111)
    at org.apache.lucene.search.XSearcherManager.<init>(XSearcherManager.java:94)
    at org.elasticsearch.index.engine.internal.InternalEngine.buildSearchManager(InternalEngine.java:1462)
    at org.elasticsearch.index.engine.internal.InternalEngine.start(InternalEngine.java:279)
    at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryPrepareForTranslog(InternalIndexShard.java:706)
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:201)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:189)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 6:47 PM, Hicham Mallah <mallah...@gmail.com> wrote:

Hello again,

Setting bootstrap.mlockall to true seems to have slowed the memory growth,
so instead of Elasticsearch being killed after ~2 hours it gets killed after
~3 hours. What I find weird is why the process releases memory back to the
OS once but never does it again, and why it is not abiding by the
DIRECT_SIZE setting either.

Thanks for the help


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 4:45 PM, Hicham Mallah <
mallah...@gmail.com> wrote:

Jörg, the issue is that after the JVM gives memory back to the OS, usage
starts going up again and never comes back down until the process is killed;
currently memory usage is up to 66% and still climbing. Heap size is
currently set to 8gb, which is 1/4 of the memory I have. I tried it
at 16, then 12, and now 8, but I'm still facing the issue; lowering it more
will hurt the website's speed. I'll try mlockall now and see what
happens, but looking at Bigdesk only 18.6mb of swap is used.

I'll let you know what happens with mlockall on.


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 4:38 PM, joerg...@gmail.com <
joerg...@gmail.com> wrote:

From the gist, it all looks fine. There is no reason for
the OOM killer to kick in: your system is mostly idle and there is plenty of
room for everything.

Just to quote you:

"What's happening is that elasticsearch starts using memory till
50% then it goes back down to about 30% gradually then starts to go up
again gradually and never goes back down."

What you see is the ES JVM process giving memory back to the OS,
which by itself is no reason to worry about the process being killed. It is
just undesirable behaviour, and it is all a matter of configuring the heap
size correctly.

You should check whether your ES starts from the service wrapper or from
the bin folder, and adjust the heap size parameters accordingly. I recommend
using only the ES_HEAP_SIZE parameter. Set it to at most 50% of RAM (as you
did), but do not set different values in other places, and do not use the
MIN or MAX variants; ES_HEAP_SIZE does the right thing for you.

With bootstrap.mlockall you can lock the ES JVM process into
main memory; this helps a lot with performance and fast GC because it
reduces swapping. You can also test whether this setting invokes the OOM
killer, since it increases the pressure on main memory (but, as said, there
is plenty of room on your machine).
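
A minimal sketch of that advice, reusing the wrapper and yml files already
quoted in this thread (values are only illustrative, and the memlock line
assumes ES runs under a dedicated elasticsearch user):

# service wrapper config: one heap setting, at most 50% of RAM
set.default.ES_HEAP_SIZE=8192

# elasticsearch.yml: lock the heap into RAM
bootstrap.mlockall: true

# /etc/security/limits.conf: allow that user to lock memory
elasticsearch - memlock unlimited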

Jörg

On Thu, Mar 13, 2014 at 3:13 PM, Hicham Mallah <
mallah...@gmail.com> wrote:

Hello Zachary,

Thanks for your reply and the pointer to the settings.

Here is the output of the commands you requested:

curl -XGET "http://localhost:9200/_nodes/stats"
curl -XGET "http://localhost:9200/_nodes"

Elastic Search stats · GitHub


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 3:57 PM, Zachary Tong <
zachar...@gmail.com> wrote:

Can you gist up the output of these two commands?

curl -XGET "http://localhost:9200/_nodes/stats"

curl -XGET "http://localhost:9200/_nodes"

Those are my first-stop APIs for determining where memory is
being allocated.

By the way, these settings don't do anything anymore (they
were deprecated and removed):

index.cache.field.type: soft
index.term_index_interval: 256
index.term_index_divisor: 5

index.cache.field.max_size: 10000

max_size was replaced with indices.fielddata.cache.size,
which accepts a value like "10gb" or "30%".

And this one is just a bad setting in general (it causes a lot of GC
thrashing):

index.cache.field.expire: 10m
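
Putting that together, the cache-related block from the original
elasticsearch.yml could be reduced to something like the following sketch
(the 30% value is only an example):

# replaces index.cache.field.max_size; the other index.cache.field.* and
# index.term_index_* keys listed above are deprecated/ignored in 1.0 and
# can simply be dropped
indices.fielddata.cache.size: 30%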

On Thursday, March 13, 2014 8:42:54 AM UTC-4, Hicham Mallah
wrote:

Now the process has gone back down to 25% usage; from now on it
will climb again and won't stop going up.

Sorry for spamming


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 2:37 PM, Hicham Mallah <
mallah...@gmail.com> wrote:

Here's the top output after ~1 hour of running:

  PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEM     TIME+ COMMAND
  780 root  20   0  317g  14g 7.1g S 492.9 46.4 157:50.89 java


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

On Thu, Mar 13, 2014 at 2:36 PM, Hicham Mallah <
mallah...@gmail.com> wrote:

Hello Jörg

Thanks for the reply; our swap size is 2g. I don't know at
what % the process gets killed, as the first time it happened I wasn't
around, and I never let it happen again since the website is online.
After 2 hours of running, memory usage is surely up to 60%; when I am around
and testing config changes I restart each time it reaches 70% (every
2h-2h30). When I am not around, I set up a cron job to restart Elasticsearch
every 2 hours. The server has Apache and MySQL running on it too.
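
For illustration only, a crontab entry for that kind of scheduled restart
could look like the line below (assuming the service-wrapper init script is
installed as /etc/init.d/elasticsearch; a timed restart is of course a
stopgap, not a fix):

0 */2 * * * /sbin/service elasticsearch restart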


Sincerely:
Hicham Mallah
Software Developer
mallah...@gmail.com
00961 700 49 600

