Memory Explosion: Heap Dump in Less Than One Minute

Help! Elasticsearch was working fine, but now it's using up all its heap
space in a matter of minutes. I uninstalled the river and am performing no
queries. How do I diagnose the problem? 2-3 minutes after starting, it runs
out of heap space, and I'm not sure how to find out why.

Here is the profile of memory usage:

https://lh6.googleusercontent.com/-La0i_IrQBLA/U9mIyZZDYLI/AAAAAAAAFx0/SfnYVdKvFAw/s1600/elasticsearch-memory.png

And here is the console output. You can see it takes less than a minute
after starting to run out of memory. This isn't even enough time to examine
the indices through Marvel.

C:\elasticsearch-1.1.1\bin>elasticsearch
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
[2014-07-30 16:59:02,579][INFO ][node ] [Texas Twister] version[1.1.1], pid[8572], build[f1585f0/2014-04-16T14:27:12Z]
[2014-07-30 16:59:02,580][INFO ][node ] [Texas Twister] initializing ...
[2014-07-30 16:59:02,600][INFO ][plugins ] [Texas Twister] loaded [marvel], sites [marvel]
[2014-07-30 16:59:06,437][INFO ][node ] [Texas Twister] initialized
[2014-07-30 16:59:06,437][INFO ][node ] [Texas Twister] starting ...
[2014-07-30 16:59:06,691][INFO ][transport ] [Texas Twister] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/192.168.0.6:9300]}
[2014-07-30 16:59:09,862][INFO ][cluster.service ] [Texas Twister] new_master [Texas Twister][ShQRhZRFQnuZMTRCuvY9XQ][twilson-THINK][inet[/192.168.0.6:9300]], reason: zen-disco-join (elected_as_master)
[2014-07-30 16:59:09,902][INFO ][discovery ] [Texas Twister] elasticsearch/ShQRhZRFQnuZMTRCuvY9XQ
[2014-07-30 16:59:10,213][INFO ][http ] [Texas Twister] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/192.168.0.6:9200]}
[2014-07-30 16:59:11,631][INFO ][gateway ] [Texas Twister] recovered [65] indices into cluster_state
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid8572.hprof ...
Heap dump file created [814218130 bytes in 14.202 secs]
Exception in thread "elasticsearch[Texas Twister][generic][T#2]" java.lang.OutOfMemoryError: Java heap space
    at java.lang.Class.getDeclaredFields0(Native Method)
    at java.lang.Class.privateGetDeclaredFields(Class.java:2397)
    at java.lang.Class.getDeclaredFields(Class.java:1806)
    at org.apache.lucene.util.RamUsageEstimator.shallowSizeOfInstance(RamUsageEstimator.java:388)
    at org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.<init>(Lucene42DocValuesProducer.java:101)
    at org.apache.lucene.codecs.lucene42.Lucene42NormsFormat.normsProducer(Lucene42NormsFormat.java:75)
    at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:123)
    at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:96)
    at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:141)
    at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:235)
    at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:101)
    at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:382)
    at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:111)
    at org.apache.lucene.search.SearcherManager.<init>(SearcherManager.java:89)
    at org.elasticsearch.index.engine.internal.InternalEngine.buildSearchManager(InternalEngine.java:1471)
    at org.elasticsearch.index.engine.internal.InternalEngine.start(InternalEngine.java:279)
    at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryPrepareForTranslog(InternalIndexShard.java:699)
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:205)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:197)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)


What Java version? How much heap have you allocated, and how much RAM is on
the server?

Basically you have too much data for the heap size, so increasing it will
help.
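
For reference, the allocated heap and the amount of data on the node can both be pulled from the REST API while it is still up -- a quick sketch, assuming the default localhost:9200 binding and that curl is available:

curl "http://localhost:9200/_nodes/stats/jvm?pretty"
curl "http://localhost:9200/_cat/indices?v"

The first shows heap_used_in_bytes against heap_max_in_bytes per node; the second lists each index with its document count and on-disk size.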

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com


JDK 1.7.0_51

It has 512MB of heap, which has been enough -- I've been running it like that
for the past few months, and I only have two indexes and around 300-400
documents. This is a development instance running on my local machine.
It only started happening today.

-tom


Up that to 1GB and see if it starts.
512MB is pretty tiny; you're better off starting at 1-2GB if you can.
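
With the stock 1.x startup scripts the heap can be set through the ES_HEAP_SIZE environment variable, which configures both -Xms and -Xmx -- for example, on Windows:

C:\elasticsearch-1.1.1\bin>set ES_HEAP_SIZE=1g
C:\elasticsearch-1.1.1\bin>elasticsearch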

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com


After upping the heap to 1GB, memory usage seems to level off at around 750MB,
but there's still a problem in there somewhere. I'm getting a failure message,
and the Marvel dashboard isn't able to fetch data.

C:\elasticsearch-1.1.1\bin>elasticsearch
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
[2014-07-30 17:33:27,138][INFO ][node ] [Mondo] version[1.1.1], pid[10864], build[f1585f0/2014-04-16T14:27:12Z]
[2014-07-30 17:33:27,139][INFO ][node ] [Mondo] initializing ...
[2014-07-30 17:33:27,163][INFO ][plugins ] [Mondo] loaded [ldap-river, marvel], sites [marvel]
[2014-07-30 17:33:30,731][INFO ][node ] [Mondo] initialized
[2014-07-30 17:33:30,731][INFO ][node ] [Mondo] starting ...
[2014-07-30 17:33:31,027][INFO ][transport ] [Mondo] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/192.168.0.6:9300]}
[2014-07-30 17:33:34,202][INFO ][cluster.service ] [Mondo] new_master [Mondo][liyNQAHAS0-8f-qDDqa5Rg][twilson-THINK][inet[/192.168.0.6:9300]], reason: zen-disco-join (elected_as_master)
[2014-07-30 17:33:34,239][INFO ][discovery ] [Mondo] elasticsearch/liyNQAHAS0-8f-qDDqa5Rg
[2014-07-30 17:33:34,600][INFO ][http ] [Mondo] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/192.168.0.6:9200]}
[2014-07-30 17:33:35,799][INFO ][gateway ] [Mondo] recovered [66] indices into cluster_state
[2014-07-30 17:33:35,815][INFO ][node ] [Mondo] started
[2014-07-30 17:33:39,823][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:39,830][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:39,837][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:39,838][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:43,973][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:44,212][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:44,357][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:44,501][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:53,294][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:53,309][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:53,310][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:53,310][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:34:03,281][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:34:03,283][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:34:03,286][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:34:45,662][ERROR][marvel.agent.exporter ] [Mondo] create failure (index:[.marvel-2014.07.31] type: [node_stats]): UnavailableShardsException[[.marvel-2014.07.31][0] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@39b65640]
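
The shard state behind those failures can be inspected with the cluster health and cat shards APIs (a sketch, assuming curl against the default localhost:9200):

curl "http://localhost:9200/_cluster/health?pretty"
curl "http://localhost:9200/_cat/shards?v"

Shards listed as UNASSIGNED for .marvel-2014.07.31 would line up with both the query_fetch failures and the exporter timeout.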


Unless you are attached to the stats in today's Marvel index, it might be
easier to delete it than to try to recover the unavailable shards.
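
Concretely, that is a single delete of the index named in the error (assuming the node answers on localhost:9200):

curl -XDELETE "http://localhost:9200/.marvel-2014.07.31"

The Marvel agent should recreate a fresh daily index the next time it exports stats.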

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com


What exactly do I need to delete and how do I do it?


Look into Curator, which should help.

If you have just a single development instance, perhaps Marvel is overkill.
Do you need historical metrics? If not, just use some other plugin such as
head/bigdesk/hq.
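
If you do decide to drop it, something along these lines should do it -- a sketch: delete the old .marvel-* data indices while the node is still up, then stop the node and remove the plugin:

curl -XDELETE "http://localhost:9200/.marvel-*"
C:\elasticsearch-1.1.1\bin>plugin --remove marvel

The wildcard delete assumes the default 1.x behaviour that still allows wildcard index deletion (action.destructive_requires_name is false).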

Cheers,

Ivan

On Thu, Jul 31, 2014 at 10:52 AM, Tom Wilson twilson650@gmail.com wrote:

What exactly do I need to delete and how do I do it?

On Wednesday, July 30, 2014 5:45:03 PM UTC-7, Mark Walkom wrote:

Unless you are attached to the stats you have in the marvel index for
today it might be easier to delete them than try to recover the unavailable
shards.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com

On 31 July 2014 10:36, Tom Wilson twils...@gmail.com wrote:

Upping to 1GB, memory usage seems to level off at 750MB, but there's a
problem in there somewhere. I'm getting a failure message, and the marvel
dashboard isn't able to fetch.

C:\elasticsearch-1.1.1\bin>elasticsearch
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
[2014-07-30 17:33:27,138][INFO ][node ] [Mondo] version[1.1.1], pid[10864], build[f1585f0/2014-04-16T14:27:12Z]
[2014-07-30 17:33:27,139][INFO ][node ] [Mondo] initializing ...
[2014-07-30 17:33:27,163][INFO ][plugins ] [Mondo] loaded [ldap-river, marvel], sites [marvel]
[2014-07-30 17:33:30,731][INFO ][node ] [Mondo] initialized
[2014-07-30 17:33:30,731][INFO ][node ] [Mondo] starting ...
[2014-07-30 17:33:31,027][INFO ][transport ] [Mondo] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/192.168.0.6:9300]}
[2014-07-30 17:33:34,202][INFO ][cluster.service ] [Mondo] new_master [Mondo][liyNQAHAS0-8f-qDDqa5Rg][twilson-THINK][inet[/192.168.0.6:9300]], reason: zen-disco-join (elected_as_master)
[2014-07-30 17:33:34,239][INFO ][discovery ] [Mondo] elasticsearch/liyNQAHAS0-8f-qDDqa5Rg
[2014-07-30 17:33:34,600][INFO ][http ] [Mondo] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/192.168.0.6:9200]}
[2014-07-30 17:33:35,799][INFO ][gateway ] [Mondo] recovered [66] indices into cluster_state
[2014-07-30 17:33:35,815][INFO ][node ] [Mondo] started
[2014-07-30 17:33:39,823][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:39,830][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:39,837][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:39,838][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:43,973][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:44,212][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:44,357][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:44,501][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:53,294][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:53,309][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:53,310][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:33:53,310][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:34:03,281][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:34:03,283][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:34:03,286][DEBUG][action.search.type ] [Mondo] All shards failed for phase: [query_fetch]
[2014-07-30 17:34:45,662][ERROR][marvel.agent.exporter ] [Mondo] create failure (index:[.marvel-2014.07.31] type: [node_stats]): UnavailableShardsException[[.marvel-2014.07.31][0] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@39b65640]
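
That UnavailableShardsException means the primary shard of today's .marvel index never became active ([0] active after waiting 1m), which is likely also why the dashboard queries fail. A quick way to confirm which indices are still red -- again assuming the node is on localhost:9200; these calls are a sketch, not from the original output:

curl -XGET 'http://localhost:9200/_cluster/health?level=indices&pretty'
curl -XGET 'http://localhost:9200/_cat/shards/.marvel-2014.07.31?v'

The health call shows per-index status, and the _cat/shards call shows which of the .marvel shards are sitting UNASSIGNED.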

On Wednesday, July 30, 2014 5:30:29 PM UTC-7, Mark Walkom wrote:

Up that to 1GB and see if it starts.
512MB is pretty tiny; you're better off starting at 1-2GB if you can.
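
On 1.x the startup script should pick the heap size up from the ES_HEAP_SIZE environment variable. A sketch of the usual way to set it on Windows (not verified on your machine):

C:\elasticsearch-1.1.1\bin>set ES_HEAP_SIZE=1g
C:\elasticsearch-1.1.1\bin>elasticsearch

If that variable isn't honoured for some reason, setting ES_MIN_MEM and ES_MAX_MEM to 1g amounts to the same thing.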

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com

On 31 July 2014 10:28, Tom Wilson twils...@gmail.com wrote:

JDK 1.7.0_51

It has 512MB of heap, which was enough -- I've been running it like that for the past few months, and I only have two indexes and around 300-400 documents. This is a development instance running on my local machine. It only started misbehaving when I brought it up today.

-tom

On Wednesday, July 30, 2014 5:16:11 PM UTC-7, Mark Walkom wrote:

What java version? How much heap have you allocated and how much RAM
on the server?

Basically you have too much data for the heap size, so increasing it
will help.
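
If you want to see where the heap is going before bumping it, a couple of read-only calls (sketched here, assuming localhost:9200) will show how many indices and shards the node is carrying and how full the heap actually is:

curl -XGET 'http://localhost:9200/_cat/indices?v'
curl -XGET 'http://localhost:9200/_nodes/stats/jvm?pretty'

With marvel writing a new daily index, a small dev node can quietly accumulate dozens of indices over a few months, and every open shard costs heap even when it only holds a few hundred documents -- which fits the [66] indices recovered in the log above.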

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com

