Is 4K thread count normal for ES node?


(Mustafa Sener) #1

Hi,
We have an ES (v0.15.2) node with a 4 GB max heap setting. Yesterday we got
an OutOfMemory exception. When we opened the heap dump file, the top
consumers were listed as:
Class Name                                                |   Objects | Shallow Heap  | Retained Heap
java.util.HashMap$Entry[]                                 |   636,265 |   235,395,736 | >= 2,052,914,024
java.lang.String                                          | 9,085,491 |   363,419,640 | >= 2,052,655,928
java.util.HashMap                                         |   165,404 |    11,909,088 | >= 2,045,058,128
org.elasticsearch.common.inject.internal.InheritingState |    34,792 |     3,061,696 | >= 1,940,516,872
java.util.HashSet                                         |     3,832 |        91,968 | >= 1,868,911,312
org.elasticsearch.common.inject.internal.WeakKeySet       |    34,792 |       835,008 | >= 1,864,349,368
java.util.HashMap$Entry                                   | 3,765,482 |   210,866,992 | >= 1,833,403,016
char[]                                                    | 9,023,850 | 1,745,608,584 | >= 1,745,608,584
org.apache.lucene.index.SegmentReader$CoreReaders         |     1,423 |       193,528 | >=   926,018,528
org.apache.lucene.index.TermInfosReader                   |     1,945 |       186,720 | >=   914,385,792
java.lang.Object[]                                        | 2,518,277 |   457,147,488 | >=   772,023,440
java.lang.Thread                                          |     3,578 |       629,728 | >=   717,638,544

As you can see, one of the biggest memory consumers is java.lang.Thread.
When we checked the number of threads, we saw that about 4,000 threads had
been created. Could that be the source of the OutOfMemory error? It seems
that most of the threads are related to async index requests.
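(Editor's note: beyond the heap shown in the dump, each Java thread also reserves native stack memory outside the heap, typically a few hundred KB up to 1 MB depending on -Xss, so ~4,000 threads can tie up a significant amount of memory on their own. As a sketch, the live thread count of a JVM can be confirmed in-process via ThreadMXBean; against a remote Elasticsearch process you would use jstack or JMX instead. The class name here is just for illustration.)

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Sketch: report the live and peak thread counts of the current JVM.
// For a remote node, the same numbers are reachable over JMX.
public class ThreadCount {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        System.out.println("live threads: " + mx.getThreadCount()
                + ", peak: " + mx.getPeakThreadCount());
    }
}
```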

--
Mustafa Sener
www.ifountain.com


(Shay Banon) #2

How are you indexing data? Is there a chance that you are overloading the indexing operations?

On Tuesday, June 21, 2011 at 9:25 PM, Mustafa Sener wrote:



(Mustafa Sener) #3

Hi,
That may be the case, since we index data using async requests. Maybe we
indexed too fast. Could that be the reason for the OutOfMemory exception? If
we had configured the thread pool size setting, could we have prevented the
OutOfMemory exception?

On Thu, Jun 23, 2011 at 2:42 PM, Shay Banon <shay.banon@elasticsearch.com> wrote:


--
Mustafa Sener
www.ifountain.com


(Shay Banon) #4

Yes, you can configure the index thread pool to be of the fixed size type, but then the problem would just manifest itself in the thread pool queue getting out of hand instead. Another option is to configure the index thread pool to be blocking, so that a request times out if it cannot be processed within a specific time. As you can see, all of these options mean you need to handle it one way or another on the client side.
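(Editor's note: as a sketch only, the blocking configuration described above would go in elasticsearch.yml roughly as below. The exact setting names changed between early ES releases, so verify them against the thread pool documentation for your version.)

```yaml
# elasticsearch.yml -- sketch only; setting names varied across early
# ES versions, check the thread pool module docs for your release.
threadpool:
  index:
    type: blocking      # block/time out instead of growing unbounded
    size: 30            # fixed number of worker threads
    wait_time: 30s      # fail the request if no thread frees up in time
```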

On Thursday, June 23, 2011 at 3:26 PM, Mustafa Sener wrote:

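(Editor's note: the client-side handling Shay describes can be sketched as a simple bound on in-flight async requests. This is an illustration, not the ES client API: `fakeIndexRequest` stands in for the real async index call, and the semaphore guarantees that a burst of submissions can never pile up more than a fixed number of outstanding operations on the node.)

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: cap the number of in-flight async index requests on the client
// side so a burst cannot queue an unbounded amount of work on the node.
public class IndexThrottle {
    static final int MAX_IN_FLIGHT = 4;             // tune to node capacity
    static final Semaphore permits = new Semaphore(MAX_IN_FLIGHT);
    static final AtomicInteger inFlight = new AtomicInteger();
    static final AtomicInteger maxSeen = new AtomicInteger();

    // Stand-in for the real async index call; tracks concurrency observed.
    static void fakeIndexRequest() throws InterruptedException {
        int now = inFlight.incrementAndGet();
        maxSeen.accumulateAndGet(now, Math::max);
        Thread.sleep(10);                           // pretend the node is busy
        inFlight.decrementAndGet();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(16);
        for (int i = 0; i < 100; i++) {
            permits.acquire();                      // blocks once MAX_IN_FLIGHT are pending
            pool.submit(() -> {
                try { fakeIndexRequest(); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                finally { permits.release(); }      // free a slot on completion
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println("max in-flight observed: " + maxSeen.get());
    }
}
```

The same back-pressure effect falls out naturally if you switch from fire-and-forget async requests to bounded batches (e.g. waiting on each batch's responses before submitting the next).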


(system) #5