Ok,
On one node I also observed over 150 instances of
NioClientSocketChannel, but at only 16 MB each. So getting netty to
reduce the channel buffer afterwards won't work...
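(For reference, a minimal sketch of the behaviour I mean, using the
repackaged Netty 3 buffer API that shows up in the heap dump. A dynamic
channel buffer grows to fit the largest message written through it, but
clearing it only resets the indices, so the big backing array stays
referenced:)

import org.elasticsearch.common.netty.buffer.ChannelBuffer;
import org.elasticsearch.common.netty.buffer.ChannelBuffers;

public class DynamicBufferGrowth {
    public static void main(String[] args) {
        // Starts small (a few hundred bytes of backing array).
        ChannelBuffer buf = ChannelBuffers.dynamicBuffer();
        System.out.println("initial capacity: " + buf.capacity());

        // Writing one 16 MB message forces the backing array to grow to fit it.
        buf.writeBytes(new byte[16 * 1024 * 1024]);
        System.out.println("after large write: " + buf.capacity());

        // clear() only resets the reader/writer indices; the 16 MB array is
        // still referenced by the buffer, so it stays on the heap for as long
        // as the channel keeps the buffer around.
        buf.clear();
        System.out.println("after clear: " + buf.capacity());
    }
}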
I never have more than 10 threads querying in parallel, so I'm a
little surprised that there are more than 150 instances.
Is this related to the scroll window of 5 min? Or maybe to some
errors/timeouts on the search side which don't cause elasticsearch to
close the channel?
Thanks,
Thibaut
150 instances of "org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel", loaded by "sun.misc.Launcher$AppClassLoader @ 0x77cc1e800" occupy 732,664,720 (48.63%) bytes.

Biggest instances:

org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797d23f40 - 16,778,904 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797d2c580 - 16,778,904 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797d22c08 - 16,778,856 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797d271a8 - 16,778,856 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797d299a8 - 16,778,856 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797d42d48 - 16,778,856 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797e0c3e0 - 16,778,856 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797ea3570 - 16,778,856 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797f643a8 - 16,778,856 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797cdcaf0 - 16,778,808 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797cdea28 - 16,778,808 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797d00a18 - 16,778,808 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797e0d098 - 16,778,808 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797f42f48 - 16,778,808 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797f4bad0 - 16,778,808 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797f55cb0 - 16,778,808 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797f56988 - 16,778,808 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797f5a9a0 - 16,778,808 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797f5bf30 - 16,778,808 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797f63d20 - 16,778,808 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797f665c8 - 16,778,808 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797f69b60 - 16,778,808 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797f6b7b0 - 16,778,808 (1.11%) bytes.
org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketChannel @ 0x797f6d0e8 - 16,778,808 (1.11%) bytes.
On Tue, Jul 19, 2011 at 8:58 PM, Shay Banon
shay.banon@elasticsearch.com wrote:
Heya,
On Tue, Jul 19, 2011 at 1:19 PM, Thibaut Britz
thibaut.britz@trendiction.com wrote:
Hi,
I'm scrolling over a scan:
SearchRequestBuilder srb = searchclient.getClient()
        .prepareSearch(indexnames.toArray(new String[0]))
        .setSearchType(SearchType.SCAN)
        .setScroll("10m");
I only requested the keys, but I just saw that I was requesting 10000
hits per shard. I have more than 50 shards, though not all of them
return results, so I assume I got about 200000 results per scroll.
I've limited that now.
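For reference, roughly what the limited version looks like (sketch
only: the size of 100 is just an example, size is per shard for a scan,
and searchclient/indexnames are the same as in the snippet above):

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchType;
import org.elasticsearch.common.unit.TimeValue;

// Start the scan with a small per-shard batch size.
SearchResponse scan = searchclient.getClient()
        .prepareSearch(indexnames.toArray(new String[0]))
        .setSearchType(SearchType.SCAN)
        .setScroll("10m")
        .setSize(100)   // per shard, so roughly 100 * number_of_shards hits per page
        .execute().actionGet();

// Page through the results with the scroll id.
String scrollId = scan.getScrollId();
while (true) {
    SearchResponse page = searchclient.getClient()
            .prepareSearchScroll(scrollId)
            .setScroll(new TimeValue(10 * 60 * 1000))
            .execute().actionGet();
    scrollId = page.getScrollId();
    if (page.getHits().hits().length == 0) {
        break;   // scan exhausted
    }
    // process page.getHits() ...
}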
Would changing the cached pool to a thread pool which releases its
threads after a few seconds fix this?
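(What I mean is something like this generic java.util.concurrent
sketch, i.e. a cached-style pool with a short keep-alive, not a
specific elasticsearch pool:)

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Same shape as Executors.newCachedThreadPool(), but idle worker
// threads are released after 5 seconds instead of 60.
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        0, Integer.MAX_VALUE,
        5L, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>());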
No, that's not how netty works... I started a discussion with Trustin
to see if we can improve on it.
Thanks,
Thibaut
On Tue, Jul 19, 2011 at 12:01 AM, Shay Banon
shay.banon@elasticsearch.com wrote:
Are you sending large messages around? For example, large bulk
requests, or asking for a very large number of hits in one request?
On Mon, Jul 18, 2011 at 11:40 PM, Thibaut Britz
thibaut.britz@trendiction.com wrote:
Hi,
the data is being kept in the buffer array of the
BigEndianHeapChannelBuffer instances in netty (see screenshot).
at
org.elasticsearch.common.netty.buffer.BigEndianHeapChannelBuffer
org.elasticsearch.common.netty.buffer.DynamicChannelBuffer
org.elasticsearch.transport.netty.SizeHeaderFrameDecoder
org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext
org.elasticsearch.common.netty.channel.DefaultChannelPipeline
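As far as I understand it, a size-header decoder can't hand anything to
the next handler until the whole frame has arrived, so the cumulation
buffer has to grow to the full message size. A hypothetical Netty 3
style sketch of that pattern (not the actual SizeHeaderFrameDecoder):

import org.elasticsearch.common.netty.buffer.ChannelBuffer;
import org.elasticsearch.common.netty.channel.Channel;
import org.elasticsearch.common.netty.channel.ChannelHandlerContext;
import org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder;

public class SizePrefixedFrameDecoder extends FrameDecoder {
    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel,
                            ChannelBuffer buffer) throws Exception {
        if (buffer.readableBytes() < 4) {
            return null;                      // wait for the 4-byte size header
        }
        buffer.markReaderIndex();
        int length = buffer.readInt();
        if (buffer.readableBytes() < length) {
            buffer.resetReaderIndex();
            return null;                      // keep accumulating; the buffer grows
        }
        return buffer.readBytes(length);      // pass the complete frame downstream
    }
}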
Thanks,
Thibaut
On Mon, Jul 18, 2011 at 6:57 PM, Shay Banon
shay.banon@elasticsearch.com wrote:
Can you dig down a bit more and see where it's being used?
On Mon, Jul 18, 2011 at 1:24 PM, Thibaut Britz
thibaut.britz@trendiction.com wrote:
Hi,
Our application was running out of memory and I was investigating the
issue. It turned out that elasticsearch was keeping over 600
megabytes of NioClientSocketChannel instances (4 of them larger than
100 MB), which aren't cleaned up by the GC.

I analyzed the heap in MAT but couldn't find any referenced data (see
screenshot). Is this due to an NIO buffer which isn't read out
completely and is kept in memory? strvalConnected is also set to
false, so the memory should get reclaimed.

I'm using a 0.16-4 snapshot with the native GC leak patch applied.
Any ideas what might cause this?
Thanks,
Thibaut