Sometimes I notice the warning message below on an ES data node:
[02:45:15,291][WARN ][transport.netty ] [Lin Sun] Exception caught on netty layer [[id: 0x339db231, /10.2.216.60:1705 => /10.2.216.10:9300]]
java.io.IOException: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full
        at sun.nio.ch.SocketDispatcher.write0(Native Method)
        at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:33)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:100)
        at sun.nio.ch.IOUtil.write(IOUtil.java:71)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
        at org.elasticsearch.common.netty.channel.socket.nio.SocketSendBufferPool$UnpooledSendBuffer.transferTo(SocketSendBufferPool.java:202)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.write0(NioWorker.java:470)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.writeFromTaskLoop(NioWorker.java:393)
        at org.elasticsearch.common.netty.channel.socket.nio.NioSocketChannel$WriteTask.run(NioSocketChannel.java:268)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.processWriteTaskQueue(NioWorker.java:269)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:200)
        at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.elasticsearch.common.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
[03:17:17,152][INFO ][monitor.jvm ] [Lin Sun] [gc][ConcurrentMarkSweep][2] took [15.1m]/[15.1m], reclaimed [4.1gb], leaving [3.8gb] used, max [8.1gb]
[03:36:42,138][INFO ][monitor.jvm ] [Lin Sun] [gc][ParNew][20145] took [44s]/[13m], reclaimed [122.9mb], leaving [3.8gb] used, max [8.1gb]
[04:11:59,017][INFO ][monitor.jvm ] [Lin Sun] [gc][ParNew][20146] took [42.4s]/[13.7m], reclaimed [135.7mb], leaving [3.8gb] used, max [8.1gb]
At that time one of our databases (a large one) was not working properly and had become slow; after a restart it returned to normal.
Is this a memory problem? If we increase the memory size, will the issue be solved, or do we have to do some maintenance work regularly?
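For reference, this is roughly how I have been checking heap usage between those GC events, using the nodes stats API. It is only a rough sketch: it assumes the node's HTTP port is reachable on localhost:9200, and the exact JSON layout of the jvm section may differ between ES versions.

import json
import urllib.request

# Pull per-node JVM stats from the cluster via the nodes stats API.
with urllib.request.urlopen("http://localhost:9200/_nodes/stats/jvm") as resp:
    stats = json.load(resp)

# Print the heap section for each node so it can be compared with the
# "leaving [3.8gb] used, max [8.1gb]" figures in the [monitor.jvm] log lines.
for node_id, node in stats["nodes"].items():
    print(node.get("name", node_id), node["jvm"]["mem"])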