Hi, I am new to Elasticsearch. I have a problem and I don't understand what this message really means:
[2018-05-25T18:50:02,710][WARN ][o.e.h.n.Netty4HttpServerTransport] [nodo de ramon] caught exception while handling client http traffic, closing connection [id: 0xcd8221fd, L:/192.168.0.234:9200 - R:/192.168.0.234:57662]
java.io.IOException: Conexión reinicializada por la máquina remota ("Connection reset by peer")
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:?]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[?:?]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[?:?]
at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[?:?]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[?:?]
at io.netty.buffer.PooledHeapByteBuf.setBytes(PooledHeapByteBuf.java:261) ~[netty-buffer-4.1.16.Final.jar:4.1.16.Final]
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1106) ~[netty-buffer-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:343) ~[netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.16.Final.jar:4.1.16.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
My Elastic Stack setup is as follows:
10 machines run Filebeat and ship their data to a single additional machine. On that machine, Logstash receives the Apache, MySQL, and DCIM logs, parses and filters them, and passes them on to Elasticsearch, which creates 5 indices per day. Kibana runs on that same machine as well. The three components share one machine with these characteristics:
Ubuntu 17, 64-bit, 16 GB RAM, 8-core Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10 GHz, and a 500 GB hard drive.
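For reference, the Elasticsearch output of the Logstash pipeline has roughly this shape (the index pattern and field name below are simplified placeholders, not my exact configuration):
output {
  elasticsearch {
    hosts => ["http://192.168.0.234:9200"]
    # one index per log type and day, which is where the 5 daily indices come from
    index => "%{[fields][log_type]}-%{+YYYY.MM.dd}"
  }
}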
My logstash.yml configuration (I am not including the filter, since it parses fine):
pipeline:
  batch:
    size: 125
    delay: 5
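As far as I understand, those nested keys are equivalent to the flat form used in the documentation:
pipeline.batch.size: 125
pipeline.batch.delay: 5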
pipelines.yml:
- pipeline.id: supertuberiaRamon
  pipeline.workers: 8
  queue.type: persisted
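A complete pipelines.yml entry would normally also have a path.config line pointing at the pipeline configuration, something like this (the path below is only an example, not my real one):
- pipeline.id: supertuberiaRamon
  path.config: "/etc/logstash/conf.d/*.conf"
  pipeline.workers: 8
  queue.type: persisted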
Elasticsearch configuration:
network.host: "192.168.0.234"
http.port: 9200
jvm.options:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms8g
-Xmx8g
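In case it is useful for the diagnosis, the node can be checked on that address with the standard health and heap-usage endpoints, for example:
curl 'http://192.168.0.234:9200/_cluster/health?pretty'
curl 'http://192.168.0.234:9200/_cat/nodes?v&h=name,heap.percent,heap.max,ram.percent'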
Kibana configuration:
server.port: 5601
server.host: "192.168.0.234"
elasticsearch.url: "http://192.168.0.234:9200"
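Likewise, Kibana can be checked on its own port with its status endpoint, for example:
curl 'http://192.168.0.234:5601/api/status'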
The rest of the settings are left at their defaults.
I am running Logstash, Elasticsearch, and Kibana 6.2.2, without X-Pack.
I leave the system running, and after about 4 hours the machine breaks down; the message above is the last thing it logged.
Can you help me?