Mismatched cluster status between Kibana and curl

Hello,

I'm getting two different results for the same cluster status request: Kibana shows one status, while curl returns the output below.

<my_laptop_name>:~ me$ curl -XGET es.mydomain.com:9200/_cluster/health/status?pretty
{
  "cluster_name" : "my-es",
  "status" : "red",
  "timed_out" : true,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
<my_laptop_name>:~ me$
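
For reference, this is a minimal sketch of the cluster-level health call without the extra path segment (the cluster health API also accepts a GET /_cluster/health/<index> form, where the trailing segment is treated as an index or alias that scopes the check):

curl -XGET 'es.mydomain.com:9200/_cluster/health?pretty'
# optionally block until a given status is reached; wait_for_status and timeout are documented parameters
curl -XGET 'es.mydomain.com:9200/_cluster/health?wait_for_status=yellow&timeout=30s&pretty'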

Any thoughts?

Welcome to our community! :smiley:

Sometimes Monitoring can lag behind the actual cluster status. How long has it been like that?

Around 3 weeks ago the cluster triggered the alert "Low disk watermark 85% exceeded"; the volumes on all nodes were running out of space. A couple of days later the cluster health changed to the state shown above. I've deleted the oldest shards and recovered almost half of the storage, but the cluster health is still showing timed_out: true.
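
For anyone hitting the same watermark alert, a rough sketch of the _cat calls that show per-node disk usage and the largest indices (hostname reused from the substituted name above; the column and sort parameters are just examples):

# per-node disk usage and shard allocation
curl -XGET 'es.mydomain.com:9200/_cat/allocation?v&h=node,disk.percent,disk.used,disk.avail,disk.total'
# indices sorted by on-disk size, largest first
curl -XGET 'es.mydomain.com:9200/_cat/indices?v&s=store.size:desc&h=index,store.size,pri,rep'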

Is es.mydomain.com a URI you have set up, or did you substitute that for a node name?
Can you try another node, or a direct node IP/URI?
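
Something like the following, as a minimal sketch; the node IP is a placeholder, and adjust the port if your HTTP listener isn't on 9200:

curl -XGET 'http://<node_ip>:9200/_cluster/health?pretty'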

I'm not really using es.mydomain.com as the domain name. I replaced the real node name just for posting the issue.

Ok, but do other nodes all report the same status?

Yes, all nodes report the same status. Here is the output from each node:

Node 1

[me@es01 ~]$ curl -XGET localhost:9500/_cluster/health/status?pretty
{
  "cluster_name" : "cla-elastic7",
  "status" : "red",
  "timed_out" : true,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
[me@es01 ~]$

Node 2

[me@es02 ~]$ curl -XGET localhost:9500/_cluster/health/status?pretty
{
  "cluster_name" : "cla-elastic7",
  "status" : "red",
  "timed_out" : true,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
[me@es02 ~]$

Node 3

[me@es03 ~]$ curl -XGET localhost:9500/_cluster/health/status?pretty
{
  "cluster_name" : "cla-elastic7",
  "status" : "red",
  "timed_out" : true,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
[me@es03 ~]$

Node 4

[me@es04 ~]$ curl -XGET localhost:9500/_cluster/health/status?pretty
{
  "cluster_name" : "cla-elastic7",
  "status" : "red",
  "timed_out" : true,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
[me@es04 ~]$

Node 5

[me@es05 ~]$ curl -XGET localhost:9500/_cluster/health/status?pretty
{
  "cluster_name" : "cla-elastic7",
  "status" : "red",
  "timed_out" : true,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
[me@es05 ~]$

That's kinda weird.

What do the Elasticsearch logs from one of the nodes show?
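
If it's a package (deb/rpm) install with default paths, the server log should be something like /var/log/elasticsearch/<cluster_name>.log; a quick way to pull the tail of it (paths here are assumptions, adjust to your layout):

sudo tail -n 200 /var/log/elasticsearch/cla-elastic7.log
# or, if Elasticsearch runs under systemd:
sudo journalctl -u elasticsearch --since "1 hour ago"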

Node 1 (Master)

[2021-06-23T02:16:00,000][INFO ][o.e.x.m.MlDailyMaintenanceService] [atl-cla-prodes01] triggering scheduled [ML] maintenance tasks
[2021-06-23T02:16:00,006][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [atl-cla-prodes01] Deleting expired data
[2021-06-23T02:16:00,010][INFO ][o.e.x.m.j.r.UnusedStatsRemover] [atl-cla-prodes01] Successfully deleted [0] unused stats documents
[2021-06-23T02:16:00,010][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [atl-cla-prodes01] Completed deletion of expired ML data
[2021-06-23T02:16:00,010][INFO ][o.e.x.m.MlDailyMaintenanceService] [atl-cla-prodes01] Successfully completed [ML] maintenance task: triggerDeleteExpiredDataTask
[2021-06-23T10:19:54,692][INFO ][o.e.x.m.a.TransportMonitoringMigrateAlertsAction] [atl-cla-prodes01] THREAD NAME: {}elasticsearch[atl-cla-prodes01][management][T#2]
[2021-06-23T20:00:00,323][INFO ][o.e.c.m.MetadataCreateIndexService] [atl-cla-prodes01] [claservers2021.06.24] creating index, cause [auto(bulk api)], templates [], shards [1]/[1]
[2021-06-23T20:00:00,647][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [claservers2021.06.24/6HB4OxdGTAuyqi8vXmEWWQ] create_mapping [_doc]
[2021-06-23T20:00:01,207][INFO ][o.e.c.r.a.AllocationService] [atl-cla-prodes01] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[claservers2021.06.24][0]]]).
[2021-06-23T20:00:01,399][INFO ][o.e.c.m.MetadataCreateIndexService] [atl-cla-prodes01] [.monitoring-kibana-7-2021.06.24] creating index, cause [auto(bulk api)], templates [.monitoring-kibana], shards [1]/[0]
[2021-06-23T20:00:01,405][INFO ][o.e.c.r.a.AllocationService] [atl-cla-prodes01] updating number_of_replicas to [1] for indices [.monitoring-kibana-7-2021.06.24]
[2021-06-23T20:00:01,807][INFO ][o.e.c.m.MetadataCreateIndexService] [atl-cla-prodes01] [.monitoring-es-7-2021.06.24] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[0]
[2021-06-23T20:00:01,816][INFO ][o.e.c.r.a.AllocationService] [atl-cla-prodes01] updating number_of_replicas to [1] for indices [.monitoring-es-7-2021.06.24]
[2021-06-23T20:00:02,095][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [claservers2021.06.24/6HB4OxdGTAuyqi8vXmEWWQ] update_mapping [_doc]
[2021-06-23T20:00:02,155][INFO ][o.e.c.m.MetadataCreateIndexService] [atl-cla-prodes01] [elmsapplogs-2021.06.24] creating index, cause [auto(bulk api)], templates [], shards [1]/[1]
[2021-06-23T20:00:02,690][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [elmsapplogs-2021.06.24/_Qkx1YAkQc6mhb1LDX8dqw] create_mapping [_doc]
[2021-06-23T20:00:02,695][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [elmsapplogs-2021.06.24/_Qkx1YAkQc6mhb1LDX8dqw] update_mapping [_doc]
[2021-06-23T20:00:02,780][INFO ][o.e.c.r.a.AllocationService] [atl-cla-prodes01] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[elmsapplogs-2021.06.24][0]]]).
[2021-06-23T20:00:02,897][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [elmsapplogs-2021.06.24/_Qkx1YAkQc6mhb1LDX8dqw] update_mapping [_doc]
[2021-06-23T20:00:03,524][INFO ][o.e.c.m.MetadataCreateIndexService] [atl-cla-prodes01] [.monitoring-logstash-7-2021.06.24] creating index, cause [auto(bulk api)], templates [.monitoring-logstash], shards [1]/[0]
[2021-06-23T20:00:03,533][INFO ][o.e.c.r.a.AllocationService] [atl-cla-prodes01] updating number_of_replicas to [1] for indices [.monitoring-logstash-7-2021.06.24]
[2021-06-23T20:00:04,170][INFO ][o.e.c.r.a.AllocationService] [atl-cla-prodes01] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-logstash-7-2021.06.24][0]]]).
[2021-06-23T20:00:04,464][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [elmsapplogs-2021.06.24/_Qkx1YAkQc6mhb1LDX8dqw] update_mapping [_doc]
[2021-06-23T20:00:09,716][INFO ][o.e.c.m.MetadataCreateIndexService] [atl-cla-prodes01] [firewalls-asa2021.06.24] creating index, cause [auto(bulk api)], templates [], shards [1]/[1]
[2021-06-23T20:00:10,046][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] create_mapping [_doc]
[2021-06-23T20:00:10,128][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] update_mapping [_doc]
[2021-06-23T20:00:10,208][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] update_mapping [_doc]
[2021-06-23T20:00:10,273][INFO ][o.e.c.r.a.AllocationService] [atl-cla-prodes01] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[firewalls-asa2021.06.24][0]]]).
[2021-06-23T20:00:10,327][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] update_mapping [_doc]
[2021-06-23T20:00:10,413][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] update_mapping [_doc]
[2021-06-23T20:00:10,477][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] update_mapping [_doc]
[2021-06-23T20:00:10,536][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] update_mapping [_doc]
[2021-06-23T20:00:10,624][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] update_mapping [_doc]
[2021-06-23T20:00:13,086][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [claservers2021.06.24/6HB4OxdGTAuyqi8vXmEWWQ] update_mapping [_doc]
[2021-06-23T20:00:16,531][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] update_mapping [_doc]
[2021-06-23T20:00:36,204][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] update_mapping [_doc]
[2021-06-23T20:00:36,684][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] update_mapping [_doc]
[2021-06-23T20:00:41,603][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] update_mapping [_doc]
[2021-06-23T20:02:03,128][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] update_mapping [_doc]
[2021-06-23T20:05:06,491][INFO ][o.e.c.m.MetadataCreateIndexService] [atl-cla-prodes01] [windows2021.06.24] creating index, cause [auto(bulk api)], templates [], shards [1]/[1]
[2021-06-23T20:05:06,804][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [windows2021.06.24/v5smNHsVQMeLLDMYeELhvA] create_mapping [_doc]
[2021-06-23T20:05:06,920][INFO ][o.e.c.r.a.AllocationService] [atl-cla-prodes01] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[windows2021.06.24][0]]]).
[2021-06-23T20:21:15,561][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] update_mapping [_doc]
[2021-06-23T20:52:51,550][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] update_mapping [_doc]
[2021-06-23T21:00:00,000][INFO ][o.e.x.m.e.l.LocalExporter] [atl-cla-prodes01] cleaning up [2] old indices
[2021-06-23T21:00:00,002][INFO ][o.e.c.m.MetadataDeleteIndexService] [atl-cla-prodes01] [.monitoring-logstash-7-2021.06.17/rV4zJ7nKR22Q0-uLvosHAg] deleting index
[2021-06-23T21:00:00,002][INFO ][o.e.c.m.MetadataDeleteIndexService] [atl-cla-prodes01] [.monitoring-es-7-2021.06.17/IkyjubKURoSFS28RfV2bAg] deleting index
[2021-06-23T21:30:00,001][INFO ][o.e.x.s.SnapshotRetentionTask] [atl-cla-prodes01] starting SLM retention snapshot cleanup task
[2021-06-23T21:30:00,002][INFO ][o.e.x.s.SnapshotRetentionTask] [atl-cla-prodes01] there are no repositories to fetch, SLM retention snapshot cleanup task complete
[2021-06-24T02:16:00,001][INFO ][o.e.x.m.MlDailyMaintenanceService] [atl-cla-prodes01] triggering scheduled [ML] maintenance tasks
[2021-06-24T02:16:00,006][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [atl-cla-prodes01] Deleting expired data
[2021-06-24T02:16:00,009][INFO ][o.e.x.m.j.r.UnusedStatsRemover] [atl-cla-prodes01] Successfully deleted [0] unused stats documents
[2021-06-24T02:16:00,009][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [atl-cla-prodes01] Completed deletion of expired ML data
[2021-06-24T02:16:00,009][INFO ][o.e.x.m.MlDailyMaintenanceService] [atl-cla-prodes01] Successfully completed [ML] maintenance task: triggerDeleteExpiredDataTask
[2021-06-24T06:30:38,957][INFO ][o.e.c.m.MetadataMappingService] [atl-cla-prodes01] [firewalls-asa2021.06.24/MJKBCQy3QLmILQBrdG5snQ] update_mapping [_doc]

Node 2

[2021-06-22T21:43:06,737][WARN ][r.suppressed             ] [atl-cla-prodes02] path: _search, params: {callback=jQuery695199615_353575044, source={"size":1,"query":{"filtered":{"query":{"match_all":{}}}},"script_fields":{"Java Properties":{"script":"import java.lang.*;\nSystem.getProperties();"}}}, _=1064940098}
java.lang.IllegalStateException: source and source_content_type parameters are required
	at org.elasticsearch.rest.RestRequest.contentOrSourceParam(RestRequest.java:491) ~[elasticsearch-7.12.0.jar:7.12.0]
	at org.elasticsearch.rest.RestRequest.withContentOrSourceParamParserOrNull(RestRequest.java:465) ~[elasticsearch-7.12.0.jar:7.12.0]
	at org.elasticsearch.rest.action.search.RestSearchAction.prepareRequest(RestSearchAction.java:112) ~[elasticsearch-7.12.0.jar:7.12.0]
	at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:83) ~[elasticsearch-7.12.0.jar:7.12.0]
	at org.elasticsearch.xpack.security.rest.SecurityRestFilter.handleRequest(SecurityRestFilter.java:91) ~[?:?]
	at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:247) [elasticsearch-7.12.0.jar:7.12.0]
	at org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:329) [elasticsearch-7.12.0.jar:7.12.0]
	at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:180) [elasticsearch-7.12.0.jar:7.12.0]
	at org.elasticsearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:325) [elasticsearch-7.12.0.jar:7.12.0]
	at org.elasticsearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:390) [elasticsearch-7.12.0.jar:7.12.0]
	at org.elasticsearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:307) [elasticsearch-7.12.0.jar:7.12.0]
	at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:31) [transport-netty4-client-7.12.0.jar:7.12.0]
	at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:17) [transport-netty4-client-7.12.0.jar:7.12.0]
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at org.elasticsearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:47) [transport-netty4-client-7.12.0.jar:7.12.0]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-handler-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:832) [?:?]

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.