Failed to start Kibana with HTTPS

Hi,
I have a cluster with two Elasticsearch nodes and two NGINX nodes.
I configured SSL/TLS and HTTPS between the two nodes and Kibana, but unfortunately Kibana does not open :frowning_face:

I followed, step by step, the guide in the link below:

This is the instance.yml I used to generate the certificates:

# add the instance information to the yml file
instances:
  - name: 'elk1'
    dns: [ 'node1.elastic.test.com' ]
  - name: 'elk2'
    dns: [ 'node2.elastic.test.com' ]
  - name: 'my-kibana'
    dns: [ 'kibana.local' ]
  - name: 'logstash'
    dns: [ 'logstash.local' ]

And this is my /etc/hosts:

172.22.34.36 node1.elastic.test.com node1 elk1
172.22.34.37 node2.elastic.test.com node2 elk2
127.0.0.1 kibana.local logstash.local

My certs directory contains:

drwxr-xr-x. 2 root root 32 May  3 09:00 ca
drwxr-xr-x. 2 root root 36 May  3 09:00 elk1
drwxr-xr-x. 2 root root 36 May  3 09:00 elk2
drwxr-xr-x. 2 root root 44 May  3 09:00 logstash
drwxr-xr-x. 2 root root 46 May  3 09:00 my-kibana

This is my elasticsearch.yml of elk1:

cluster.name: logServer
node.name: elk1
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: node1.elastic.test.com
http.port: 9200
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.key: certs/elk1.key
xpack.security.http.ssl.certificate: certs/elk1.crt
xpack.security.http.ssl.certificate_authorities: certs/ca.crt
xpack.security.transport.ssl.key: certs/elk1.key
xpack.security.transport.ssl.certificate: certs/elk1.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
discovery.seed_hosts: [ "node1.elastic.test.com","node2.elastic.test.com" ]
cluster.initial_master_nodes: [ "elk1" ]

And this is the elasticsearch.yml of elk2:

cluster.name: logServer
node.name: elk2
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: node2.elastic.test.com
http.port: 9200
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.key: certs/elk2.key
xpack.security.http.ssl.certificate: certs/elk2.crt
xpack.security.http.ssl.certificate_authorities: certs/ca.crt
xpack.security.transport.ssl.key: certs/elk2.key
xpack.security.transport.ssl.certificate: certs/elk2.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
discovery.seed_hosts: [ "node1.elastic.test.com","node2.elastic.test.com" ]

I generated the CA and server certificates with the below command:

bin/elasticsearch-certutil cert --keep-ca-key --pem --in ~/tmp/cert_blog/instance.yml --out ~/tmp/cert_blog/certs.zip
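
The relative certs/... paths in elasticsearch.yml are resolved against the Elasticsearch config directory, so the files from the generated zip have to be copied there on each node. A minimal sketch for elk1, assuming the zip was extracted to ~/tmp/cert_blog/certs and a package install (config directory /etc/elasticsearch):

# sketch: place the CA and the elk1 certificate where elasticsearch.yml expects them
sudo mkdir -p /etc/elasticsearch/certs
sudo cp ~/tmp/cert_blog/certs/ca/ca.crt /etc/elasticsearch/certs/
sudo cp ~/tmp/cert_blog/certs/elk1/elk1.crt ~/tmp/cert_blog/certs/elk1/elk1.key /etc/elasticsearch/certs/
# make the files readable by the elasticsearch service user
sudo chown -R root:elasticsearch /etc/elasticsearch/certs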

And after that, I set the built-in user passwords:

bin/elasticsearch-setup-passwords auto -u "https://node1.elastic.test.com:9200"
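
Before moving on to Kibana, the HTTPS layer on Elasticsearch itself can be verified with curl; a sketch, assuming the CA file is still under the extraction directory (it prompts for the elastic password):

# sketch: confirm Elasticsearch answers over HTTPS with a certificate signed by the generated CA
curl --cacert ~/tmp/cert_blog/certs/ca/ca.crt \
     -u elastic "https://node1.elastic.test.com:9200/_cluster/health?pretty"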

I've also enabled TLS for Kibana, and this is my kibana.yml on the elk1 server:

server.port: 5601
server.host: "kibana.local"
server.name: "my-kibana"
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/config/certs/my-kibana.crt
server.ssl.key: /etc/kibana/config/certs/my-kibana.key
elasticsearch.hosts: ["https://node1.elastic.test.com:9200","https://node2.elastic.test.com:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "RvRWiTcWaHQyxPT771oZ"
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/config/certs/ca.crt" ]
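
Since Kibana terminates TLS itself, it can also be probed directly on elk1, independently of NGINX; a sketch, using the same CA file referenced in kibana.yml (kibana.local resolves to 127.0.0.1 there):

# sketch: fetch only the response headers from Kibana's HTTPS endpoint
curl --cacert /etc/kibana/config/certs/ca.crt -I https://kibana.local:5601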

Everything seems to be OK, and this is the output of netstat -tnlp on elk1:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      13042/master
tcp        0      0 127.0.0.1:5601          0.0.0.0:*               LISTEN      1074/node
tcp        0      0 0.0.0.0:22022           0.0.0.0:*               LISTEN      12798/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      13042/master
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      1073/java
tcp6       0      0 :::22022                :::*                    LISTEN      12798/sshd
tcp6       0      0 172.22.34.36:9200       :::*                    LISTEN      12801/java
tcp6       0      0 172.22.34.36:9300       :::*                    LISTEN      12801/java
tcp6       0      0 :::5044                 :::*                    LISTEN      1073/java
tcp6       0      0 :::5045                 :::*                    LISTEN      1073/java

I also configured nginx.conf for each nginx node:

events {
    worker_connections 1024;
}
http {
    upstream kibana {
        server 172.22.34.36:5601;
    }
    server {
        listen 0.0.0.0:80;
        server_name kibana.local;
        error_log   /var/log/nginx/kibana.error.log;
        access_log  /var/log/nginx/kibana.access.log;
        return 301 https://kibana.local$request_uri;

        location / {
            rewrite ^/(.*) /$1 break;
            proxy_ignore_client_abort on;
            proxy_pass http://kibana;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  Host $http_host;
        }
    }
    upstream kibanaredirect {
        server 172.22.34.36:5601;
    }
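
For comparison, a minimal sketch of a complete redirect-plus-proxy layout: the certificate paths are placeholders, and the upstream is reached over HTTPS because Kibana itself has server.ssl.enabled: true.

events {
    worker_connections 1024;
}
http {
    upstream kibana {
        server 172.22.34.36:5601;
    }
    # plain-HTTP listener that only redirects to HTTPS
    server {
        listen 80;
        server_name kibana.local;
        return 301 https://kibana.local$request_uri;
    }
    # TLS listener that proxies to the Kibana backend
    server {
        listen 443 ssl;
        server_name kibana.local;
        ssl_certificate     /etc/nginx/certs/kibana-proxy.crt;   # placeholder path
        ssl_certificate_key /etc/nginx/certs/kibana-proxy.key;   # placeholder path
        location / {
            proxy_pass https://kibana;   # Kibana serves HTTPS itself
            proxy_set_header X-Real-IP       $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host            $http_host;
        }
    }
}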

After all this, I still cannot access Kibana or my cluster...
Where is the problem?

These are the Elasticsearch logs from elk2:

[2021-05-03T12:14:10,855][DEBUG][o.e.a.s.TransportSearchAction] [elk2] All shards failed for phase: [query]
org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [1012720866/965.8mb], which is larger than the limit of [986932838/941.2mb], real usage: [1012717144/965.8mb], new bytes reserved: [3722/3.6kb], usages [request=0/0b, fielddata=675/675b, in_flight_requests=3722/3.6kb, accounting=60919602/58mb]
        at org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService.checkParentLimit(HierarchyCircuitBreakerService.java:343) ~[elasticsearch-7.6.2.jar:7.6.2]
        at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.addEstimateBytesAndMaybeBreak(ChildMemoryCircuitBreaker.java:128) ~[elasticsearch-7.6.2.jar:7.6.2]
        at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:171) [elasticsearch-7.6.2.jar:7.6.2]
        at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:119) [elasticsearch-7.6.2.jar:7.6.2]
        at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:103) [elasticsearch-7.6.2.jar:7.6.2]
        at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:667) [elasticsearch-7.6.2.jar:7.6.2]
        at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:62) [transport-netty4-client-7.6.2.jar:7.6.2]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:326) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:300) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1478) [netty-handler-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1227) [netty-handler-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1274) [netty-handler-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:503) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:442) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:281) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:600) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:554) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050) [netty-common-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.43.Final.jar:4.1.43.Final]
        at java.lang.Thread.run(Thread.java:830) [?:?]
[2021-05-03T12:14:22,854][DEBUG][o.e.a.s.TransportSearchAction] [elk2] [.kibana_task_manager_2][0], node[veBpqUZrSIC9pX1SNrTvug], [P], s[STARTED], a[id=DNKjR5yyQ_aZlfhz7W5gmA]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[.kibana_task_manager], indicesOptions=IndicesOptions[ignore_unavailable=true, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_aliases_to_multiple_indices=true, forbid_closed_indices=true, ignore_aliases=false, ignore_throttled=true], types=[], routing='null', preference='null', requestCache=null, scroll=Scroll{keepAlive=5m}, maxConcurrentShardRequests=0, batchedReduceSize=512, preFilterShardSize=128, allowPartialSearchResults=false, localClusterAlias=null, getOrCreateAbsoluteStartMillis=-1, ccsMinimizeRoundtrips=true, source={"size":1000,"query":{"bool":{"must":[{"term":{"type":{"value":"task","boost":1.0}}},{"bool":{"must":[{"bool":{"should":[{"bool":{"must":[{"term":{"task.status":{"value":"idle","boost":1.0}}},{"range":{"task.runAt":{"from":null,"to":"now","include_lower":true,"include_upper":true,"boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"must":[{"bool":{"should":[{"term":{"task.status":{"value":"running","boost":1.0}}},{"term":{"task.status":{"value":"claiming","boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}},{"range":{"task.retryAt":{"from":null,"to":"now","include_lower":true,"include_upper":true,"boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"should":[{"exists":{"field":"task.schedule","boost":1.0}},{"bool":{"must":[{"term":{"task.taskType":{"value":"actions:.server-log","boost":1.0}}},{"range":{"task.attempts":{"from":null,"to":1,"include_lower":true,"include_upper":false,"boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"must":[{"term":{"task.taskType":{"value":"actions:.slack","boost":1.0}}},{"range":{"task.attempts":{"from":null,"to":1,"include_lower":true,"include_upper":false,"boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"must":[{"term":{"task.taskType":{"value":"actions:.email","boost":1.0}}},{"range":{"task.attempts":{"from":null,"to":1,"include_lower":true,"include_upper":false,"boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"must":[{"term":{"task.taskType":{"value":"actions:.index","boost":1.0}}},{"range":{"task.attempts":{"from":null,"to":1,"include_lower":true,"include_upper":false,"boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"must":[{"term":{"task.taskType":{"value":"actions:.pagerduty","boost":1.0}}},{"range":{"task.attempts":{"from":null,"to":1,"include_lower":true,"include_upper":false,"boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"must":[{"term":{"task.taskType":{"value":"actions:.webhook","boost":1.0}}},{"range":{"task.attempts":{"from":null,"to":1,"include_lower":true,"include_upper":false,"boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"must":[{"term":{"task.taskType":{"value":"alerting:siem.signals","boost":1.0}}},{"range":{"task.attempts":{"from":null,"to":3,"include_lower":true,"include_upper":false,"boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"must":[{"term":{"task.taskType":{"value":"vis_telemetry","boost":1.0}}},{"range":{"task.attempts":{"from":null,"to":3,"include_lower":true,"include_upper":false,"boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"must":[{"term":{"task.taskType":{"value":"lens_telemetry","boost":1.0}}},{"r
ange":{"task.attempts":{"from":null,"to":3,"include_lower":true,"include_upper":false,"boost":1.0}}}],"adjust_pure_negative":true,"boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},"version":false,"seq_no_primary_term":true,"sort":[{"_script":{"script":{"source":"\nif (doc['task.retryAt'].size()!=0) {\n  return doc['task.retryAt'].value.toInstant().toEpochMilli();\n}\nif (doc['task.runAt'].size()!=0) {\n  return doc['task.runAt'].value.toInstant().toEpochMilli();\n}\n    ","lang":"painless"},"type":"number","order":"asc"}}]}}]
org.elasticsearch.transport.RemoteTransportException: [elk1][172.22.34.36:9300][indices:data/read/search[phase/query]]
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [1005425218/958.8mb], which is larger than the limit of [986932838/941.2mb], real usage: [1005421496/958.8mb], new bytes reserved: [3722/3.6kb], usages [request=0/0b, fielddata=675/675b, in_flight_requests=3722/3.6kb, accounting=60919602/58mb]
        at org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService.checkParentLimit(HierarchyCircuitBreakerService.java:343) ~[elasticsearch-7.6.2.jar:7.6.2]
        at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.addEstimateBytesAndMaybeBreak(ChildMemoryCircuitBreaker.java:128) ~[elasticsearch-7.6.2.jar:7.6.2]
        at org.elasticsearch.transport.InboundHandler.handleRequest(InboundHandler.java:171) [elasticsearch-7.6.2.jar:7.6.2]
        at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:119) [elasticsearch-7.6.2.jar:7.6.2]
        at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:103) [elasticsearch-7.6.2.jar:7.6.2]
        at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:667) [elasticsearch-7.6.2.jar:7.6.2]
        at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:62) [transport-netty4-client-7.6.2.jar:7.6.2]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:326) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:300) [netty-codec-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.43.Final.jar:4.1.43.Final]

Can you show your Kibana log?

This is my kibana.stderr log:


 FATAL  Error: Port 5601 is already in use. Another instance of Kibana may be running!

Could not create APM Agent configuration: Too Many Requests

 FATAL  [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [999832712/953.5mb], which is larger than the limit of [986932838/941.2mb], real usage: [999832712/953.5mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=82222/80.2kb, in_flight_requests=0/0b, accounting=62595094/59.6mb], with { bytes_wanted=999832712 & bytes_limit=986932838 & durability="PERMANENT" } :: {"path":"/.kibana","query":{},"statusCode":429,"response":"{\"error\":{\"root_cause\":[{\"type\":\"circuit_breaking_exception\",\"reason\":\"[parent] Data too large, data for [<http_request>] would be [999832712/953.5mb], which is larger than the limit of [986932838/941.2mb], real usage: [999832712/953.5mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=82222/80.2kb, in_flight_requests=0/0b, accounting=62595094/59.6mb]\",\"bytes_wanted\":999832712,\"bytes_limit\":986932838,\"durability\":\"PERMANENT\"}],\"type\":\"circuit_breaking_exception\",\"reason\":\"[parent] Data too large, data for [<http_request>] would be [999832712/953.5mb], which is larger than the limit of [986932838/941.2mb], real usage: [999832712/953.5mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=82222/80.2kb, in_flight_requests=0/0b, accounting=62595094/59.6mb]\",\"bytes_wanted\":999832712,\"bytes_limit\":986932838,\"durability\":\"PERMANENT\"},\"status\":429}"}


 FATAL  Error: [config validation of [elasticsearch].hosts]: types that failed validation:
- [config validation of [elasticsearch].hosts.0]: expected value of type [string] but got [Array].
- [config validation of [elasticsearch].hosts.1.1]: expected URI with scheme [http|https] but got [172.22.34.37:9200].

Could not create APM Agent configuration: Authentication Exception

 FATAL  [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [1028411128/980.7mb], which is larger than the limit of [986932838/941.2mb], real usage: [1028411128/980.7mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=61167271/58.3mb], with { bytes_wanted=1028411128 & bytes_limit=986932838 & durability="PERMANENT" } :: {"path":"/.kibana_task_manager","query":{},"statusCode":429,"response":"{\"error\":{\"root_cause\":[{\"type\":\"circuit_breaking_exception\",\"reason\":\"[parent] Data too large, data for [<http_request>] would be [1028411128/980.7mb], which is larger than the limit of [986932838/941.2mb], real usage: [1028411128/980.7mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=61167271/58.3mb]\",\"bytes_wanted\":1028411128,\"bytes_limit\":986932838,\"durability\":\"PERMANENT\"}],\"type\":\"circuit_breaking_exception\",\"reason\":\"[parent] Data too large, data for [<http_request>] would be [1028411128/980.7mb], which is larger than the limit of [986932838/941.2mb], real usage: [1028411128/980.7mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=61167271/58.3mb]\",\"bytes_wanted\":1028411128,\"bytes_limit\":986932838,\"durability\":\"PERMANENT\"},\"status\":429}"}


 FATAL  [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [990446488/944.5mb], which is larger than the limit of [986932838/941.2mb], real usage: [990446488/944.5mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=61167271/58.3mb], with { bytes_wanted=990446488 & bytes_limit=986932838 & durability="PERMANENT" } :: {"path":"/.kibana","query":{},"statusCode":429,"response":"{\"error\":{\"root_cause\":[{\"type\":\"circuit_breaking_exception\",\"reason\":\"[parent] Data too large, data for [<http_request>] would be [990446488/944.5mb], which is larger than the limit of [986932838/941.2mb], real usage: [990446488/944.5mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=61167271/58.3mb]\",\"bytes_wanted\":990446488,\"bytes_limit\":986932838,\"durability\":\"PERMANENT\"}],\"type\":\"circuit_breaking_exception\",\"reason\":\"[parent] Data too large, data for [<http_request>] would be [990446488/944.5mb], which is larger than the limit of [986932838/941.2mb], real usage: [990446488/944.5mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=61167271/58.3mb]\",\"bytes_wanted\":990446488,\"bytes_limit\":986932838,\"durability\":\"PERMANENT\"},\"status\":429}"}


 FATAL  [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [998707688/952.4mb], which is larger than the limit of [986932838/941.2mb], real usage: [998707688/952.4mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=61167271/58.3mb], with { bytes_wanted=998707688 & bytes_limit=986932838 & durability="PERMANENT" } :: {"path":"/.kibana","query":{},"statusCode":429,"response":"{\"error\":{\"root_cause\":[{\"type\":\"circuit_breaking_exception\",\"reason\":\"[parent] Data too large, data for [<http_request>] would be [998707688/952.4mb], which is larger than the limit of [986932838/941.2mb], real usage: [998707688/952.4mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=61167271/58.3mb]\",\"bytes_wanted\":998707688,\"bytes_limit\":986932838,\"durability\":\"PERMANENT\"}],\"type\":\"circuit_breaking_exception\",\"reason\":\"[parent] Data too large, data for [<http_request>] would be [998707688/952.4mb], which is larger than the limit of [986932838/941.2mb], real usage: [998707688/952.4mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=61167271/58.3mb]\",\"bytes_wanted\":998707688,\"bytes_limit\":986932838,\"durability\":\"PERMANENT\"},\"status\":429}"}

Could not create APM Agent configuration: Too Many Requests

 FATAL  Error: Cluster client cannot be used after it has been closed.

Did you fix that?

And this is my kibana.stdout log:

{"type":"log","@timestamp":"2021-05-03T06:42:16Z","tags":["error","plugins","taskManager","taskManager"],"pid":6938,"message":"Failed to poll for work: [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [1017257372/970.1mb], which is larger than the limit of [986932838/941.2mb], real usage: [1017253488/970.1mb], new bytes reserved: [3884/3.7kb], usages [request=0/0b, fielddata=1112/1kb, in_flight_requests=3884/3.7kb, accounting=60560937/57.7mb], with { bytes_wanted=1017257372 & bytes_limit=986932838 & durability=\"PERMANENT\" } :: {\"path\":\"/.kibana_task_manager/_update_by_query\",\"query\":{\"ignore_unavailable\":true,\"refresh\":true,\"max_docs\":10,\"conflicts\":\"proceed\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"type\\\":\\\"task\\\"}},{\\\"bool\\\":{\\\"must\\\":[{\\\"bool\\\":{\\\"should\\\":[{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.status\\\":\\\"idle\\\"}},{\\\"range\\\":{\\\"task.runAt\\\":{\\\"lte\\\":\\\"now\\\"}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"bool\\\":{\\\"should\\\":[{\\\"term\\\":{\\\"task.status\\\":\\\"running\\\"}},{\\\"term\\\":{\\\"task.status\\\":\\\"claiming\\\"}}]}},{\\\"range\\\":{\\\"task.retryAt\\\":{\\\"lte\\\":\\\"now\\\"}}}]}}]}},{\\\"bool\\\":{\\\"should\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"task.schedule\\\"}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"actions:.server-log\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":1}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"actions:.slack\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":1}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"actions:.email\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":1}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"actions:.index\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":1}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"actions:.pagerduty\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":1}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"actions:.webhook\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":1}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"alerting:siem.signals\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":3}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"vis_telemetry\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":3}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"lens_telemetry\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":3}}}]}}]}}]}}]}},\\\"sort\\\":{\\\"_script\\\":{\\\"type\\\":\\\"number\\\",\\\"order\\\":\\\"asc\\\",\\\"script\\\":{\\\"lang\\\":\\\"painless\\\",\\\"source\\\":\\\"\\\\nif (doc['task.retryAt'].size()!=0) {\\\\n  return doc['task.retryAt'].value.toInstant().toEpochMilli();\\\\n}\\\\nif (doc['task.runAt'].size()!=0) {\\\\n  return doc['task.runAt'].value.toInstant().toEpochMilli();\\\\n}\\\\n    \\\"}}},\\\"seq_no_primary_term\\\":true,\\\"script\\\":{\\\"source\\\":\\\"ctx._source.task.ownerId=params.ownerId; ctx._source.task.status=params.status; 
ctx._source.task.retryAt=params.retryAt;\\\",\\\"lang\\\":\\\"painless\\\",\\\"params\\\":{\\\"ownerId\\\":\\\"kibana:0965a50f-d06c-4240-90ab-d5d17fe29d86\\\",\\\"status\\\":\\\"claiming\\\",\\\"retryAt\\\":\\\"2021-05-03T06:42:46.004Z\\\"}}}\",\"statusCode\":429,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"circuit_breaking_exception\\\",\\\"reason\\\":\\\"[parent] Data too large, data for [<http_request>] would be [1017257372/970.1mb], which is larger than the limit of [986932838/941.2mb], real usage: [1017253488/970.1mb], new bytes reserved: [3884/3.7kb], usages [request=0/0b, fielddata=1112/1kb, in_flight_requests=3884/3.7kb, accounting=60560937/57.7mb]\\\",\\\"bytes_wanted\\\":1017257372,\\\"bytes_limit\\\":986932838,\\\"durability\\\":\\\"PERMANENT\\\"}],\\\"type\\\":\\\"circuit_breaking_exception\\\",\\\"reason\\\":\\\"[parent] Data too large, data for [<http_request>] would be [1017257372/970.1mb], which is larger than the limit of [986932838/941.2mb], real usage: [1017253488/970.1mb], new bytes reserved: [3884/3.7kb], usages [request=0/0b, fielddata=1112/1kb, in_flight_requests=3884/3.7kb, accounting=60560937/57.7mb]\\\",\\\"bytes_wanted\\\":1017257372,\\\"bytes_limit\\\":986932838,\\\"durability\\\":\\\"PERMANENT\\\"},\\\"status\\\":429}\"}"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","cli","config"],"pid":6938,"message":"Reloading logging configuration due to SIGHUP."}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","config"],"pid":6938,"message":"New logging configuration:\n{\n  \"ops\": {\n    \"interval\": 5000\n  },\n  \"logging\": {\n    \"silent\": false,\n    \"quiet\": false,\n    \"verbose\": false,\n    \"events\": {},\n    \"dest\": \"stdout\",\n    \"filter\": {},\n    \"json\": true,\n    \"timezone\": \"UTC\",\n    \"rotate\": {\n      \"enabled\": false,\n      \"everyBytes\": 10485760,\n      \"keepFiles\": 7,\n      \"pollingInterval\": 10000,\n      \"usePolling\": false\n    }\n  }\n}"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","cli","config"],"pid":6938,"message":"Reloaded logging configuration due to SIGHUP."}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","monitoring","kibana-monitoring"],"pid":6938,"message":"Re-initializing Kibana Monitoring due to SIGHUP"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins-system"],"pid":6938,"message":"Stopping all plugins."}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","bfetch"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","graph"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","apm"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","cloud"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","spaces"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","home"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","data"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","share"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","translations"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","apm_oss"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","canvas"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","metrics"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","usageCollection"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","security"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","features"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","timelion"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","code"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","encryptedSavedObjects"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","infra"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","licensing"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","siem"],"pid":6938,"message":"Stopping plugin"}
{"type":"log","@timestamp":"2021-05-03T06:42:29Z","tags":["info","plugins","taskManager"],"pid":6938,"message":"Stopping plugin"}

That suggests something is asking Kibana to stop.

What is your heap size for your Elasticsearch nodes?
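
One way to check the configured and used heap per node; a sketch, reusing the CA path and elastic user from above:

# sketch: show current heap usage and maximum heap on every node
curl --cacert ~/tmp/cert_blog/certs/ca/ca.crt -u elastic \
     "https://node1.elastic.test.com:9200/_cat/nodes?v&h=name,heap.current,heap.percent,heap.max"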

How can I increase the heap size?!?

Yes, I had started Kibana once with systemctl and then also ran service kibana start.
I think I have fixed that now.

I found it: in jvm.options it was set to 1G, and I changed it to 4G...
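
For anyone following along, that setting is the Xms/Xmx pair in jvm.options (for a package install typically /etc/elasticsearch/jvm.options); the two values should be kept equal, and Elasticsearch has to be restarted afterwards. A sketch:

# /etc/elasticsearch/jvm.options -- give the JVM a fixed 4 GB heap
-Xms4g
-Xmx4g

# then restart the node so the new heap size takes effect
sudo systemctl restart elasticsearch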

After running only one instance of Kibana and also fixing the Elasticsearch heap size, this is my kibana.stdout log:

{"type":"log","@timestamp":"2021-05-03T08:09:59Z","tags":["status","plugin:snapshot_restore@7.6.2","info"],"pid":2915,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2021-05-03T08:09:59Z","tags":["status","plugin:input_control_vis@7.6.2","info"],"pid":2915,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2021-05-03T08:09:59Z","tags":["status","plugin:kibana_react@7.6.2","info"],"pid":2915,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2021-05-03T08:09:59Z","tags":["status","plugin:management@7.6.2","info"],"pid":2915,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2021-05-03T08:09:59Z","tags":["status","plugin:navigation@7.6.2","info"],"pid":2915,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2021-05-03T08:09:59Z","tags":["status","plugin:region_map@7.6.2","info"],"pid":2915,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2021-05-03T08:09:59Z","tags":["status","plugin:telemetry@7.6.2","info"],"pid":2915,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2021-05-03T08:09:59Z","tags":["status","plugin:timelion@7.6.2","info"],"pid":2915,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2021-05-03T08:09:59Z","tags":["status","plugin:ui_metric@7.6.2","info"],"pid":2915,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2021-05-03T08:09:59Z","tags":["status","plugin:markdown_vis@7.6.2","info"],"pid":2915,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2021-05-03T08:09:59Z","tags":["status","plugin:metric_vis@7.6.2","info"],"pid":2915,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2021-05-03T08:09:59Z","tags":["status","plugin:table_vis@7.6.2","info"],"pid":2915,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2021-05-03T08:09:59Z","tags":["status","plugin:tagcloud@7.6.2","info"],"pid":2915,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2021-05-03T08:09:59Z","tags":["status","plugin:vega@7.6.2","info"],"pid":2915,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2021-05-03T08:09:59Z","tags":["reporting","browser-driver","warning"],"pid":2915,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
{"type":"log","@timestamp":"2021-05-03T08:10:00Z","tags":["reporting","warning"],"pid":2915,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
{"type":"log","@timestamp":"2021-05-03T08:10:00Z","tags":["status","plugin:reporting@7.6.2","info"],"pid":2915,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2021-05-03T08:10:00Z","tags":["listening","info"],"pid":2915,"message":"Server running at https://kibana.local:5601"}
{"type":"log","@timestamp":"2021-05-03T08:10:00Z","tags":["info","http","server","Kibana"],"pid":2915,"message":"http server running at https://kibana.local:5601"}

This is the latest kibana.stdout log:

{"type":"log","@timestamp":"2021-05-03T08:26:29Z","tags":["listening","info"],"pid":8638,"message":"Server running at https://kibana.local:5601"}
{"type":"log","@timestamp":"2021-05-03T08:26:29Z","tags":["info","http","server","Kibana"],"pid":8638,"message":"http server running at https://kibana.local:5601"}
{"type":"log","@timestamp":"2021-05-03T08:27:48Z","tags":["warning","monitoring","kibana-monitoring"],"pid":8638,"message":"Error: [cluster_block_exception] blocked by: [SERVICE_UNAVAILABLE/2/no master];\n    at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:349:15)\n    at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:306:7)\n    at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)\n    at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4929:19)\n    at IncomingMessage.emit (events.js:203:15)\n    at endReadableNT (_stream_readable.js:1145:12)\n    at process._tickCallback (internal/process/next_tick.js:63:19)"}
{"type":"log","@timestamp":"2021-05-03T08:27:48Z","tags":["warning","monitoring","kibana-monitoring"],"pid":8638,"message":"Unable to bulk upload the stats payload to the local cluster"}
{"type":"log","@timestamp":"2021-05-03T08:28:08Z","tags":["warning","monitoring","kibana-monitoring"],"pid":8638,"message":"Error: [cluster_block_exception] blocked by: [SERVICE_UNAVAILABLE/2/no master];\n    at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:349:15)\n    at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:306:7)\n    at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)\n    at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4929:19)\n    at IncomingMessage.emit (events.js:203:15)\n    at endReadableNT (_stream_readable.js:1145:12)\n    at process._tickCallback (internal/process/next_tick.js:63:19)"}
{"type":"log","@timestamp":"2021-05-03T08:28:08Z","tags":["warning","monitoring","kibana-monitoring"],"pid":8638,"message":"Unable to bulk upload the stats payload to the local cluster"}
{"type":"log","@timestamp":"2021-05-03T08:28:28Z","tags":["warning","monitoring","kibana-monitoring"],"pid":8638,"message":"Error: [cluster_block_exception] blocked by: [SERVICE_UNAVAILABLE/2/no master];\n    at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:349:15)\n    at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:306:7)\n    at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)\n    at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4929:19)\n    at IncomingMessage.emit (events.js:203:15)\n    at endReadableNT (_stream_readable.js:1145:12)\n    at process._tickCallback (internal/process/next_tick.js:63:19)"}
{"type":"log","@timestamp":"2021-05-03T08:28:28Z","tags":["warning","monitoring","kibana-monitoring"],"pid":8638,"message":"Unable to bulk upload the stats payload to the local cluster"}
{"type":"log","@timestamp":"2021-05-03T08:28:48Z","tags":["warning","monitoring","kibana-monitoring"],"pid":8638,"message":"Error: [cluster_block_exception] blocked by: [SERVICE_UNAVAILABLE/2/no master];\n    at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:349:15)\n    at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:306:7)\n    at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)\n    at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4929:19)\n    at IncomingMessage.emit (events.js:203:15)\n    at endReadableNT (_stream_readable.js:1145:12)\n    at process._tickCallback (internal/process/next_tick.js:63:19)"}
{"type":"log","@timestamp":"2021-05-03T08:28:48Z","tags":["warning","monitoring","kibana-monitoring"],"pid":8638,"message":"Unable to bulk upload the stats payload to the local cluster"}

That would suggest there is a problem with Elasticsearch. So what do its logs show?
