Kibana server is not ready yet

Hi @maycoonferreira Welcome to the community!

Perhaps do a quick search on this forum for that topic; there are many questions and responses.

It generally means Kibana cannot communicate with Elasticsearch.

Also, you will need to provide much more detail if you would like help.

You would need to tell us:

- What version?
- How did you install it?
- Provide the elasticsearch.yml
- Provide the kibana.yml
- Did you verify Elasticsearch is running?
- Did you look at the Kibana logs?

The Elasticsearch and Kibana logs are the two most important parts; use the service log files, not journalctl.
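A rough sketch of how to gather most of that (paths assume a default .deb/.rpm package install, and `<your-cluster-name>` is a placeholder; the Elasticsearch log file is named after cluster.name):

```
# Elasticsearch version and service state
/usr/share/elasticsearch/bin/elasticsearch --version
sudo systemctl status elasticsearch kibana

# Elasticsearch service log (file name follows cluster.name)
sudo tail -n 100 /var/log/elasticsearch/<your-cluster-name>.log
```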

Hi Stephen

Thank you very much for your reply.

The Elastic version is:

```
Version: 7.10.2, Build: default/deb/747e1cc71def077253878a59143c1f785afa92b9/2021-01-13T00:42:12.435326Z, JVM: 15.0.1
```


I couldn't check the Kibana version; it gave an error.

```
root@host1:/usr/share/kibana/bin# /usr/share/kibana/bin/kibana --version
fs.js:114
      throw err;
      ^

Error: ENOENT: no such file or directory, open '/usr/share/kibana/config/kibana.yml'
    at Object.openSync (fs.js:443:3)
    at Object.readFileSync (fs.js:343:35)
    at readYaml (/usr/share/kibana/node_modules/@kbn/apm-config-loader/target/utils/read_config.js:27:52)
    at Object.exports.getConfigFromFiles (/usr/share/kibana/node_modules/@kbn/apm-config-loader/target/utils/read_config.js:52:22)
    at exports.loadConfiguration (/usr/share/kibana/node_modules/@kbn/apm-config-loader/target/config_loader.js:33:38)
    at module.exports (/usr/share/kibana/src/apm.js:47:15)
    at Object.<anonymous> (/usr/share/kibana/src/cli/dist.js:21:18)
    at Module._compile (internal/modules/cjs/loader.js:778:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
    at Module.load (internal/modules/cjs/loader.js:653:32)
root@host1:
```
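(As an aside: on a .deb/.rpm install the config lives in /etc/kibana, not /usr/share/kibana/config, which is why the CLI fails here. A hedged sketch of pointing the CLI at the right config directory, assuming a standard package layout:)

```
# KBN_PATH_CONF tells the Kibana CLI where to find kibana.yml
KBN_PATH_CONF=/etc/kibana /usr/share/kibana/bin/kibana --version
```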


The installation was done by another person, who has left the company.

I believe she used this guide:


Yes, Elastic is running.

```
root@host1:/etc/systemd/system# sudo service elasticsearch status
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2023-11-10 09:40:51 -03; 4s ago
     Docs:
 Main PID: 2633 (java)
    Tasks: 49 (limit: 4915)
   Memory: 4.4G
   CGroup: /system.slice/elasticsearch.service
           ├─2633 /usr/share/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPr
           └─2811 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

Nov 10 09:40:23 host1 systemd[1]: Starting Elasticsearch...
Nov 10 09:40:51 host1 systemd[1]: Started Elasticsearch.

root@host1:/etc/systemd/system#
```


I'm sending the Kibana logs.

```
Nov 09 16:29:34 host1 kibana[442]: {"type":"log","@timestamp":"2023-11-09T19:29:34Z","tags":["error","elasticsearch","data"],"pid":442,"message":"[search_phase_execution_exception]: all shards failed"}
Nov 09 16:29:34 host1 kibana[442]: {"type":"log","@timestamp":"2023-11-09T19:29:34Z","tags":["warning","savedobjects-service"],"pid":442,"message":"Unable to connect to Elasticsearch. Error: search_phase_execution_exception"}
Nov 09 16:29:37 host1 kibana[442]: {"type":"log","@timestamp":"2023-11-09T19:29:37Z","tags":["error","elasticsearch","data"],"pid":442,"message":"[search_phase_execution_exception]: all shards failed"}
Nov 09 16:29:39 host1 kibana[442]: {"type":"log","@timestamp":"2023-11-09T19:29:39Z","tags":["error","elasticsearch","data"],"pid":442,"message":"[search_phase_execution_exception]: all shards failed"}
Nov 09 16:30:33 host1 kibana[442]: {"type":"log","@timestamp":"2023-11-09T19:30:33Z","tags":["error","elasticsearch","monitoring"],"pid":442,"message":"Request error, retrying\nGET https://192.168.50.21:9200/_xpack?accept_enterprise=true => connect ECONNREFUSED 192.168.50.21:9200"}
Nov 09 16:30:34 host1 kibana[442]: {"type":"log","@timestamp":"2023-11-09T19:30:34Z","tags":["error","elasticsearch","data"],"pid":442,"message":"[search_phase_execution_exception]: all shards failed"}
Nov 09 16:30:37 host1 kibana[442]: {"type":"log","@timestamp":"2023-11-09T19:30:37Z","tags":["error","elasticsearch","data"],"pid":442,"message":"[search_phase_execution_exception]: all shards failed"}
```


/etc/kibana/kibana.yml:

```
elasticsearch.hosts:
  - https://IP1:9200
  - https://IP2:9200
  - https://IP3:9200

elasticsearch.username: "kibana_system"
elasticsearch.password: "Password"

server.ssl.enabled: true

elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/elasticsearch_ca.pem" ]

elasticsearch.ssl.verificationMode: certificate
```
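(One hedged note on the snippet above: with server.ssl.enabled: true, Kibana 7.x also expects a certificate and key for its own HTTPS endpoint, along the lines of the following; the file paths here are hypothetical placeholders:)

```
server.ssl.certificate: /etc/kibana/certs/kibana.crt
server.ssl.key: /etc/kibana/certs/kibana.key
```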

/etc/elasticsearch/elasticsearch.yml:

```
cluster.name: cluster-elk
node.name: HOST1
#node.attr.rack: r1

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
#bootstrap.memory_lock: true

network.host: IP_HOST1
http.port: 9200

discovery.seed_hosts: ["host1", "host2", "host3"]
cluster.initial_master_nodes: ["host1", "host2", "host3"]
#gateway.recover_after_nodes: 3
#action.destructive_requires_name: true

node.master: true
node.ingest: false
node.data: false

#xpack.security.enabled: true
#xpack.security.http.ssl.enabled: true
#xpack.security.transport.ssl.enabled: true
#xpack.security.http.ssl.key: /etc/elasticsearch/certs/host1.key
#xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/host1.crt
#xpack.security.http.ssl.certificate_authorities: /etc/elasticsearch/certs/ca.crt
#xpack.security.transport.ssl.key: /etc/elasticsearch/certs/host1.key
#xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/host1.crt
#xpack.security.transport.ssl.certificate_authorities: /etc/elasticsearch/certs/ca.crt

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/host1.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/host1.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: /etc/elasticsearch/https.p12
```
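(An aside on the config above: node.master: true with node.data: false and node.ingest: false makes this a dedicated master-eligible node, so the data lives on other nodes in the cluster. A quick hedged check of each node's role; elasticip is a placeholder, as in the commands further down:)

```
curl -k -u elastic "https://elasticip:9200/_cat/nodes?v&h=name,node.role,master"
```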

Hi @maycoonferreira

Well, 7.10.2 is ancient... years old, out of support, etc. There are many things to check; from the error messages above I suspect the cluster may be in a bad state, and there may be a number of issues.

The first thing we want to check is actually connecting to Elasticsearch. You will need the elastic username and password:

```
curl -k -v -u elastic https://elasticip:9200
```

and

```
curl -k -v -u elastic https://elasticip:9200/_cat/health?v
```

Show us your output. Also, please take the time to format code output: put 3 backticks before and after each code/log block.

```
root@glb-elkmtr-01:/home/bs4it# curl -k -v -u mpf-adm https://192.168.50.21:9200
Enter host password for user 'mpf-adm':
* Expire in 0 ms for 6 (transfer 0x557f43c33fb0)
*   Trying 192.168.50.21...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x557f43c33fb0)
* Connected to 192.168.50.21 (192.168.50.21) port 9200 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: none
    CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: DC=intra; CN=global
*  start date: May 11 14:26:31 2021 GMT
*  expire date: May 11 14:26:31 2026 GMT
*  issuer: CN=Elastic Certificate Tool Autogenerated CA
*  SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
* Server auth using Basic with user 'mpf-adm'
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 537
<
{
  "name" : "glb-elkmtr-01",
  "cluster_name" : "cluster-elk",
  "cluster_uuid" : "8RH1tLJ2TRmdMFNKxlSCpg",
  "version" : {
    "number" : "7.10.2",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "747e1cc71def077253878a59143c1f785afa92b9",
    "build_date" : "2021-01-13T00:42:12.435326Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
* Connection #0 to host 192.168.50.21 left intact
root@glb-elkmtr-01:/home/bs4it# curl -k -v -u mpf-adm https://192.168.50.21:9200/_cat/health?v
Enter host password for user 'mpf-adm':
[TLS handshake output identical to the previous request]
* Server auth using Basic with user 'mpf-adm'
< HTTP/1.1 200 OK
< content-type: text/plain; charset=UTF-8
< content-length: 292
<
epoch      timestamp cluster     status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1699639877 18:11:17  cluster-elk red             5         2   1756 878    0    0        2             0                  -                 99.9%
* Connection #0 to host 192.168.50.21 left intact
root@glb-elkmtr-01:/home/bs4it#
```

OK, let's run a few more commands; we don't need -v any more.
Your cluster is red (which is not good; it is non-functional). Let's see if we can figure out why.

```
curl -k -u elastic https://elasticip:9200/_cat/nodes/?v&h=name,du,dt,dup,hp,hc,rm,rp,r
```

and

```
curl -k -u elastic https://elasticip:9200/_cluster/health
```

Remember to put 3 backticks before and after your results to format: "```"

```
root@glb-elkmtr-01:/home/bs4it# curl -k -u mpf-adm https://192.168.50.21:9200/_cat/nodes/?v&h=name,du,dt,dup,hp,hc,rm,rp,r
[2] 7321

root@glb-elkmtr-01:/home/bs4it# curl -k -u mpf-adm https://192.168.50.21:9200/_cluster/health
Enter host password for user 'mpf-adm':
{"cluster_name":"cluster-elk","status":"red","timed_out":false,"number_of_nodes":5,"number_of_data_nodes":2,"active_primary_shards":898,"active_shards":1796,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":2,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":99.88876529477196}

[2]+  Stopped                 curl -k -u mpf-adm https://192.168.50.21:9200/_cat/nodes/?v
root@glb-elkmtr-01:/home/bs4it#
```

Hi Stephen

Thank you very much for your reply.

Could it be a problem related to disk space?

We have always had to delete old indices to free up space.

Yes, it is probably disk space; that's what the first command was supposed to show, but I realized I didn't format it properly for you. Without quotes, the shell treats the & in the URL as "run in the background", which is why you saw [2] 7321 and the stopped job above.

Please try this again; note the quotes ("):

```
curl -k -u elastic "https://elasticip:9200/_cat/nodes/?v&h=name,du,dt,dup,hp,hc,rm,rp,r"
```
```
root@glb-elkmtr-01:/home/bs4it# curl -k -u mpf-adm "https://192.168.50.21:9200/_cat/nodes/?v&h=name,du,dt,dup,hp,hc,rm,rp,r"
Enter host password for user 'mpf-adm':
name               du      dt   dup hp      hc     rm rp r
glb-elkmtr-02   8.3gb  40.5gb 20.58 12 516.6mb  7.7gb 65 lmr
glb-elkdat-01 120.5gb 799.6gb 15.08 53   4.2gb 31.4gb 67 cdhilrstw
glb-elkdat-02 120.5gb 799.6gb 15.08 34   2.7gb 31.4gb 74 cdhlrstw
glb-elkmtr-01   8.2gb  40.5gb 20.26 57   2.3gb  7.7gb 70 lmr
glb-elkmtr-03   8.2gb  40.5gb 20.28 31   1.2gb  7.7gb 67 lmr
root@glb-elkmtr-01:/home/bs4it#
```

Please format your code... please take the 30 seconds to format your code.
Please click the pencil icon on this post and see what I added:
3 backticks before and after your code.


So the good news / bad news is that you are not out of disk space, so there is something else going on.

You will need to start to learn more about Elasticsearch.

You need to run this command:

```
curl -k -u elastic "https://elasticip:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason"
```

This will return many lines, but you need to find the lines whose state is not STARTED.
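(A hedged convenience, since the list can be long: filter out the healthy lines so only the problem shards remain; grep -v inverts the match:)

```
curl -s -k -u elastic "https://elasticip:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason" | grep -v STARTED
```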

You basically have some bad indices / shards.

From the health output above, "unassigned_shards": 2 is not good, but may not be what is preventing Kibana from starting.

Also, in your kibana.yml (sorry, since you only provided poorly formatted snippets it is hard for me to tell), can you try:

```
elasticsearch.username: "elastic"
elasticsearch.password: "elasticpassword"
elasticsearch.ssl.verificationMode: none
```

The two lines that are not STARTED are these:

```
index                  shard prirep state      unassigned.reason
.kibana_task_manager_1 0     p      UNASSIGNED ALLOCATION_FAILED
.kibana_task_manager_1 0     r      UNASSIGNED REPLICA_ADDED
```

Stop Kibana...

Let's try this. ALLOCATION_FAILED means the cluster already tried to allocate that shard several times and gave up, so this tells Elasticsearch to retry allocating the failed shards:

```
curl -k -X POST -u elastic "https://elasticip:9200/_cluster/reroute?retry_failed=true&pretty"
```

Then run this again and see if we are green:

```
curl -k -u elastic https://elasticip:9200/_cluster/health
```
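(Side note: _cluster/health can also block until a desired status is reached, which is handy for checking whether a fix has settled. A sketch; the parameters are standard, the 30s timeout is arbitrary:)

```
curl -k -u elastic "https://elasticip:9200/_cluster/health?wait_for_status=green&timeout=30s"
```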

If it is not green, then we need you to run this to see exactly why it is failing:

```
curl -X POST -k -H "Content-Type: application/json" -u elastic https://elasticip:9200/_cluster/allocation/explain?pretty -d'{"index" : ".kibana_task_manager_1", "primary": true, "shard": 0 }'
```
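(The explain output is verbose JSON; assuming jq is available, a hedged sketch for pulling out just the summary fields:)

```
curl -s -X POST -k -H "Content-Type: application/json" -u elastic \
  "https://elasticip:9200/_cluster/allocation/explain" \
  -d'{"index": ".kibana_task_manager_1", "primary": true, "shard": 0}' \
  | jq '{can_allocate, allocate_explanation, unassigned_info}'
```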

Hang in there; we can probably get this fixed...

Stephen Brown

You saved my skin!

It worked out. The cluster turned green.

And the web access worked successfully again.


Awesome! Can you log in through Kibana?

Now you need to do some learning 🙂 ... plus, at some point, upgrade. You are waaaaayyyy behind!


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.