Kibana errors

Hello, can you please help me troubleshoot these Kibana errors:

Feb 03 19:49:32 prozokiba01.pinpointtech.com systemd[1]: Started Kibana.
Feb 03 19:49:42 prozokiba01.pinpointtech.com kibana[562]: {"type":"log","@timestamp":"2022-02-03T19:49:42Z","tags":["error","elasticsearch","admin"],"pid":562,"message":"Request error, retrying\nHEAD http://localhost:9200/ => connect EC... 127.0.0.1:9200"}
Feb 03 19:49:43 prozokiba01.pinpointtech.com kibana[562]: {"type":"log","@timestamp":"2022-02-03T19:49:43Z","tags":["status","plugin:elasticsearch@5.6.13","error"],"pid":562,"state":"red","message":"Status changed from yellow to red - U...r Elasticsearch"}
Feb 03 19:49:43 prozokiba01.pinpointtech.com kibana[562]: {"type":"log","@timestamp":"2022-02-03T19:49:43Z","tags":["listening","info"],"pid":562,"message":"Server running at http://localhost:5601"}
Feb 03 19:49:43 prozokiba01.pinpointtech.com kibana[562]: {"type":"log","@timestamp":"2022-02-03T19:49:43Z","tags":["status","ui settings","error"],"pid":562,"state":"red","message":"Status changed from uninitialized to red - Elasticsea...:"uninitialized"}
Feb 03 19:50:43 prozokiba01.pinpointtech.com kibana[562]: {"type":"log","@timestamp":"2022-02-03T19:50:43Z","tags":["status","plugin:elasticsearch@5.6.13","error"],"pid":562,"state":"red","message":"Status changed from red to red - Serv...localhost:9200."}
Hint: Some lines were ellipsized, use -l to show in full.

The Elasticsearch service is running fine

Cerebro is not working either; I'm getting the following error:

[root@prozokiba01 ~]# systemctl status cerebro
● cerebro.service - Cerebro
Loaded: loaded (/etc/systemd/system/cerebro.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2022-02-03 19:49:32 UTC; 1h 10min ago
Main PID: 560 (java)
CGroup: /system.slice/cerebro.service
└─560 java -Duser.dir=/opt/cerebro-0.8.1 -cp -jar /opt/cerebro-0.8.1/lib/cerebro.cerebro-0.8.1-launcher.jar

Feb 03 19:50:16 prozokiba01.pinpointtech.com cerebro[560]: at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
Feb 03 19:50:16 prozokiba01.pinpointtech.com cerebro[560]: at play.shaded.ahc.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:259)
Feb 03 19:50:16 prozokiba01.pinpointtech.com cerebro[560]: at play.shaded.ahc.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:291)
Feb 03 19:50:16 prozokiba01.pinpointtech.com cerebro[560]: at play.shaded.ahc.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:634)
Feb 03 19:50:16 prozokiba01.pinpointtech.com cerebro[560]: Caused by: java.net.ConnectException: Connection refused
Feb 03 19:50:16 prozokiba01.pinpointtech.com cerebro[560]: at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
Feb 03 19:50:16 prozokiba01.pinpointtech.com cerebro[560]: at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
Feb 03 19:50:16 prozokiba01.pinpointtech.com cerebro[560]: at play.shaded.ahc.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:259)
Feb 03 19:50:16 prozokiba01.pinpointtech.com cerebro[560]: at play.shaded.ahc.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:291)
Feb 03 19:50:16 prozokiba01.pinpointtech.com cerebro[560]: at play.shaded.ahc.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:634)

[root@prozokiba01 ~]# curl localhost:9200/_cluster/health?pretty
{
  "error" : {
    "root_cause" : [
      {
        "type" : "master_not_discovered_exception",
        "reason" : null
      }
    ],
    "type" : "master_not_discovered_exception",
    "reason" : null
  },
  "status" : 503
}
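For reference, that 503 response can be recognized programmatically; here is a minimal sketch in plain Python (the sample payload simply mirrors the curl output above, and the `diagnose` helper is illustrative, not part of any Elasticsearch client):

```python
import json

# Sample body mirroring the _cluster/health output above (illustrative).
raw = """
{
  "error": {
    "root_cause": [
      {"type": "master_not_discovered_exception", "reason": null}
    ],
    "type": "master_not_discovered_exception",
    "reason": null
  },
  "status": 503
}
"""

def diagnose(body: str) -> str:
    """Return a short hint based on a _cluster/health error payload."""
    doc = json.loads(body)
    error_type = doc.get("error", {}).get("type")
    if doc.get("status") == 503 and error_type == "master_not_discovered_exception":
        return "no master elected - check discovery settings and master-eligible nodes"
    return "cluster reachable"

print(diagnose(raw))
```

The key point: a `master_not_discovered_exception` means the node is up but has never joined (or elected) a master, so the problem is in discovery configuration rather than in the service itself.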

Hi,
Can you please share your elasticsearch.yml?
I think you will need to specify the initial master nodes in your elasticsearch.yml for cluster formation and bootstrapping. (Note: cluster.initial_master_nodes only exists in Elasticsearch 7.x and later; your logs show 5.6.13, where the equivalent safeguard is discovery.zen.minimum_master_nodes.)


cluster.initial_master_nodes:
   - master-node-1
   - master-node-2
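Whichever setting applies to your version, the underlying rule is the same majority calculation: a quorum of floor(n / 2) + 1 master-eligible nodes. A minimal sketch in plain Python (the function name is illustrative):

```python
def minimum_master_nodes(master_eligible: int) -> int:
    """Quorum for discovery.zen.minimum_master_nodes: floor(n / 2) + 1."""
    return master_eligible // 2 + 1

# With 3 master-eligible nodes a majority is 2; with 5 it is 3.
print(minimum_master_nodes(3))  # 2
print(minimum_master_nodes(5))  # 3
```

Setting this below the majority risks split brain; setting it above it means the cluster cannot elect a master after losing a single node.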

Regards,
Subhankar.

Can you please tell me where I can find this file?
I know the ES service is running:

[root@prozokiba01 ~]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2022-02-03 21:13:27 UTC; 26min ago
Docs: http://www.elastic.co
Process: 996 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 1008 (java)
CGroup: /system.slice/elasticsearch.service
└─1008 /bin/java -Xms12g -Xmx12g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -Djdk.io.perm...

Feb 03 21:13:27 prozokiba01.pinpointtech.com systemd[1]: Starting Elasticsearch...
Feb 03 21:13:27 prozokiba01.pinpointtech.com systemd[1]: Started Elasticsearch.

I found it.

[root@prozokiba01 elasticsearch]# ls
elasticsearch.yml  elasticsearch.yml.bak  elasticsearch.yml.rpmnew  jvm.options  log4j2.properties  scripts
[root@prozokiba01 elasticsearch]# vi elasticsearch.yml

# 3. You want this node to be neither master nor data node, but
#    to act as a "search load balancer" (fetching data from nodes,
#    aggregating results, etc.)
#
node.master: false
node.data: false
node.ingest: false
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["10.100.40.68", "10.100.40.69", "10.100.40.70", "10.100.40.71", "10.100.40.72"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

# Testing recovery time
transport.tcp.connect_timeout: 2s

I found this elasticsearch.yml file as well on one of the data nodes:

[root@labzostic02 elasticsearch]# vi elasticsearch.yml
cluster.name: labstic
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: ${HOSTNAME}
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/elasticsearch
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["10.100.80.111", "10.100.80.112"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
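If these two nodes are meant to belong to the same cluster (which is not certain from the hostnames), one thing worth checking is that their discovery.zen.ping.unicast.hosts lists overlap: as pasted, one node lists only 10.100.40.x addresses and the other only 10.100.80.x, so zen discovery would never connect them. A quick sketch comparing the two lists (values copied from the configs above):

```python
# Host lists copied from the two elasticsearch.yml files above.
node_a = {"10.100.40.68", "10.100.40.69", "10.100.40.70",
          "10.100.40.71", "10.100.40.72"}
node_b = {"10.100.80.111", "10.100.80.112"}

# With no common hosts, neither node can discover the other's masters.
overlap = node_a & node_b
print(sorted(overlap))  # []
```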

Hi,

Please format the YAML as preformatted code; kindly read the forum's formatting standards.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.