Kibana monitoring shows only 1 Logstash instance

Hi,

I am using ELK GA 6.3.0. I have enabled X-Pack monitoring to monitor my Logstash. I have 2 Logstash instances with the same configuration, consuming from a Kafka topic that has 11 partitions. The config looks like this:

input {
	kafka{
		group_id => "group1"
		topics => ["topic1"]
		bootstrap_servers => "192.168.0.1:9092,192.168.0.2:9092,192.168.0.3:9092"
		consumer_threads => 6
		codec => "json"
	}
}

I have enabled X-Pack monitoring in the logstash.yml file on each node, like below:

node 1

node.name: logstash_0.10
config.reload.automatic: true
config.reload.interval: 30s
http.host: "192.168.0.10"
http.port: 9601
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: ["http://192.168.0.1:9210/","http://192.168.0.2:9210/","http://192.168.0.3:9210/"]
xpack.monitoring.collection.interval: 10s
xpack.monitoring.collection.pipeline.details.enabled: true
xpack.management.enabled: false
xpack.management.pipeline.id: ["main", "apache_logs"]
xpack.management.elasticsearch.url: ["http://192.168.0.1:9210/","http://192.168.0.2:9210/","http://192.168.0.3:9210/"]
xpack.management.logstash.poll_interval: 5s

node 2

node.name: logstash_0.11
config.reload.automatic: true
config.reload.interval: 30s
http.host: "192.168.0.11"
http.port: 9601
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: ["http://192.168.0.1:9210/","http://192.168.0.2:9210/","http://192.168.0.3:9210/"]
xpack.monitoring.collection.interval: 10s
xpack.monitoring.collection.pipeline.details.enabled: true
xpack.management.enabled: false
xpack.management.pipeline.id: ["main", "apache_logs"]
xpack.management.elasticsearch.url: ["http://192.168.0.1:9210/","http://192.168.0.2:9210/","http://192.168.0.3:9210/"]
xpack.management.logstash.poll_interval: 5s

As you can see, the only differences between these configs are node.name and http.host. But when I open my Kibana Monitoring page, it shows only one instance, like below:

[screenshot: Kibana Monitoring page listing a single Logstash instance]

But I have confirmed both Logstash instances are running, and both are visible in my Kafka Manager, where I can see both consumers consuming. In the Nodes tab of the Logstash Monitoring page in Kibana, sometimes the node name shows as logstash_0.10, and after reloading the page it changes to logstash_0.11.

Why does the Monitoring UI show only one Logstash node name at a time?

Thanks.

There is a data/ folder in your Logstash install directory, and in that folder there is a uuid file. Can you check the contents of this file on both of your Logstash instances?
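Besides cat-ing the file, each instance's HTTP API also reports its UUID as an `id` field in the root endpoint's JSON response, so you can compare the two nodes remotely. A minimal sketch (the curl targets are the hosts/ports from the configs above; the sample JSON is illustrative, not captured output):

```shell
# On a live node you would query the HTTP API configured via
# http.host/http.port, e.g.:
#   curl -s http://192.168.0.10:9601/
#   curl -s http://192.168.0.11:9601/
# Extracting the id from a sample response shaped like the API's output:
response='{"host":"logstash_0.10","version":"6.3.0","id":"a0b34ad3-7d2b-4c59-8e8d-c4367bd61d91"}'
echo "$response" | grep -o '"id":"[^"]*"'
```

If the `id` values from both nodes match, Monitoring will treat them as one node.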

@shaunak the uuid is the same for both instances: a0b34ad3-7d2b-4c59-8e8d-c4367bd61d91

Okay, that would explain why you are seeing only one node in the Monitoring UI. The UUID uniquely identifies a node, so two nodes with the same UUID are treated as one.

The solution is simply to stop one of your two Logstash nodes, delete that node's data/uuid file, and restart it. Logstash will recreate the data/uuid file automatically on restart and store a new, unique UUID in it. The Monitoring UI should then show two Logstash instances.


Alright, so Logstash will create the uuid file if it's not present. Let me ask you something: is it OK if I assign a UUID manually? For example, if I change a0b34ad3-7d2b-4c59-8e8d-c4367bd61d91 to a0b34ad3-7d2b-4c59-8e8d-logstash1, will it work? That way I could assign unique UUIDs across my cluster manually, like a0b34ad3-7d2b-4c59-8e8d-logstash1, a0b34ad3-7d2b-4c59-8e8d-logstash2, etc.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.