Data Node Offline on Kibana Dashboard

Greetings everyone, sorry, I'm a newbie here.
I have a single master node, a single data node, and one Kibana node in our dev environment.
I'm trying to monitor our Elasticsearch cluster with the guide that appears when I click the red button that says "Monitor with Metricbeat".
After the master node succeeded, I continued to the data node, but then the data node went offline.


As you can see, the data node shows as offline even though it is working fine. The cluster also has yellow status, but that's expected.

This is my master elasticsearch.yml

cat /etc/elasticsearch/elasticsearch.yml
path:
  data: /var/lib/elasticsearch
  logs: /var/log/elasticsearch
cluster:
  name: your-prop-firm
  initial_master_nodes:
    - ypf-master
node:
  name: ypf-master
  roles:
    - master
    - remote_cluster_client
network.host: 0.0.0.0
http.port: 9200
transport.port: 9300
bootstrap.memory_lock: false

xpack.security:
  enabled: true
  http.ssl:
    enabled: true
    certificate: /etc/elasticsearch/certs/certificate.crt
    key: /etc/elasticsearch/certs/private_key.pem

  transport.ssl:
    enabled: true
    certificate: /etc/elasticsearch/certs/certificate.crt
    key: /etc/elasticsearch/certs/private_key.pem
    # verification_mode: certificate
    verification_mode: none
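As a sanity check (not part of the setup guide), the master can be queried over HTTPS to confirm security and TLS are working; the user below is a placeholder, and -k skips certificate verification since verification_mode is none here:

```shell
# Check that the node answers over HTTPS
curl -k -u elastic https://localhost:9200/
# Cluster health should report the expected cluster_name and status
curl -k -u elastic "https://localhost:9200/_cluster/health?pretty"
```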

This is my data elasticsearch.yml

cat /etc/elasticsearch/elasticsearch.yml
path:
  data: /mnt/elasticsearch/data
  logs: /mnt/elasticsearch/logs

cluster:
  name: ypg
node:
  name: ypf-data-1
  roles:
    - data
    - remote_cluster_client
network.host: 0.0.0.0
http.port: 9200
transport.port: 9300
discovery.seed_hosts:
  - "master.es.dev.xxx"
bootstrap.memory_lock: false

xpack.security:
  enabled: true
  http.ssl:
    enabled: true
    certificate: /etc/elasticsearch/certs/certificate.crt
    key: /etc/elasticsearch/certs/private_key.pem

  transport.ssl:
    enabled: true
    certificate: /etc/elasticsearch/certs/certificate.crt
    key: /etc/elasticsearch/certs/private_key.pem
    # verification_mode: certificate
    verification_mode: none

And this is my metricbeat.yml and elasticsearch-xpack.yml (the same on both the master and data nodes)

cat metricbeat.yml
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

output.elasticsearch:
  hosts: ["https://master.es.dev.xxx:9200"]
  protocol: "https"
  username: "monitoring"
  password: "monitoring"
  ssl.enabled: true

setup.kibana:
  host: "https://dashboard.es.dev.xxx:5601"
  username: "monitoring"
  password: "monitoring"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~


cat modules.d/elasticsearch-xpack.yml
# Module: elasticsearch
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/8.14/metricbeat-module-elasticsearch.html

- module: elasticsearch
  metricsets:
    - node
    - node_stats
    - index
    - index_recovery
    - enrich
    - ml_job
    - ccr
    - cluster_stats
  xpack.enabled: true
  period: 10s
  hosts: ["https://master.es.dev.xxx:9200"]
  username: "monitoring"
  password: "monitoring"
  #api_key: "foo:bar"
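Before enabling the module it may help to verify the monitoring credentials and the Metricbeat configuration itself; the host and credentials below are the ones from the config above, and these commands need a running cluster:

```shell
# Verify the monitoring user can reach the cluster Metricbeat will scrape
curl -k -u monitoring:monitoring "https://master.es.dev.xxx:9200/_cluster/health?pretty"
# Built-in Metricbeat checks: config syntax, output connectivity, module fetch
metricbeat test config
metricbeat test output
metricbeat test modules elasticsearch
```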

Lastly, this is my kibana.yml

cat /etc/kibana/kibana.yml
server:
  port: 5601
  host: "0.0.0.0"
  publicBaseUrl: "https://dashboard.es.dev.xxx:5601"

elasticsearch:
  hosts:
    - https://master.es.dev.xxx:9200
  username: "kibana_system"
  password: "xxx"

server.ssl:
  enabled: true
  certificate: /etc/kibana/certs/certificate.crt
  key: /etc/kibana/certs/private_key.pem

# Add monitoring configurations
monitoring.ui.container.elasticsearch.enabled: true
monitoring.ui.container.logstash.enabled: true
monitoring.ui.ccs.enabled: true

The cluster.name values differ between your data and master elasticsearch.yml files, but the name must be the same on all nodes you want to join into one cluster. It's odd that you can see both nodes in the same cluster (your-prop-firm); probably you changed cluster.name in the data node's config after installation.

Make sure you are using the same cluster name on all nodes.
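You can compare the reported cluster name on each node directly; adjust the hosts and credentials for your setup (the data-node host below is a placeholder):

```shell
# The root endpoint returns cluster_name; run against each node and compare
curl -k -u elastic https://master.es.dev.xxx:9200/
curl -k -u elastic https://<data-node-host>:9200/
```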

No, it's the same cluster name.
Sorry, that was a typo in my post.
Good news: after I added the xpack.monitoring settings, the node now shows as online, but the roles on the data node are missing. Here are my updated elasticsearch.yml files.

Master:

cat /etc/elasticsearch/elasticsearch.yml
path:
  data: /var/lib/elasticsearch
  logs: /var/log/elasticsearch
cluster:
  name: your-prop-firm
  initial_master_nodes:
    - ypf-master
node:
  name: ypf-master
  roles:
    - master
    - remote_cluster_client
network.host: 0.0.0.0
http.port: 9200
transport.port: 9300
bootstrap.memory_lock: false

xpack.security:
  enabled: true
  http.ssl:
    enabled: true
    certificate: /etc/elasticsearch/certs/certificate.crt
    key: /etc/elasticsearch/certs/private_key.pem

  transport.ssl:
    enabled: true
    certificate: /etc/elasticsearch/certs/certificate.crt
    key: /etc/elasticsearch/certs/private_key.pem
    # verification_mode: certificate
    verification_mode: none

xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
xpack.monitoring.elasticsearch.collection.enabled: true

Data:

cat /etc/elasticsearch/elasticsearch.yml
path:
  data: /mnt/elasticsearch/data
  logs: /mnt/elasticsearch/logs

cluster:
  name: your-prop-firm
node:
  name: ypf-data-1
  roles:
    - data
    - remote_cluster_client
network.host: 0.0.0.0
http.port: 9200
transport.port: 9300
discovery.seed_hosts:
  - "master.es.dev.xxx"
bootstrap.memory_lock: false

xpack.security:
  enabled: true
  http.ssl:
    enabled: true
    certificate: /etc/elasticsearch/certs/certificate.crt
    key: /etc/elasticsearch/certs/private_key.pem

  transport.ssl:
    enabled: true
    certificate: /etc/elasticsearch/certs/certificate.crt
    key: /etc/elasticsearch/certs/private_key.pem
    # verification_mode: certificate
    verification_mode: none

xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
xpack.monitoring.elasticsearch.collection.enabled: true
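For reference, the roles can also be checked from the cluster itself, independently of what Kibana's monitoring UI shows (host and credentials as above):

```shell
# _cat/nodes lists each node with its role letters; the data node should show 'd'
curl -k -u elastic "https://master.es.dev.xxx:9200/_cat/nodes?v&h=name,node.role,master"
```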

So how can I make this normal, so that the data node's roles show up?

Happy to hear it works! If the node roles are flapping to N/A, the following article can help: Elastic Support Hub