Elasticsearch cluster showing unhealthy, cluster status "red"

Hello!
I am new to Elasticsearch and this is my first installation. If someone can please help: I have spent days searching for the issue but no luck.

We have Kibana running inside an EKS cluster and Elasticsearch hosted on an EC2 instance. Since the two are in different VPCs, connectivity between them is established using a transit gateway.

After installing Elasticsearch on the EC2 instances via Ansible, the cluster health is showing "red":
the data node is not able to find the master node, shards are not getting allocated, and Kibana is not able to connect to the master node. I am using Elasticsearch version 7.17.0 and Kibana version 7.17.8.

I have opened port 9200 for HTTP and port 9300 for internode communication.
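
To double-check reachability, I run a quick port probe with Ansible first. A minimal sketch using the built-in wait_for module (the addresses are whatever each node reports as its primary IP; adjust the timeout to taste):

---
- hosts: es-master:es-data
  become: yes
  tasks:
    # Fail fast if the HTTP or transport port is not listening on this node
    - name: Check Elasticsearch HTTP port (9200)
      ansible.builtin.wait_for:
        host: "{{ ansible_default_ipv4.address }}"
        port: 9200
        timeout: 10
    - name: Check Elasticsearch transport port (9300)
      ansible.builtin.wait_for:
        host: "{{ ansible_default_ipv4.address }}"
        port: 9300
        timeout: 10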

Here is my playbook for the Elasticsearch installation. (During installation the role's wait for the service to come up timed out, so I started Elasticsearch on the master and data nodes manually; see the sketch after the playbook. Can anyone help me with this as well?) I am using a Red Hat family (RHEL/CentOS/Fedora) image on the EC2 instances.

---
- hosts: es-master
  become: yes
  roles:
    - role: elastic.elasticsearch
  vars:
    oss_version: false
    es_data_dirs:
      - "/data/elasticsearch/data"
    es_log_dir: "/data/elasticsearch/logs"
    es_java_install: true
    es_heap_size: "4g"
    es_config:
      cluster.name: "es-master"
      network.host: '10.90.40.5'
      cluster.initial_master_nodes: '10.90.40.5:9300'
      discovery.seed_hosts: "10.90.40.5:9300,10.90.47.153:9300"
      http.port: 9200
      node.data: false
      node.master: true
      node.ingest: false
      bootstrap.memory_lock: false
    es_plugins:
      - plugin: ingest-attachment

- hosts: es-data
  become: yes
  roles:
    - role: elastic.elasticsearch
  vars:
    oss_version: false
    es_data_dirs:
      - "/var/lib/elasticsearch"
    es_log_dir: "/data/elasticsearch/logs"
    es_java_install: true
    es_heap_size: "4g"
    es_config:
      cluster.name: "es-data"
      network.host: 10.90.47.153
      cluster.initial_master_nodes: '10.90.40.5:9300'
      discovery.seed_hosts: "10.90.40.5:9300,10.90.47.153:9300"
      http.port: 9200
      node.data: true
      node.master: false
      bootstrap.memory_lock: false
    es_plugins:
      - plugin: ingest-attachment
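
For now, this is roughly how I start the nodes by hand when the role's wait times out. A minimal sketch, assuming the role installed Elasticsearch as a systemd service named "elasticsearch" (the unit name can differ depending on the role version):

---
- hosts: es-master:es-data
  become: yes
  tasks:
    # Ensure the systemd unit installed by the role is enabled and running;
    # adjust the unit name if your role version prefixes it with an instance name
    - name: Start Elasticsearch service
      ansible.builtin.systemd:
        name: elasticsearch
        state: started
        enabled: yes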
----------------
Health result
{
  "cluster_name" : "es-master",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 0,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 3,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 0.0
}
------

One thing that stands out is that you have 2 different cluster names. All nodes in a single cluster must have the same cluster name, so these 2 nodes will never be able to connect.
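
For example, set the same cluster.name in both plays. Something like this, using "es-cluster" as a placeholder (any name works as long as it is identical on every node):

# es-master play
es_config:
  cluster.name: "es-cluster"    # same value on every node in the cluster
  network.host: '10.90.40.5'
  cluster.initial_master_nodes: '10.90.40.5:9300'
  discovery.seed_hosts: "10.90.40.5:9300,10.90.47.153:9300"

# es-data play
es_config:
  cluster.name: "es-cluster"    # identical here too
  network.host: 10.90.47.153
  cluster.initial_master_nodes: '10.90.40.5:9300'
  discovery.seed_hosts: "10.90.40.5:9300,10.90.47.153:9300"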

@Christian_Dahlqvist These are basically two separate nodes: I took one as the master and the other as the data node and named them as shown. If that is not what you suggest, what changes do you recommend? Can you please elaborate so that I can proceed accordingly?

Have you changed the cluster name to be the same for both nodes? What do the logs show on startup?

@Christian_Dahlqvist Let me change the cluster name on both nodes to a common name, and I will confirm back!
Thanks!

@Christian_Dahlqvist The cluster came up and went green! Thanks!
