Elasticsearch cluster health is always red

Elasticsearch 5.6.14 with Kibana is installed using the MPack elasticsearch_mpack-0.7.1.0.tar.gz on CentOS 7.

Below is the elasticsearch.yml configuration:

cluster:
  name: metron
  routing:
    allocation.node_concurrent_recoveries: 4
    allocation.disk.watermark.low: .97
    allocation.disk.threshold_enabled: true
    allocation.disk.watermark.high: 0.99

discovery:
  zen:
    ping:
      unicast:
        hosts: ["10.101.10.1"]

node:
  data: true
  master: true
  name: node1

path:
  data: "/opt/lmm/es_data"

http:
  port: 9200-9300
  cors.enabled: "false"

transport:
  tcp:
    port: 9300-9400

gateway:
  recover_after_data_nodes: 3
  recover_after_time: 15m
  expected_data_nodes: 0

(Reference: Indexing Performance Tips, Elasticsearch: The Definitive Guide [2.x], Elastic.)

indices:
  store.throttle.type: none
  memory:
    index_buffer_size: 10%
  fielddata:
    cache.size: 25%

bootstrap:
  memory_lock: true
  system_call_filter: false

thread_pool:
  bulk:
    queue_size: 3000
  index:
    queue_size: 1000

discovery.zen.ping_timeout: 5s
discovery.zen.fd.ping_interval: 15s
discovery.zen.fd.ping_timeout: 60s
discovery.zen.fd.ping_retries: 5
discovery.zen.minimum_master_nodes: 1

network.host: ["10.101.10.1"]
network.publish_host:

kibana.yml:

# Kibana is served by a back end server. This controls which port to use.
server.port: 5000

# The host to bind the server to.
# Kibana (like Elasticsearch) now binds to localhost for security purposes
# instead of 0.0.0.0 (all addresses). Previous binding to 0.0.0.0 also caused
# issues for Windows users.
server.host: 10.101.10.1

I am getting the below exception when I try the URL http://10.101.10.1:9200/_cat/indices?v

{"error":{"root_cause":[{"type":"cluster_block_exception","reason":"blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"}],"type":"cluster_block_exception","reason":"blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"},"status":503}
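The 503 body is plain JSON, so the cluster block can be inspected programmatically. A minimal sketch using only Python's standard library, with the response body copied verbatim from above:

```python
import json

# Response body returned by GET /_cat/indices?v while the cluster
# state is still blocked (copied from the post above).
body = (
    '{"error":{"root_cause":[{"type":"cluster_block_exception",'
    '"reason":"blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"}],'
    '"type":"cluster_block_exception",'
    '"reason":"blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"},'
    '"status":503}'
)

resp = json.loads(body)

# Block id 1 is the global "state not recovered / initialized" block,
# which Elasticsearch applies until gateway recovery has run.
print(resp["status"])                              # 503
print(resp["error"]["type"])                       # cluster_block_exception
print(resp["error"]["root_cause"][0]["reason"])
```

The key detail is the block id `1` (`state not recovered / initialized`): it means the cluster has not even attempted shard recovery yet, rather than that recovery failed.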

URL - http://10.101.10.1:9200/_cat/health?v

epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1554375934 16:35:34 metron red 1 1 0 0 0 0 0 0 - NaN%

Here is a blog post about what to do to investigate a red cluster:

It'd be best to start with these suggestions; if you have more specific questions then please feel free to ask.
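One thing worth checking from the output above: `_cat/health` reports a single data node, while elasticsearch.yml sets `gateway.recover_after_data_nodes: 3`. In 5.x the gateway delays cluster state recovery until that many data nodes have joined, and until then every request is rejected with the `state not recovered / initialized` block. A hedged sketch of the relevant settings, assuming a single-node cluster is actually intended:

```
gateway:
  recover_after_data_nodes: 1
  expected_data_nodes: 1
```

These are static settings, so the node needs a restart for them to take effect.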

Thanks for the link. The issue was resolved after changing the properties below:

gateway:
  recover_after_data_nodes: 1
  recover_after_time: 15m
  expected_data_nodes: 1

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.