Can we have 1 master-only node and 2 data-only nodes in a 3-node cluster?

I am using Elasticsearch 7.3 with 3 nodes in total, and I am trying to have 1 master-only node and 2 data-only nodes. This configuration is not working, but when I also configure another node as a master node, it starts working.

node-1 yml

cluster.name: ElasticsearchStaging
node.name: node-1
node.master: false
node.data: true

node-2 yml

cluster.name: ElasticsearchStaging
node.name: node-2
node.master: false
node.data: true

node-3 yml

cluster.name: ElasticsearchStaging
node.name: node-3
node.master: true
node.data: false

When I set this config, it is not working.

Also, my Kibana is not working.

kibana.log

{"type":"log","@timestamp":"2019-09-02T10:31:29Z","tags":["warning","stats-collection"],"pid":8291,"message":"Unable to fetch data from spaces collector"}
{"type":"log","@timestamp":"2019-09-02T10:31:30Z","tags":["error","task_manager"],"pid":8291,"message":"Failed to poll for work: [search_phase_execution_exception] all shards 
failed :: {\"path\":\"/.kibana_task_manager/_search\",\"query\":{\"ignore_unavailable\":true},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"type\\\
":\\\"task\\\"}},{\\\"bool\\\":{\\\"must\\\":[{\\\"terms\\\":{\\\"task.taskType\\\":[\\\"maps_telemetry\\\",\\\"vis_telemetry\\\",\\\"actions:.server-log\\\",\\\"actions:.slac
k\\\",\\\"actions:.email\\\"]}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lte\\\":3}}},{\\\"range\\\":{\\\"task.runAt\\\":{\\\"lte\\\":\\\"now\\\"}}},{\\\"range\\\":{\\\"kiba
na.apiVersion\\\":{\\\"lte\\\":1}}}]}}]}},\\\"size\\\":10,\\\"sort\\\":{\\\"task.runAt\\\":{\\\"order\\\":\\\"asc\\\"}},\\\"seq_no_primary_term\\\":true}\",\"statusCode\":503,
\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"query\\\"
,\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}"}
{"type":"log","@timestamp":"2019-09-02T10:31:31Z","tags":["status","plugin:spaces@7.3.0","error"],"pid":8291,"state":"red","message":"Status changed from yellow to red - all s
hards failed: [search_phase_execution_exception] all shards failed","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2019-09-02T10:31:31Z","tags":["fatal","root"],"pid":8291,"message":"{ [search_phase_execution_exception] all shards failed :: {\"path\":\"/.kibana/
_count\",\"query\":{},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"should\\\":[{\\\"bool\\\":{\\\"must\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"graph-workspace\\\"}},{\\\"bool\
\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.graph-workspace\\\":\\\"7.0.0\\\"}}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"space\\\"}},
{\\\"bool\\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.space\\\":\\\"6.6.0\\\"}}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"map\\\"}},{\
\\"bool\\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.map\\\":\\\"7.2.0\\\"}}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"canvas-workpad\\
\"}},{\\\"bool\\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.canvas-workpad\\\":\\\"7.0.0\\\"}}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"exists\\\":{\\\"field\\\":\\
\"index-pattern\\\"}},{\\\"bool\\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.index-pattern\\\":\\\"6.5.0\\\"}}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"exists\\\":{
\\\"field\\\":\\\"visualization\\\"}},{\\\"bool\\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.visualization\\\":\\\"7.3.0\\\"}}}}]}},{\\\"bool\\\":{\\\"must\\\":[{
\\\"exists\\\":{\\\"field\\\":\\\"dashboard\\\"}},{\\\"bool\\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.dashboard\\\":\\\"7.3.0\\\"}}}}]}},{\\\"bool\\\":{\\\"mus
t\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"search\\\"}},{\\\"bool\\\":{\\\"must_not\\\":{\\\"term\\\":{\\\"migrationVersion.search\\\":\\\"7.0.0\\\"}}}}]}}]}}}\",\"statusCode\"
:503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[],\\\"type\\\":\\\"search_phase_execution_exception\\\",\\\"reason\\\":\\\"all shards failed\\\",\\\"phase\\\":\\\"quer
y\\\",\\\"grouped\\\":true,\\\"failed_shards\\\":[]},\\\"status\\\":503}\"}\n    at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:315:15)\n    at 
checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:274:7)\n    at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/s
rc/lib/connectors/http.js:166:7)\n    at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4929:19)\n    at IncomingMessage.e
mit (events.js:194:15)\n    at endReadableNT (_stream_readable.js:1103:12)\n    at process._tickCallback (internal/process/next_tick.js:63:19)\n  status: 503,\n  displayName: 
'ServiceUnavailable',\n  message:\n   'all shards failed: [search_phase_execution_exception] all shards failed',\n  path: '/.kibana/_count',\n  query: {},\n  body:\n   { error
:\n      { root_cause: [],\n        type: 'search_phase_execution_exception',\n        reason: 'all shards failed',\n        phase: 'query',\n        grouped: true,\n        f
ailed_shards: [] },\n     status: 503 },\n  statusCode: 503,\n  response:\n   '{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all sha
rds failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}',\n  toString: [Function],\n  toJSON: [Function],\n  isBoom: true,\n  isServer: true,\n
  data: null,\n  output:\n   { statusCode: 503,\n     payload:\n      { message:\n         'all shards failed: [search_phase_execution_exception] all shards failed',\n        
statusCode: 503,\n        error: 'Service Unavailable' },\n     headers: {} },\n  reformat: [Function],\n  [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavail
able' }"}

Yes, it is possible to have a single dedicated master node and two data-only nodes in your ES cluster. You must have made a mistake while configuring the cluster.
Configure these options in your yml file:

discovery.seed_hosts: ["master_node_ip"]
cluster.initial_master_nodes: ["master_node_ip"]
Also use the same cluster name across all nodes, and you will be good to go.
Finally, restart the service and check the cluster state by going to https://master_node_ip:9200/_cluster/state
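
For example, following the suggestion above, the yml of a data-only node could look like this (a sketch, assuming node-3 remains the only master-eligible node and node_3_ip is a placeholder for its address, reachable on the default transport port 9300; adjust names and addresses to your setup):

cluster.name: ElasticsearchStaging
node.name: node-1
node.master: false
node.data: true
discovery.seed_hosts: ["node_3_ip"]

and the dedicated master node:

cluster.name: ElasticsearchStaging
node.name: node-3
node.master: true
node.data: false
discovery.seed_hosts: ["node_3_ip"]
cluster.initial_master_nodes: ["node_3_ip"]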

cluster.initial_master_nodes should normally be set to the node.name of the master node(s), not their IP address(es). It's technically possible to use either, but using the IP address(es) seems to cause a good deal of confusion so the node name is the recommended practice.
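
For example, with the node names from the configs above (assuming node-3 stays the only master-eligible node), the bootstrap setting on node-3 would be:

cluster.initial_master_nodes: ["node-3"]

Keep in mind that this setting is only used the very first time the cluster forms; once the cluster has bootstrapped it is ignored and can be removed from the config.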
