Indices are being created on the master machines

Hi, I'm new to the Elastic world. We have a few master nodes, data nodes, Logstash nodes, and 2 ingest nodes. Four days ago, indices began to be written to the master nodes as well. I need your advice: what could be the reason, and what is the solution?

Thanks in advance.

Yes, this is normal: those nodes are also data nodes even though they are masters.
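For example (a quick check, assuming the cluster answers on localhost:9200), the _cat APIs show which roles each node really has and where the shards are sitting:

curl -s 'localhost:9200/_cat/nodes?v&h=ip,name,node.role,master'
curl -s 'localhost:9200/_cat/shards?v&h=index,shard,prirep,node'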

Thanks for the answer. I forgot to mention that the master nodes are master-only (though on the main master node I see only node.master: true without node.data: false, while the rest have node.master: true and node.data: false). Could that be the issue?

Could you share the config files of your nodes?

master1

# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: [192.168.126.51, _local_]
network.bind_host: 192.168.126.51

node.master: true

# Set a custom port for HTTP:
#http.port: 9200
# For more information, consult the network module documentation.

# --------------------------------- Discovery ----------------------------------
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
discovery.seed_hosts: ["192.168.126.52","192.168.126.53","192.168.126.54","192.168.126.55","192.168.126.56"]

# Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["192.168.126.55"]

master 2

network.host: [192.168.126.52, _local_]

node.master: true
node.data: false
#node.ingest: false

# Set a custom port for HTTP:
#http.port: 9200
# For more information, consult the network module documentation.

# --------------------------------- Discovery ----------------------------------
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
discovery.seed_hosts: ["192.168.126.51", "192.168.126.53"]

# Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["192.168.126.55"]

master 3

node.master: true
node.data: false
node.ingest: false

# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
# Elasticsearch performs poorly when the system is swapping the memory.

# ---------------------------------- Network -----------------------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: [192.168.126.53, _local_]

# Set a custom port for HTTP:
#http.port: 9200
# For more information, consult the network module documentation.

# --------------------------------- Discovery ----------------------------------
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
discovery.seed_hosts: ["192.168.126.52","192.168.126.51","192.168.126.54","192.168.126.55","192.168.126.56"]

# Bootstrap the cluster using an initial set of master-eligible nodes:
#cluster.initial_master_nodes: ["node-1", "node-2"]

By the way, shouldn't discovery.seed_hosts list the same number of hosts on all 3 masters? Also, the Logstash and ingest nodes shouldn't be in the list, should they?

Thanks in advance.

There is no such thing as a Logstash node: Logstash is an instance that runs outside the Elasticsearch cluster. Among the roles an Elasticsearch node can have there is indeed the ingest role, which pre-processes documents before the actual indexing happens, but you can just use Logstash for that instead.
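For a tiny illustration of what the ingest role does (the pipeline name and field below are made up, just a sketch):

curl -s -X PUT 'localhost:9200/_ingest/pipeline/my-pipeline' -H 'Content-Type: application/json' -d '
{
  "description": "hypothetical example: stamp a field on every document before it is indexed",
  "processors": [
    { "set": { "field": "ingested_by", "value": "ingest-node" } }
  ]
}'
# then index a document through it:
curl -s -X PUT 'localhost:9200/my-index/_doc/1?pipeline=my-pipeline' -H 'Content-Type: application/json' -d '{ "message": "hello" }'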

These are the parameters that allow the cluster to elect a master from among the master-eligible nodes, so the list should contain the master-eligible nodes.
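Concretely (just a sketch, assuming the three dedicated masters are 192.168.126.51, .52 and .53 as in the files above), every master-eligible node would normally carry the same lists:

discovery.seed_hosts: ["192.168.126.51", "192.168.126.52", "192.168.126.53"]
# only used the very first time the cluster is bootstrapped; entries must match
# each node's node.name (IPs are kept here only to mirror your existing files)
cluster.initial_master_nodes: ["192.168.126.51", "192.168.126.52", "192.168.126.53"]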

The default for node.data is true. You must explicitly set node.data: false if you do not want a node to be a data node. The full collection of settings for a dedicated master node is in the docs.
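So on master1, which is the one missing it, the role section would become something like this (a sketch of the legacy 7.x role settings only, not the whole file):

node.master: true
node.data: false
node.ingest: false   # optional: also drop the ingest role if this node should be master-only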

Thank you guys a lot .

I added node.data: false to the master's elasticsearch.yml file, but now the service can't be started. The log says:

"Node is started with node.data=false, but has shard data ...
....
....
Use 'elasticsearch-node repurpose' tool to clean up"

What is the best way to solve this issue? To run "elasticsearch-node repurpose"?

Thanks in advance.

The simplest thing to do is wipe this node and start again -- if your cluster health is green then this won't lose any data.

This is what I did, but because of the shards that were created before, the service can't be brought up; it says that I first have to delete all the indices on the host (it doesn't let me start the service with node.data=false).

Indeed - that's why (if your cluster health is green) I suggest wiping the node, i.e. deleting its whole data path. There are other ways forward if you cannot get your cluster health to green but they're much more complicated and risky.
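Roughly, for a package install (a sketch; it assumes the default path.data of /var/lib/elasticsearch, so check path.data in elasticsearch.yml first):

curl -s 'localhost:9200/_cluster/health?pretty'   # confirm status is green before touching anything
systemctl stop elasticsearch
rm -rf /var/lib/elasticsearch/*                   # wipes only this node's local data path
systemctl start elasticsearch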

Thank you .
