Changing node roles in an existing Elasticsearch environment

Good morning everybody.

Since my job will require me to manage a production Elasticsearch environment in the future, I decided to set up a test ES environment at home.

This environment has three multi-role nodes. All three are master nodes, and since I didn't change any of the role options in their respective elasticsearch.yml files, I guess that all of them are data nodes as well (they all store shards and documents), and that they are all coordinating and ingest nodes too.

I thought this was the best approach, as I didn't have much RAM and storage, but soon I will have extra hardware resources, so I'm considering converting my three multi-role servers into dedicated master nodes. I will also create at least a couple of data nodes.
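For reference, this is roughly what the split looks like in each node's elasticsearch.yml, assuming the ES 7.9+/8.x node.roles syntax (a sketch, not taken from my actual config):

```
# elasticsearch.yml on a dedicated master node
node.roles: [ master ]

# elasticsearch.yml on a dedicated data node
node.roles: [ data ]
```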

The point here is that I've been using and setting up Filebeat and Metricbeat, which means I've already uploaded some index templates and created ingest indices on those master servers. My indices rotate on a daily basis and get removed when they are older than seven days. So I guess that once I have my dedicated data nodes, new indices will be created on them as soon as my master nodes stop being data nodes too, and after a week the indices that remained on the dedicated master nodes will disappear.
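In case it helps anyone reading this: a daily-rollover, delete-after-seven-days lifecycle can be expressed as an ILM policy along these lines (a sketch; the policy name is illustrative, and the Beats ship their own default policies):

```
PUT _ilm/policy/daily-7d
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "1d" }
        }
      },
      "delete": {
        "min_age": "7d",
        "actions": { "delete": {} }
      }
    }
  }
}
```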

But there are other indices whose purpose is unknown to me, and they worry me.

If I go to Kibana -> Index Management I can find the following indices:
.items-default-000001
.lists-default-000001
metrics-endpoint.metadata_current_default

The following 'Data Streams':
filebeat-8.2.1
metricbeat-8.2.1

The following 'Index templates':
ecs-logstash
filebeat-8.2.1
ilm-history
logs
logs-endpoint.alerts
logs-endpoint.events.file
logs-endpoint.events.library
logs-endpoint.events.network
logs-endpoint.events.process
logs-endpoint.events.registry
logs-endpoint.events.security
metricbeat-8.2.1
metrics
metrics-endpoint.metadata
metrics-endpoint.metrics
metrics-endpoint.policy
metrics-metadata-current
metrics-metadata-united
synthetics

And also a good number of Component templates.

So those are my goals. Could someone please advise me on what I should take into consideration? I'm sure I'm missing key things.

Thank you in advance.

Carlos T.

Nodes, not servers, but yes that summary is correct.

These are system configurations and indices; you don't need to worry about them, as they are automatically managed.

Thank you very much Warkolm.

I've already added two data nodes. From the very beginning, ES started moving shards from the three master nodes to the data nodes. I have now changed the ILM settings so the older indices get removed.

I don't have much time for my home environment right now, but I intend to remove the data role gradually (one node at a time) from the three master nodes. So far everything is going as you said.

So thank you again and long life to the dark side of the force.

So, I finally took the time to investigate and make progress.
After applying ILM, my Beats indices disappeared from the multi-role nodes and started being created on the data nodes. At that point I reconfigured the three multi-role nodes to be master-only:

node.roles: [ master ]

At that point the environment failed to start, reporting that a node that contains shards can't stop being a data node. And the thing is, the Beats indices are just a part of the whole index set.
So what I did was transfer the rest of the shards from my multi-role nodes (IPs ending in 111, 112 and 113) to my data nodes (IPs ending in 115 and 116).

So I went to the Dev Tools console and ran:

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.require._ip": ["192.168.0.115", "192.168.0.116"]
  }
}

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.exclude._ip": ["192.168.0.111", "192.168.0.112", "192.168.0.113"]
  }
}

A few minutes later, 100% of the indices had moved to the data nodes, not just the Beats ones like before.
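For anyone repeating this, the move can be verified with the cat APIs before removing the data role (the column selection via h= is optional):

```
GET _cat/allocation?v
GET _cat/shards?v&h=index,shard,prirep,node
```

Once the multi-role nodes report zero shards, the role change should no longer be rejected.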

Once I had done that, I could change the multi-role nodes into dedicated master nodes.
I restarted the 5 nodes and everything seems to work fine: dashboards, HA, ingestion, etc.
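One follow-up worth noting (my own assumption about good hygiene, not something the cluster requires): the persistent allocation filters set earlier stay in the cluster state and will keep constraining future shard allocation, so once the role change is complete they can be cleared by setting them to null:

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.require._ip": null,
    "cluster.routing.allocation.exclude._ip": null
  }
}
```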

Thanks all for your help.

Carlos T.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.