I tried to reduce the number of shards in ES 6.5 from the default (5) to 1 using:
'PUT _template/temp_new1
{
  "index_patterns" : ["test1-*"],
  "settings" : {
    "number_of_shards" : 1,
    "number_of_replicas" : 1
  }
}'
It worked on a single-node cluster, but not on the 3-node cluster.
How can I reduce the number of shards in my 3-node cluster?
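A quick way to check whether the template was stored and whether a newly created index actually picks it up (a sketch; test1-03 is just a hypothetical index name matching the pattern):

'GET _template/temp_new1

PUT test1-03
GET test1-03/_settings'

Note that an index template only applies to indices created after the template was added; existing indices keep the shard count they were created with.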
warkolm
(Mark Walkom)
August 20, 2020, 6:06am
Welcome to our community!
What do you mean by "didn't work"?
The number of shards did not change for a newly created index.
warkolm
(Mark Walkom)
August 20, 2020, 6:25am
Can you show us the output of _cat/indices/test*?v?
warkolm:
_cat/indices/test*?v
From the 3-node cluster:
'test1-02 2 p STARTED 0 261b x.x.x.x node-1
test1-02 2 r UNASSIGNED
test1-02 4 p STARTED 0 261b x.x.x.x node-1
test1-02 4 r UNASSIGNED
test1-02 3 p STARTED 0 261b x.x.x.x node-1
test1-02 3 r UNASSIGNED
test1-02 1 p STARTED 0 261b x.x.x.x node-1
test1-02 1 r UNASSIGNED
test1-02 0 p STARTED 0 261b x.x.x.x node-1
test1-02 0 r UNASSIGNED '
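The listing above shows five primary shards (0–4) for test1-02, all on node-1, with every replica unassigned — so this index was still created with the 5-shard default. To check the shard count of an existing index directly, something like this should work (a sketch; the filter_path parameter just trims the response):

'GET test1-02/_settings?filter_path=*.settings.index.number_of_shards'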
warkolm
(Mark Walkom)
August 21, 2020, 1:05am
Thanks.
Were these indices created before or after you added the template?
What does your elasticsearch.yml file look like on each node of your 3-node cluster? Have you got minimum_master_nodes set correctly to avoid split-brain scenarios?
# ======================== Elasticsearch Configuration =========================
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
# Please consult the documentation for further information on configuration options:
# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
#cluster.name: my-application
# ------------------------------------ Node ------------------------------------
# Use a descriptive name for the node:
node.name: node-1
#node.master: true
#node.data: true
#node.ingest: true
# Add custom attributes to the node:
#node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /var/lib/elasticsearch
# Path to log files:
path.logs: /var/log/elasticsearch
# ----------------------------------- Memory -----------------------------------
# Lock the memory on startup:
#bootstrap.memory_lock: true
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
# Elasticsearch performs poorly when the system is swapping the memory.
# ---------------------------------- Network -----------------------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: x.x.x.x
# Set a custom port for HTTP:
http.port: 9200
# For more information, consult the network module documentation.
# --------------------------------- Discovery ----------------------------------
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
discovery.zen.ping.unicast.hosts: ["x.x.x.x","x.x.x.x","x.x.x.x"]
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#discovery.zen.minimum_master_nodes: 2
# For more information, consult the zen discovery module documentation.
# ---------------------------------- Gateway -----------------------------------
# Block initial recovery after a full cluster restart until N nodes are started:
#gateway.recover_after_nodes: 3
# For more information, consult the gateway module documentation.
# ---------------------------------- Various -----------------------------------
# Require explicit names when deleting indices:
#action.destructive_requires_name: true
No, minimum zen nodes have not been set.
Then the cluster is not correctly configured, which could cause split-brain problems. Make sure minimum_master_nodes is set to 2 on all nodes and restart them.
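For a 3-node cluster, the majority formula quoted in the elasticsearch.yml comment (master-eligible nodes / 2 + 1, with integer division) gives 2; a minimal sketch of the arithmetic:

```python
# Majority quorum for zen discovery, as per the elasticsearch.yml comment:
# master-eligible nodes / 2 + 1, using integer division.
def minimum_master_nodes(master_eligible: int) -> int:
    return master_eligible // 2 + 1

print(minimum_master_nodes(3))  # 3-node cluster -> prints 2
```

With this value, any 2 of the 3 nodes can elect a master, but a single isolated node cannot, which is what prevents split brain.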
Thanks a lot for the solution. Will try it.
Thanks a lot!!!! It worked.
system
(system)
Closed
September 23, 2020, 5:17am
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.