How to increase the number of shards in an existing index

Dear all,
I have an Elasticsearch cluster that collects logs.
The documentation I read says the default number of shards per index is 5, but when I create an index it has only one shard. How can I increase the number of shards in that index without having to create a template and a new index?

Thank you.

Why do you want to increase the number of shards?

But to answer your question, you can use the split API or you can create a new index and reindex.
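For example, a minimal sketch with hypothetical names (my-index, my-index-split, and my-index-new are placeholders, and 4 shards is just an illustration). The split API requires the source index to be made read-only first, and the target shard count must be a multiple of the source's:

PUT /my-index/_settings
{
  "index.blocks.write": true
}

POST /my-index/_split/my-index-split
{
  "settings": {
    "index.number_of_shards": 4
  }
}

Alternatively, create a new index with the shard count you want and reindex into it:

PUT /my-index-new
{
  "settings": {
    "index.number_of_shards": 4
  }
}

POST /_reindex
{
  "source": { "index": "my-index" },
  "dest": { "index": "my-index-new" }
}

Reindex is slower because it copies every document, but it lets you pick any shard count and change mappings along the way.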

You should probably change the template as well if you want that to happen automatically every day.
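A sketch using the legacy _template API, assuming daily indices and placeholder names (my-index-* and the shard count are illustrations, adjust them to your own naming):

PUT /_template/my-logs
{
  "index_patterns": ["my-index-*"],
  "settings": {
    "index.number_of_shards": 4,
    "index.number_of_replicas": 1
  }
}

Every new index matching the pattern will then be created with 4 primary shards automatically.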

Thanks for responding.
I want to increase the number of shards because I was hoping it would help with the timeout problems I keep having when viewing a very large log index.

What is the output of:

GET /
GET /_cat/nodes?v
GET /_cat/health?v
GET /_cat/indices?v

The output of GET / is

{
  "name" : "coordinate-node-01",
  "cluster_name" : "",
  "cluster_uuid" : "L9t1WyZfRzyomh4AOnNRgQ",
  "version" : {
    "number" : "7.4.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "22e1767283e61a198cb4db791ea66e3f11ab9910",
    "build_date" : "2019-09-27T08:36:48.569419Z",
    "build_snapshot" : false,
    "lucene_version" : "8.2.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

The output of GET /_cat/nodes?v is:

ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
              6          96  27    0.73    0.94     1.01 m         -      master-node-02
             33          92   7    0.45    0.44     0.37 -         -      coordinate-node-02
              5          96  33    1.63    1.38     1.47 mv        -      master-node-03
             15          75   0    0.28    0.37     0.43 d         -      data-node-01
             23          98  13    3.34    3.62     4.04 d         -      data-node-03
              7          92   1    0.09    0.10     0.08 i         -      ingest-node-01
             27          97  36    1.25    1.29     1.28 m         *      master-node-01
             15          98   5    1.84    1.99     2.21 d         -      data-node-02
             64          92   7    0.06    0.18     0.22 -         -      coordinate-node-01

The output of GET /_cat/health?v is:

epoch      timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1572855595 08:19:55           green          9         3    120  74    0    0        0             0                  -                100.0%

and the output of GET /_cat/indices?v is:

health status index                             uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   wineventlog-2019.10               w78FS0ESQ6SDBFmP6cVAVw   1   0 1371220897            0    690.6gb        690.6gb
green  open   wineventlog-2019.11               Q6YaIoZgRH-8RhwqYgXDhg   1   1  837432644            0        1tb        515.9gb

When I view logs in the wineventlog index over a time range of more than 1 hour, it always times out.

That's indeed too much data for a single shard.

You probably need at least 14 primary shards per index (roughly 690gb ÷ 50gb per shard).
Another solution would be to change from daily indices to hourly indices, or use the rollover API and roll over every 50gb...
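A sketch of the rollover approach, assuming a write alias named wineventlog (the alias name, the -000001 suffix, and the 50gb threshold are illustrative):

PUT /wineventlog-000001
{
  "aliases": {
    "wineventlog": { "is_write_index": true }
  }
}

POST /wineventlog/_rollover
{
  "conditions": {
    "max_size": "50gb"
  }
}

You write to the alias, and each time the condition is met the rollover call creates wineventlog-000002, -000003, and so on, pointing the write alias at the new index. ILM can trigger this automatically.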

Apart from increasing the number of shards, which looks necessary here, you can also increase the number of replicas to spread your read traffic across more nodes. But replicas do come with extra cost.
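The replica count, unlike the primary shard count, can be changed on a live index. For example, bumping wineventlog-2019.10 (currently 0 replicas per the _cat/indices output above) to 1 replica:

PUT /wineventlog-2019.10/_settings
{
  "index.number_of_replicas": 1
}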

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.