Node cluster

Hello everyone,

What could happen if all the nodes in my cluster are data nodes?
Do we have to have a master node within the cluster to manage the other nodes?

"nodes" : {
    "count" : {
      "total" : 4,
      "master_only" : 0,
      "data_only" : 0,
      "master_data" : 4,
      "client" : 0
    },

Thank you.

For a 4-node cluster, everything is correct here.
You have 4 nodes that can store/index data, and 4 nodes that can become master if the current master node fails.

Just don't forget to set discovery.zen.minimum_master_nodes to 3 in your elasticsearch.yml files.
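
For reference, a minimal sketch of the corresponding elasticsearch.yml line, to be set on each node (the comment shows the usual quorum calculation):

# elasticsearch.yml, on every master-eligible node
# quorum for 4 master-eligible nodes = (4 / 2) + 1 = 3
discovery.zen.minimum_master_nodes: 3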

No, I did not forget it. On the other hand, I am seeing some failures in my dashboards: they are too slow and sometimes they fail.

RAM is 80% used

What should I change?

"process" : {
      "cpu" : {
        "percent" : 70
      },
      "open_file_descriptors" : {
        "min" : 1700,
        "max" : 1748,
        "avg" : 1729
      }
    },
  
      "mem" : {
        "heap_used" : "36.6gb",
        "heap_used_in_bytes" : 39362479080,
        "heap_max" : "47.8gb",
        "heap_max_in_bytes" : 51400146944
      },
      "threads" : 280
    },
    "fs" : {
      "total" : "2.5tb",
      "total_in_bytes" : 2834552881152,
      "free" : "1.1tb",
      "free_in_bytes" : 1242280148992,
      "available" : "1022.8gb",
      "available_in_bytes" : 1098293264384,
      "spins" : "true"
    },

Hi!

Can you tell me which API returns this?

I have started 2 nodes in my Elasticsearch cluster, both on the same machine, but it seems there is an error in shard allocation.

curl -X GET "IP:9200/_cluster/stats?human&pretty"


For this question, have you tried this?

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%"
  }
}

I don't think it will resolve your problem, but you can try.
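
To see which shards are affected, the _cat/shards API lists every shard with its state, so you can spot the UNASSIGNED ones (IP is a placeholder):

curl -X GET "IP:9200/_cat/shards?v"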

 curl -X GET  "IP:9200/_cluster/settings?human&pretty"
{
  "persistent" : { },
  "transient" : { }
}

It seems you don't have any persistent or transient settings. Is that right?

Yes, exactly. But this has no direct link to the problem I currently have.

Yes, but I don't know why your dashboard fails.

I suppose the problem could come from the fact that all my nodes are data nodes. I don't really know.


Can you share Elasticsearch logs?

What is the output of:

GET /_cat/nodes?v
GET /_cat/indices?v
GET /_cat/health?v

curl -X GET  "XXXXXXXX1:9200/_cat/health?v"
epoch      timestamp cluster           status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1533809057 12:04:17  PR1_elasticsearch green           4         4    273 273    0    0        0             0                  -                100.0%



curl -X GET  "xxxxxxxx:9200/_cat/nodes?v"
host        ip          heap.percent ram.percent load node.role master name
xxxxxxx     xxxxxxxx           75          99 1.03 d         m      xxxxx
xxxxxxx     xxxxxxxx           76          99 1.15 d         *      xxxxxx
xxxxxxx     xxxxxxxx           77          96 0.96 d         m      xxxxxx
xxxxxxx     xxxxxxxx           77          96 0.83 d         m      xxxxxxx

When you replace values, please make sure to replace them with the same number of characters to preserve alignment. This is hard to read here.

You did not share:

GET /_cat/indices?v

Please do.

Also:

GET /_cat/nodes?v&h=id,hc,hm,rc,rm,r

curl -X GET  "IP1:9200/_cat/indices?v"
health status index                               pri rep docs.count docs.deleted store.size pri.store.size
       close  idx-pr1-2018.05.31
green  open   idx-pr1-2018.07.10                    4   0   26810265       233580     27.2gb         27.2gb
green  open   idx-pr1-2018.07.12                    4   0   44746353            0     41.4gb         41.4gb
green  open   idx-pr1-2018.07.11                    4   0   24837757            0     24.3gb         24.3gb
green  open   idx-pr1-2018.07.14                    4   0   25572880            0       22gb           22gb
green  open   idx-pr1-2018.07.13                    4   0   38307116            0     37.6gb         37.6gb
       close  idx-pr1-2018.05.30
green  open   flex2gateway                          4   0          0            0       636b           636b
green  open   idx-pr1-2018.07.15                    4   0   23033981            0     19.8gb         19.8gb
green  open   idx-pr1-2018.07.16                    4   0   27849293            0     26.4gb         26.4gb
green  open   idx-pr1-2018.07.17                    4   0   29240119            0       29gb           29gb
green  open   idx-pr1-2018.07.18                    4   0   31652960            0     32.2gb         32.2gb
green  open   idx-pr1-2018.07.19                    4   0   34845354            0     33.5gb         33.5gb
green  open   sawmill6cl.exe                        4   0          0            0       636b           636b
green  open   idx-pr1-2018.07.21                    4   0   29075955            0     24.1gb         24.1gb
green  open   idx-pr1-2018.07.20                    4   0   31747500            0     29.7gb         29.7gb
green  open   idx-pr1-2018.07.25                    4   0   28928470            0     27.8gb         27.8gb
green  open   idx-pr1-grokparsefailure-2018.07.31   4   0      13483            0      4.2mb          4.2mb
 .........
green  open   idx-pr1-2018.08.07                    4   0   18751654            0     17.5gb         17.5gb
green  open   idx-pr1-2018.08.08                    4   0   28011365            0     25.7gb         25.7gb
green  open   idx-pr1-2018.08.05                    4   0   30826869            0     24.9gb         24.9gb
green  open   idx-pr1-2018.08.06                    4   0   22930264            0     21.6gb         21.6gb
green  open   webui                                 4   0          0            0       636b           636b
green  open   idx-pr1-2018.07.30                    4   0   51936723            0     43.7gb         43.7gb
green  open   idx-pr1-2018.07.31                    4   0   56453887            0     47.9gb         47.9gb
green  open   idx-pr1-2018.08.09                    4   0   15500095            0     17.5gb         17.5gb
green  open   .kibanapr1                            1   0       3020            7      1.4mb          1.4mb
green  open   perl                                  4   0          0            0       636b           636b
green  open   idx-pr1-2018.08.04                    4   0   51318754            0     42.5gb         42.5gb
green  open   idx-pr1-2018.08.03                    4   0   39364728            0     37.3gb         37.3gb
green  open   idx-pr1-2018.08.02                    4   0   39895818            0       37gb           37gb
green  open   idx-pr1-2018.08.01                    4   0   47127645            0     41.8gb         41.8gb
       close  idx-pr1-2018.05.14
green  open   idx-pr1-grokparsefailure-2018.08.05   4   0      76514            0       13mb           13mb
green  open   idx-pr1-grokparsefailure-2018.08.04   4   0      78134            0     13.8mb         13.8mb
       close  idx-pr1-2018.05.13
green  open   idx-pr1-grokparsefailure-2018.08.03   4   0      77126            0       14mb           14mb
       close  idx-pr1-2018.05.16
green  open   idx-pr1-grokparsefailure-2018.08.02   4   0      24745            0      5.5mb          5.5mb
       close  idx-pr1-2018.05.15
       close  idx-pr1-2018.05.18
green  open   idx-pr1-grokparsefailure-2018.08.09   4   0      36836            0      8.6mb          8.6mb
       close  idx-pr1-2018.05.17
green  open   idx-pr1-grokparsefailure-2018.08.08   4   0      78418            0     14.6mb         14.6mb
green  open   idx-pr1-grokparsefailure-2018.08.07   4   0      77155            0     14.5mb         14.5mb
       close  idx-pr1-2018.05.19
green  open   idx-pr1-grokparsefailure-2018.08.06   4   0      79260            0     15.2mb         15.2mb
green  open   idx-pr1-2018.07.03                    4   0   34240138            0     32.8gb         32.8gb
green  open   idx-pr1-2018.07.02                    4   0   43707829            0     38.9gb         38.9gb
green  open   idx-pr1-2018.07.01                    4   0    8913476            0      7.3gb          7.3gb
       close  idx-pr1-2018.05.22
       close  idx-pr1-2018.05.23
       close  idx-pr1-2018.05.20
green  open   idx-pr1-grokparsefailure-2018.08.01   4   0       1537            0      4.7mb          4.7mb
       close  idx-pr1-2018.05.21
green  open   lcds                                  4   0          0            0       636b           636b
green  open   blazeds                               4   0          0            0       636b           636b
       close  idx-pr1-2018.05.27
       close  idx-pr1-2018.05.26
       close  idx-pr1-2018.05.25
green  open   idx-pr1-2018.07.08                    4   0   17424378            0     14.3gb         14.3gb
green  open   idx-pr1-2018.07.09                    4   0   26994642            0     25.5gb         25.5gb
       close  idx-pr1-2018.05.24
green  open   idx-pr1-2018.07.06                    4   0   36173805            0     33.7gb         33.7gb
green  open   idx-pr1-2018.07.07                    4   0   23657449            0     21.8gb         21.8gb
green  open   idx-pr1-2018.07.04                    4   0   25739564            0     25.2gb         25.2gb
       close  idx-pr1-2018.05.29
green  open   messagebroker                         4   0          0            0       636b           636b
       close  idx-pr1-2018.05.28
green  open   idx-pr1-2018.07.05                    4   0   25102937            0     24.6gb         24.6gb

curl -X GET  "IP:9200/_cat/nodes?v&h=id,hc,hm,rc,rm,r"
id      hc     hm     rc     rm r
nceF   9gb 11.9gb 14.6gb 15.5gb d
JQgq 9.2gb 11.9gb 15.1gb 15.5gb d
5_Em 9.2gb 11.9gb   15gb 15.5gb d
gNzA 9.1gb 11.9gb 14.9gb 15.5gb d

You probably have too many shards per node.
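
To check that, the _cat/allocation API shows how many shards each node holds, together with its disk usage (IP is a placeholder):

curl -X GET "IP:9200/_cat/allocation?v"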

May I suggest you look at the following resource about sizing:

https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing

In most if not all cases, your daily indices would probably be fine with one or two shards instead of 4.
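
As a sketch, an index template like this would create future daily indices with a single primary shard. The template name is an assumption, and on 6.0 and later the "template" key is replaced by "index_patterns": ["idx-pr1-*"]:

PUT /_template/idx-pr1
{
  "template": "idx-pr1-*",
  "settings": {
    "number_of_shards": 1
  }
}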

You also have a lot of closed indices, which is OK, but maybe you'd like to remove the old ones?
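
For example, assuming you no longer need the closed May indices, a wildcard delete like this would remove them (it works as long as action.destructive_requires_name is left at its default of false; double-check the pattern before running it):

curl -X DELETE "IP:9200/idx-pr1-2018.05.*"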
You set the heap to 12gb on machines with 16gb of RAM. You should not use more than half of the available RAM, so the heap should be 8gb at most.
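
As a sketch, with 16gb of RAM per machine the heap would be set like this (where it is configured depends on your version: jvm.options from 5.0 on, the ES_HEAP_SIZE environment variable on 2.x):

# config/jvm.options (5.0+): same value for min and max
-Xms8g
-Xmx8g

# 2.x: environment variable instead of jvm.options
ES_HEAP_SIZE=8g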
Maybe you could start new nodes if you still need to keep all the data you have, though?
