Cluster block exception

hi there!

Now I am getting a cluster_block_exception in Kibana, but when I check the disk space, 93.2gb is still available.

```
shards disk.indices disk.used disk.avail disk.total disk.percent host    ip      node
   287       24.9gb    26.8gb      7.4gb     34.3gb           78 x.x.x.x x.x.x.x TvK9fQh
     0           0b       5gb     93.2gb     98.3gb            5 x.x.x.x x.x.x.x sdJudfu
   288       25.6gb    27.4gb      6.8gb     34.3gb           80 x.x.x.x x.x.x.x XBIvEen
   287       24.9gb    26.8gb      7.5gb     34.3gb           78 x.x.x.x x.x.x.x Iq3rplM
```
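For reference: output like this comes from the cat allocation API, and the block behind a cluster_block_exception can be inspected via the index settings. A minimal sketch, assuming Elasticsearch answers on localhost:9200 and that Kibana uses the default .kibana index:

```
# Per-node disk usage and shard counts (the table above)
curl -X GET "localhost:9200/_cat/allocation?v"

# Check whether an index-level block such as index.blocks.read_only is set
curl -X GET "localhost:9200/.kibana/_settings?pretty"
```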

In this case, which node is Logstash writing to?
How can I find out?

I would appreciate it if someone could point it out to me.
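One way to check, assuming Logstash writes to the default logstash-* daily indices (an assumption, not confirmed above), is to list where those shards are allocated:

```
# Show each Logstash shard and the node it lives on
curl -X GET "localhost:9200/_cat/shards/logstash-*?v&h=index,shard,prirep,store,node"
```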

Please don't post images of text as they are hardly readable and not searchable.

Instead, paste the text and format it with the </> icon. Check the preview window.

Is the node with the large amount of available disk a data node? Is it running exactly the same version of Elasticsearch as the other nodes?

Yes, I did paste the text, but it wasn't readable; the columns and the data were misaligned, which is why I posted an image.

It's because you are not using </> to format your code. You can also use markdown style like

```
CODE
```

noted with thanks

Yes, it is a data node, and the versions are the same, 5.5.2. We actually use AWS EC2.
Last time we used t2.micro instances and got a cluster block exception after several weeks of running.
We found that the disk was full.
We then upgraded from t2.micro to c4.large and it was OK.
But, weirdly, after 2 weeks the cluster block exception happened again.
The node with the large amount of available disk is the c4.

What should I do, @Christian_Dahlqvist? :nerd_face:
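For context on the disk numbers above: Elasticsearch stops allocating new shards to a node once it crosses the disk watermarks, which in 5.x default to 85% (low) and 90% (high) of disk used. A minimal, hedged sketch of how to inspect and, if really needed, temporarily adjust them (the values below are examples, not a recommendation):

```
# Show any persistent/transient cluster settings that have been overridden
curl -X GET "localhost:9200/_cluster/settings?pretty"

# Example: raise the watermarks transiently (reset on a full cluster restart)
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%"
  }
}'
```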

Are all the nodes running exactly the same version of Elasticsearch? Can you show us the output of the cat nodes API:

curl -X GET "localhost:9200/_cat/nodes?v&h=id,ip,port,v,m"
id   ip            port v     m
XBIv x.x.x.x  9300 5.5.2 -
TvK9 x.x.x.x  9300 5.5.2 -
Iq3r x.x.x.x 9300 5.5.2 *
sdJu x.x.x.x 9300 5.5.2 -
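A cluster_block_exception means some cluster-level or index-level block is active. If an affected index (for example .kibana, used here only as an illustration) shows index.blocks.read_only: true in its settings, the block can be cleared once the underlying disk pressure is resolved; a hedged sketch:

```
curl -X PUT "localhost:9200/.kibana/_settings" -H 'Content-Type: application/json' -d'
{
  "index.blocks.read_only": false
}'
```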

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.