Do we have to set up Curator on all nodes of the cluster (each node is on its own server)?
No. Only one instance is necessary, and it doesn't have to be on a cluster node. It can run anywhere that can reach the cluster's HTTP/HTTPS port.
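For illustration only (the hostname and file paths below are placeholders, not taken from this thread), a minimal single-machine Curator 5.x setup might look like this:

# /etc/curator/curator.yml -- hypothetical path; the client section points at any one reachable node
client:
  hosts:
    - es-node-1.example.com   # any node whose HTTP port (9200 by default) is reachable
  port: 9200
  use_ssl: False
  timeout: 30
logging:
  loglevel: INFO

# Run Curator from that same machine against an actions file:
curator --config /etc/curator/curator.yml /etc/curator/delete_old_indices.yml

Because Curator talks to the cluster over its REST API, the actions it performs (such as deleting indices) apply cluster-wide, not just to the node it connects to.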
I don't know why Elasticsearch won't start:
[root@frghcslnetv12 ~]# systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2018-06-13 11:53:44 CEST; 10s ago
Docs: http://www.elastic.co
Process: 24501 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 24501 (code=exited, status=1/FAILURE)
Jun 13 11:53:42 frghcslnetv12 systemd[1]: Started Elasticsearch.
Jun 13 11:53:42 frghcslnetv12 systemd[1]: Starting Elasticsearch...
Jun 13 11:53:44 frghcslnetv12 systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Jun 13 11:53:44 frghcslnetv12 systemd[1]: Unit elasticsearch.service entered failed state.
Jun 13 11:53:44 frghcslnetv12 systemd[1]: elasticsearch.service failed.
[root@frghcslnetv12 elasticsearch]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vgroot-root 22132164 6197168 15934996 29% /
devtmpfs 3980968 0 3980968 0% /dev
tmpfs 3997156 0 3997156 0% /dev/shm
tmpfs 3997156 12220 3984936 1% /run
tmpfs 3997156 0 3997156 0% /sys/fs/cgroup
/dev/sda1 1038336 212980 825356 21% /boot
/dev/mapper/vgdata-lvdata 50442784 47857392 0 100% /data
tmpfs 799432 0 799432 0% /run/user/0
My data disk is full. I use Curator on the other node, but it doesn't clear this node. Can you help me?
This could be for a few reasons.
- One of your indices is big enough that its shards overwhelmed one box.
- You had one or more nodes fail, and Elasticsearch promoted replicas to primaries on the remaining boxes, filling them.
- Your nodes are not (or are no longer) clustered properly (a split-brain scenario perhaps), so a delete to one node did not propagate across the entire cluster; see the quick check sketched at the end of this reply.
- Something I haven't listed here.
If you're full, there's not much I can do to help. You'll have to delete some or all of what's in the /data path. Hopefully you can restore from a snapshot.
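If it helps, here is a rough sketch of how you could check those possibilities by hand, run against any node that is still up (the hostname and index name below are placeholders, and the DELETE is destructive, so verify what you are removing first):

curl -s 'http://localhost:9200/_cat/nodes?v'                        # do all of your nodes show up here?
curl -s 'http://localhost:9200/_cluster/health?pretty'              # overall cluster state
curl -s 'http://localhost:9200/_cat/allocation?v'                   # disk use per node
curl -s 'http://localhost:9200/_cat/indices?v&s=store.size:desc'    # biggest indices first
curl -s -XDELETE 'http://localhost:9200/logstash-2018.05.01'        # example index name only

If the nodes list is shorter than you expect, the boxes are not acting as one cluster, and a Curator run against one of them will not clean up the others.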