File system becomes read only and all shards fail


(Divya) #1

Hi,

I am using Elasticsearch to index ~5000 documents every 30 seconds. After ~18 hours the file system becomes read-only and later all shards for that cluster fail. I get a 503 response when trying to access the index, and if I try to restart the Docker container I see the error below.

```
{
  "error" : {
    "root_cause" : [ ],
    "type" : "search_phase_execution_exception",
    "reason" : "all shards failed",
    "phase" : "query",
    "grouped" : true,
    "failed_shards" : [ ]
  },
  "status" : 503
}
```

```
Error response from daemon: Cannot restart container 1bb: Error getting container 1bbde2842c241d25244c393ff976946b8029bb061a9a023f42d545dbe77b9c73 from driver devicemapper: Error mounting '/dev/mapper/docker-253:2-50449012-1bbde2842c241d25244c393ff976946b8029bb061a9a023f42d545dbe77b9c73' on '/var/lib/docker/devicemapper/mnt/1bbde2842c241d25244c393ff976946b8029bb061a9a023f42d545dbe77b9c73': invalid argument
Error: failed to restart containers: [1bb]
```

To add more details:
This happens on 6.0 as well as 6.1. I am currently using Docker to install the cluster.


(David Pilato) #2

Please format your code, logs, or configuration files using the </> icon as explained in this guide, and not the citation button. It will make your post more readable.

Or use markdown style like:

```
CODE
```

There's a live preview panel for exactly this reason.

Lots of people read these forums, and many of them will simply skip over a post that is difficult to read, because it's just too large an investment of their time to try and follow a wall of badly formatted text.
If your goal is to get an answer to your questions, it's in your interest to make it as easy to read and understand as possible.
Please update your post.

Could you share the elasticsearch logs as well?


(Divya) #3

Thank you for the response. I think I found the issue:

```
sudo docker info
Containers: 4
Images: 175
Server Version: 1.9.0
Storage Driver: devicemapper
 Pool Name: docker-253:2-50449012-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem: xfs
 Data file: /dev/docker_data
 Metadata file: /dev/docker_md
 Data Space Used: 11.92 GB <<<<<<<<<<<<<<<<<
 Data Space Total: 14 GB
 Data Space Available: 2.083 GB
```

Data space used continuously increases while the Elasticsearch cluster runs. Something is writing data to the container file system rather than to the devicemapper volume:

```
[root@ce6015515c39 elasticsearch]# df -kh
Filesystem                                                                                          Size  Used Avail Use% Mounted on
/dev/mapper/docker-253:2-50449012-ce6015515c395c52c3ccc6496bf5d644bf5069aac5da0b0f852a64312158fe3f   99G  2.0G   92G   3% /
tmpfs                                                                                               3.4G     0  3.4G   0% /dev
tmpfs                                                                                               3.4G     0  3.4G   0% /sys/fs/cgroup
/dev/vda2                                                                                            10G  5.4G  4.7G  54% /etc/hosts
shm                                                                                                  64M     0   64M   0% /dev/shm
[root@ce6015515c39 elasticsearch]#
```

Are there any settings which I may be missing?
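If Elasticsearch is writing its indices into the container's writable layer, one common fix is to bind-mount a host directory as the data path so index data lands on the host file system instead of the devicemapper thin pool. A minimal sketch, assuming the official image's default data path of `/usr/share/elasticsearch/data` and a hypothetical host directory `/data/elasticsearch` (adjust names and version tag to your setup):

```shell
# Hypothetical example: bind-mount a host directory for Elasticsearch data
# so writes go to the host file system, not the container's copy-on-write layer.
docker run -d --name es \
  -p 9200:9200 \
  -v /data/elasticsearch:/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch:6.1.0
```

With this mount in place, `docker info`'s "Data Space Used" should no longer grow with the index.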


(Divya) #4

Here's the configuration:

```
[root@ce6015515c39 elasticsearch]# more config/elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0

# minimum_master_nodes need to be explicitly set when bound on a public IP
# set to 1 to allow single node clusters
# Details: https://github.com/elastic/elasticsearch/pull/17288
discovery.zen.minimum_master_nodes: 1
xpack.license.self_generated.type: basic
[root@ce6015515c39 elasticsearch]#
```

(system) closed #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.