Testing searchable snapshots on a fast flash-based NFS mount gives errors with cache:false
- On a 20-node Elasticsearch 7.5.0 cluster, took a snapshot of a 3.2 TB index (40 shards, no replicas) and stored it on the NFS mount.
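For reference, the snapshot was taken with the regular snapshot create API; a minimal sketch, where `nfs_repo` and `snapshot_1` are placeholder names rather than the actual ones used:

```sh
# nfs_repo / snapshot_1 are placeholder names for the actual repo and snapshot.
curl -X PUT "localhost:9200/_snapshot/nfs_repo/snapshot_1?wait_for_completion=true" \
  -H 'Content-Type: application/json' -d'
{
  "indices": "apc-original-idx",
  "include_global_state": false
}'
```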
- Built Elasticsearch 8.0 from the GitHub master branch, set it up on the 20-node cluster, and enabled a trial license.
- Registered the NFS repository using the path.repo mount directory; the snapshots are listed correctly.
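The registration itself was the standard `fs` repository call; a sketch, assuming a placeholder repository name (`nfs_repo`) and mount path:

```sh
# The location must be (under) a directory listed in path.repo in
# elasticsearch.yml on every node; the name and path here are placeholders.
curl -X PUT "localhost:9200/_snapshot/nfs_repo" \
  -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": { "location": "/mnt/nfs/es-snapshots" }
}'
```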
- Mounted the index from the snapshot with `index.store.snapshot.cache.enabled: false`, using this request body:

```json
{
  "index": "apc-original-idx",
  "renamed_index": "apc-snapshot-idx",
  "index_settings": {
    "index.number_of_replicas": 0,
    "index.store.snapshot.cache.enabled": "false",
    "index.store.snapshot.cache.prewarm.enabled": "false"
  },
  "ignored_index_settings": [ "index.refresh_interval" ]
}
```
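The body above was POSTed to the mount API; the full request was along these lines (the repository and snapshot names, `nfs_repo` and `snapshot_1`, are placeholders for the actual ones):

```sh
# nfs_repo / snapshot_1 are placeholder names for the actual repo and snapshot.
curl -X POST "localhost:9200/_snapshot/nfs_repo/snapshot_1/_mount?wait_for_completion=true" \
  -H 'Content-Type: application/json' -d'
{
  "index": "apc-original-idx",
  "renamed_index": "apc-snapshot-idx",
  "index_settings": {
    "index.number_of_replicas": 0,
    "index.store.snapshot.cache.enabled": "false",
    "index.store.snapshot.cache.prewarm.enabled": "false"
  },
  "ignored_index_settings": [ "index.refresh_interval" ]
}'
```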
- When I search this mounted index, I get errors:
```json
{
  "error" : {
    "root_cause" : [
      {
        "type" : "e_o_f_exception",
        "reason" : "Reading past end of file [position=73972115, length=6940751] for DirectBlobContainerIndexInput{resourceDesc=randomaccess, fileInfo=[name: __QaYB1LDURCSIkt840sPV8g, numberOfParts: 1, partSize: 8192pb, partBytes: 9223372036854775807, metadata: name [_md.nvd], length [34703814], checksum [1c71uzc], writtenBy [8.3.0]], offset=6940794, length=6940751, position=7341534}"
      },
      {
        "type" : "runtime_exception",
        "reason" : "runtime_exception: Invalid vInt detected (too many bits)"
      },
      {
        "type" : "array_index_out_of_bounds_exception",
        "reason" : "Index -79 out of bounds for length 33"
      },
      {
        "type" : "array_index_out_of_bounds_exception",
        "reason" : "Index -64 out of bounds for length 33"
      },
```
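The numbers in the EOFException are internally consistent with a read past the slice boundary: the reported position exceeds the declared slice length, even though the underlying file (`_md.nvd`, 34703814 bytes) is much larger. A quick check with the values copied from the error message:

```shell
# Values copied from the DirectBlobContainerIndexInput error above.
length=6940751        # declared slice length
position=7341534      # read position within the slice
file_length=34703814  # full on-disk length of _md.nvd

# The read position is past the end of the slice...
echo $(( position > length ))   # prints 1 (true)
# ...by this many bytes:
echo $(( position - length ))   # prints 400783
# ...yet still well inside the full file:
echo $(( position < file_length ))   # prints 1 (true)
```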
- When I mount the snapshot with `index.store.snapshot.cache.enabled: true`, search works; however, I see a lot of data (300-350 GB) written to the hot tier under the path.data folder, and additional data is written after each search.
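A straightforward way to watch this growth is to check disk usage under path.data directly and via the cat allocation API (the path below is an example; substitute the actual path.data location):

```sh
# Example path; substitute your actual path.data location.
du -sh /var/lib/elasticsearch/nodes

# Per-node disk usage as seen by the cluster:
curl "localhost:9200/_cat/allocation?v"
```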
Is it possible to make searchable snapshots work without copying large amounts of data from path.repo to path.data?
What does the setting `"index.store.snapshot.uncached_chunk_size": "4k"` do? I did not see any change in behavior when I set it.
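For completeness, this is how the setting was applied: added to the index_settings of the same mount request (repository and snapshot names are placeholders, as above):

```sh
# Same mount request as before, with the uncached chunk size setting added.
curl -X POST "localhost:9200/_snapshot/nfs_repo/snapshot_1/_mount?wait_for_completion=true" \
  -H 'Content-Type: application/json' -d'
{
  "index": "apc-original-idx",
  "renamed_index": "apc-snapshot-idx",
  "index_settings": {
    "index.store.snapshot.cache.enabled": "false",
    "index.store.snapshot.uncached_chunk_size": "4k"
  }
}'
```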