S3 optimization in ELK

Elasticsearch needs to access index files regularly in order to determine whether indices contain relevant data. If you were using searchable snapshots with partially mounted indices, most of the data would live in S3 while frequently accessed files would be held in a local cache, reducing the need for S3 access. If you have instead mounted an S3 bucket as a data volume, I would expect quite a lot of random access. Standard S3 may cope with this (I have never used that type of setup), but as far as I know S3 Glacier would be a very poor match. I am not sure whether it would work at all, and would expect it to severely degrade query performance.
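For reference, a partially mounted index is created with the searchable snapshots `_mount` API using `storage=shared_cache`; a minimal sketch below, where the repository name `my_repo`, snapshot name `my_snapshot`, and index name `my-index` are placeholders for your own setup:

```
POST /_snapshot/my_repo/my_snapshot/_mount?storage=shared_cache&wait_for_completion=true
{
  "index": "my-index"
}
```

Mounted this way, Elasticsearch keeps only a local shared cache of recently read parts of the index on disk and fetches everything else from the S3-backed repository on demand.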

Got it, Christian. Appreciate your response!
Just to clarify: if we were using searchable snapshots with partially mounted indices, would searching data moved from S3 to S3 Glacier still work?

S3 Glacier is designed for data that is rarely accessed, which will not be the case for any data that is searchable at will. I therefore do not think S3 Glacier is suitable in any of these scenarios.
