Hi,
I'm new to Elasticsearch.
I have a few questions about designing an Elasticsearch architecture.
- How do I re-index if a shard holding a large amount of data gets corrupted? What is the best way to do it? (sketch 1 after this list shows what I'm imagining)
- What is the delta time between a primary shard and its replica shard, and how do we measure it? (see sketch 2)
- How do I quantify indexing performance with a large dataset?
- What happens if a primary shard is corrupted before replication completes?
- What is the best practice for scaling Elasticsearch if we ingest 1 TB of data per day, eventually growing to 100 TB per day? In that scenario, what is the recommended number of shards and replicas?
- Is there a sizing tool for Elasticsearch that considers CPU/RAM/storage/network? If there isn't one, how do we calculate it ourselves? (sketch 3 shows my rough math)
- Does Elasticsearch compress data? If it does, what compression method does it use, and how does it restore the data when it is needed? (see sketch 4)
- Does Elasticsearch have an archive mechanism? (see sketch 5)
- Can I use an NFS volume for Elasticsearch?
- How much downtime should I plan for during an Elasticsearch software upgrade? (see sketch 6)
- What best practice should I follow if my entire dataset gets corrupted?
- If part of the data gets corrupted while Logstash is pushing it to Elasticsearch, what should I do?
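Sketch 1 (re-indexing a damaged index): is this the right direction? It is just my guess at using the `_reindex` API from Python with the `requests` library; the host and the index names (`logs-corrupted`, `logs-rebuilt`) are placeholders I made up.

```python
import requests

ES = "http://localhost:9200"  # placeholder cluster address

# Start a reindex from the damaged index into a fresh one as a background
# task, since the dataset is large.
resp = requests.post(
    f"{ES}/_reindex",
    params={"wait_for_completion": "false"},
    json={
        "source": {"index": "logs-corrupted"},  # placeholder source index
        "dest": {"index": "logs-rebuilt"},      # placeholder destination index
    },
)
task_id = resp.json()["task"]

# Poll the task API to see whether the reindex has finished.
done = requests.get(f"{ES}/_tasks/{task_id}").json()["completed"]
print("reindex finished:", done)
```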
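Sketch 2 (checking primary/replica divergence): my understanding is that Elasticsearch replicates each write to the replicas before acknowledging the request, so maybe there is no long-lived delta at all. Would comparing per-shard document counts from `_cat/shards` be a sensible way to watch for divergence? The index name is a placeholder.

```python
import requests

ES = "http://localhost:9200"  # placeholder cluster address

# List every shard copy with its document count; a primary (p) and the
# replica (r) of the same shard number should show the same count once
# replication has caught up.
rows = requests.get(
    f"{ES}/_cat/shards/my-index",  # placeholder index name
    params={"format": "json", "h": "index,shard,prirep,docs"},
).json()
for row in rows:
    print(row["index"], row["shard"], row["prirep"], row["docs"])
```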
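Sketch 3 (rough sizing math): since I don't know of an official sizing tool, here is the back-of-the-envelope calculation I would try. The retention period, replica count, overhead factor, per-node disk, and disk watermark are all numbers I picked purely for illustration.

```python
# Rough storage sizing with assumed values (all of these are my guesses).
daily_ingest_tb = 1.0    # raw data per day (later this grows toward 100)
retention_days = 30      # how long indices are kept (assumed)
replicas = 1             # one replica copy per primary (assumed)
overhead = 1.15          # ~15% indexing overhead (assumed)
disk_per_node_tb = 8.0   # usable disk per data node (assumed)
watermark = 0.75         # keep nodes below ~75% disk usage (assumed)

total_tb = daily_ingest_tb * retention_days * (1 + replicas) * overhead
nodes = total_tb / (disk_per_node_tb * watermark)
print(f"total storage: {total_tb:.1f} TB, data nodes needed: {nodes:.1f}")
```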
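Sketch 4 (compression): I read that stored fields are compressed with LZ4 by default, and that an index can be created with `index.codec` set to `best_compression` to use DEFLATE instead, with decompression handled transparently on read. Is this what that looks like in practice? The index name is made up.

```python
import requests

ES = "http://localhost:9200"  # placeholder cluster address

# Create an index whose stored fields use DEFLATE ("best_compression")
# instead of the default LZ4; reads decompress transparently.
resp = requests.put(
    f"{ES}/archive-2024",  # placeholder index name
    json={"settings": {"index.codec": "best_compression"}},
)
print(resp.json())
```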
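Sketch 5 (archiving): is snapshot/restore the archive mechanism I should be looking at? This assumes a shared-filesystem repository; the repository name and path are placeholders, and the path would have to be registered in `path.repo` on every node.

```python
import requests

ES = "http://localhost:9200"  # placeholder cluster address

# Register a shared-filesystem snapshot repository (location is assumed
# and must appear in path.repo on every node).
requests.put(
    f"{ES}/_snapshot/my_backup",
    json={"type": "fs", "settings": {"location": "/mnt/es_backups"}},
)

# Snapshot one index and wait until the snapshot completes.
requests.put(
    f"{ES}/_snapshot/my_backup/snapshot_1",
    params={"wait_for_completion": "true"},
    json={"indices": "archive-2024"},  # placeholder index name
)

# If the archived data is needed again, restore it from the snapshot.
requests.post(f"{ES}/_snapshot/my_backup/snapshot_1/_restore")
```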
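Sketch 6 (rolling upgrade): my understanding is that a rolling upgrade can avoid cluster-wide downtime by pausing replica allocation, upgrading one node at a time, and then re-enabling allocation. Is this the right sequence?

```python
import requests

ES = "http://localhost:9200"  # placeholder cluster address

# Before stopping a node, keep primaries allocated but pause replica
# reallocation so shards are not shuffled around during the restart.
requests.put(
    f"{ES}/_cluster/settings",
    json={"persistent": {"cluster.routing.allocation.enable": "primaries"}},
)

# ... upgrade and restart one node here, then repeat node by node ...

# Afterwards, reset allocation (null restores the default) and check that
# the cluster returns to green before touching the next node.
requests.put(
    f"{ES}/_cluster/settings",
    json={"persistent": {"cluster.routing.allocation.enable": None}},
)
print(requests.get(f"{ES}/_cluster/health").json()["status"])
```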
I realize I've asked a lot of questions, but I'm eager to learn.
Your help is appreciated.
-Saravanan