Hi, we are using Elasticsearch with a per-day index, an event rate of around 900 per second, and each event being 0.5–2 kB in size. The problem is that our Elasticsearch is occupying a lot of memory. Can anyone suggest how to tune it to reduce memory usage? FYI, the version we use is 2.2 and there is no scope for an upgrade in the near future.
How many shards do you have in your cluster, and how many nodes? Each shard requires a certain amount of memory, even if empty. A common problem is to have too many shards. Here is an article with more details:
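To answer the shard and node question, the `_cat` APIs (available in 2.2) are the quickest check. These commands are illustrative only, assuming Elasticsearch is listening on the default localhost:9200:

```shell
# List every shard with its index, state, and size
curl -s 'localhost:9200/_cat/shards?v'

# List the nodes with their heap and memory usage
curl -s 'localhost:9200/_cat/nodes?v'

# Total shard count (one shard per output line)
curl -s 'localhost:9200/_cat/shards' | wc -l
```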
You can also tune the size of the JVM's heap. Versions as old as 2.2 may become unstable if you do not give them enough heap; more recent versions are much more stable in memory-constrained environments.
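For reference, on the 2.x series the heap is sized with the ES_HEAP_SIZE environment variable (newer versions moved this into jvm.options). A minimal sketch; the path and the 4g value here are examples, not numbers from this thread:

```shell
# Set the JVM heap for Elasticsearch 2.x, e.g. in
# /etc/default/elasticsearch (deb) or /etc/sysconfig/elasticsearch (rpm),
# or exported before starting the process.
# Common guidance: at most ~50% of the machine's RAM, and below ~30.5 GB
# so the JVM can keep using compressed object pointers.
ES_HEAP_SIZE=4g
```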
We have one shard for each day and only one node. But is it normal for it to occupy 12 GB for such data?
Yes, but how many shards do you have in the cluster? One per day, but for how many days? With daily indices, 2 kB per event × 1 event per second works out to ~180 MB per shard, which is far smaller than the recommended size. If you dropped down to one shard per month, it would still only be ~5 GB per index.
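The back-of-envelope calculation above can be sketched as a small helper. The 2 kB event size is from the question; the 1 event/sec rate is the one used in this reply, so plug in your own ingest rate:

```python
def shard_size_bytes(events_per_second, event_size_bytes, days_per_shard):
    """Approximate raw size of one time-based shard."""
    seconds = 86_400 * days_per_shard  # seconds in the shard's time window
    return events_per_second * event_size_bytes * seconds

# Daily index, one shard: 1 ev/s * 2 kB * 86,400 s ~= 173 MB
daily = shard_size_bytes(1, 2_000, 1)

# Monthly index, one shard: ~30x that, ~5.2 GB
monthly = shard_size_bytes(1, 2_000, 30)

print(f"daily shard:   {daily / 1e6:.0f} MB")
print(f"monthly shard: {monthly / 1e9:.1f} GB")
```

Running it with your actual event rate makes it easy to see how far each shard is from the commonly recommended tens-of-GB range.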
12 GB of memory usage is not unusual, but it depends on your JVM settings.