I have ES 0.90.5 in my Logstash setup (Debian 6). The indices are rotated daily and each day's index amounts to about 12G. I'm planning to retain 20 days of indices, so at any time ES would be holding more or less 240G of index data. As of now the total index size is only 58G.
More often than not, when a search is issued (over, say, the last 5 days' indices) I see ES's "used heap memory" shoot up and then stay at roughly 500M below the "committed heap memory" of 4.6G. Is this because of caches building up? I'm not sure how the filter and field caches build up here. What I'm worried about is that when our support team starts using it there will be more simultaneous queries to ES (through Kibana 3), and this might exhaust the Java heap. My current configs are:
ES_HEAP_SIZE=4800m
MAX_OPEN_FILES=65535
MAX_LOCKED_MEMORY=unlimited
index.number_of_shards: 1
index.number_of_replicas: 0
index.translog.flush_threshold_ops: 2000
indices.memory.index_buffer_size: 40%
index.fielddata.cache: soft
indices.fielddata.cache.size: 2%
bootstrap.mlockall: true
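For reference, this is roughly how I've been watching the heap and cache numbers, via the nodes stats and indices stats APIs (just a quick check; I'm assuming the sections mentioned in the comments are all present in the 0.90.5 output):

# Node-level stats: heap usage is under jvm.mem (heap_used vs
# heap_committed); I think the filter cache and fielddata sizes
# also show up under the indices section
curl -s 'http://localhost:9200/_nodes/stats?pretty'

# Per-index stats; I'm assuming 0.90.5 reports filter_cache and
# fielddata memory here as well
curl -s 'http://localhost:9200/_stats?pretty'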
I have also set mappings for the indexes.
This is a single ES server, not a cluster. The total system RAM is 8G.
- How do the field and filter caches work? (A sketch of the kind of query I mean is below.)
- Are there any additional performance configs I should consider?
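To make the first question concrete, here is a rough sketch of the kind of query I expect Kibana 3 to issue against a day's logstash index (the index name and the @timestamp and level fields are just examples from my setup, and I've set "_cache": true on the range filter explicitly). My understanding is that the cached range filter would land in the filter cache, while the terms facet would pull that field's values into the field (fielddata) cache:

curl -s 'http://localhost:9200/logstash-2013.11.20/_search?pretty' -d '{
  "query": {
    "filtered": {
      "query": { "query_string": { "query": "*" } },
      "filter": {
        "range": {
          "@timestamp": { "from": "now-5d", "to": "now" },
          "_cache": true
        }
      }
    }
  },
  "facets": {
    "levels": { "terms": { "field": "level" } }
  },
  "size": 0
}'

Is that roughly how the two caches get populated?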
Please advise.
Thanks,