I am using Elasticsearch version 1.0.1.
I noticed that index sizes have dropped by about 40% from their initial size. Is there any
overhead to indexing or searching with this compression?
Is compression automatic now, or is this parameter still used?
"index.store.compress.stored" : "true"
Is this refresh interval too high? I index about 150,000 messages/sec at peak.
"index.refresh_interval" : "180s"
Here are a few other settings that I would like scrutinized.
"index.cache.field.type" : "resident"
"index.replication" : "async"
I index a lot of data but also support a lot of queries, so I have these
set (how I verify what was actually applied is shown after the list):
index.translog.flush_threshold_ops: 50000
indices.memory.index_buffer_size: 50%
index.cache.field.type: node
indices.cache.filter.size: 40%
index.fielddata.cache: node
indices.fielddata.cache.size: 40%
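To double-check which of these the cluster actually picked up, I look at the node and index settings (index name is again just an example):

  curl -s 'localhost:9200/_nodes/settings?pretty'
  curl -s 'localhost:9200/logs-2014.05.27/_settings?pretty'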