Optimizing configuration (help needed)

Hi, I would like to ask for your help with the following:

I have two servers on which Elasticsearch is already installed; they are
configured as follows:
cluster.name: essearchlocal

thrift.port: 9500

index.cache.field.type: soft
index.cache.field.max_size: 20000
index.cache.field.expire: 25m
index.refresh_interval: 30s
indices.fielddata.cache.size: 15%
indices.fielddata.cache.expire: 6h
indices.cache.filter.size: 15%
indices.cache.filter.expire: 6h
indices.memory.index_buffer_size: 70%

indices.store.throttle.type: merge
indices.store.throttle.max_bytes_per_sec: 25mb

indices.recovery.concurrent_streams: 3
indices.recovery.max_bytes_per_sec: 20mb
index.translog.flush_threshold_ops: 25000

index.merge.policy.max_merge_size: 500mb
index.store.compress.stored: true

threadpool.search.type: fixed
threadpool.search.size: 20
threadpool.search.queue_size: 1000

threadpool.bulk.type: fixed
threadpool.bulk.size: 20
threadpool.bulk.queue_size: 1000

threadpool.index.type: fixed
threadpool.index.size: 20
threadpool.index.queue_size: 1000
index.store.type: mmapfs

Cluster settings:

index.merge.policy.max_merged_segment: 1gb # it hasn't been tested yet

indices.fielddata.cache.size: 10% # it hasn't been tested yet

multicast.enabled: false # it hasn't been tested yet

discovery.zen.ping_timeout: 5s # it hasn't been tested yet

discovery.zen.minimum_master_nodes: 2 # it hasn't been tested yet

discovery.zen.ping.unicast.hosts: [""] # it hasn't been tested yet

cluster.routing.allocation.cluster_concurrent_rebalance: 2


Each server has 24 GB of physical memory, but we are running
Elasticsearch in standalone mode.
Two applications insert logs into Elasticsearch at the same time, and
16 GB of memory is allocated to the Elasticsearch heap for handling requests.
That leaves 8 GB of memory for the rest of the system.
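
For reference, a minimal sketch of how the heap is normally pinned on
0.90.x, assuming the stock startup script (the 16g value just mirrors the
setup described above):

# set the JVM heap before starting the node;
# ES_HEAP_SIZE pins -Xms and -Xmx to the same value
export ES_HEAP_SIZE=16g
./bin/elasticsearch

Setting bootstrap.mlockall: true in elasticsearch.yml additionally keeps
the heap from being swapped out, provided the ES user is allowed to lock
that much memory.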

Our problem is that when a search starts, the system's load average becomes
too high and the system
becomes unusable, and sometimes we get a SearchRequest exception
(java.lang.OutOfMemoryError: Java heap space).
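
Before tuning further, it may help to confirm what is actually filling the
heap when the OutOfMemoryError hits; on 0.90.x the node stats API should
break down cache and JVM usage (flags and the default host/port are
assumptions here):

curl 'http://localhost:9200/_nodes/stats?indices=true&jvm=true&pretty=true'

If fielddata or the filter cache dominates the output, the corresponding
cache size limits are the first place to look.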

Could you suggest a better configuration to solve these problems?

We can't provide more servers for Elasticsearch and can only run it on
one node, so we can't run it in cluster mode.

Would these problems be solved if we put more memory into the server where
Elasticsearch is running?

I also wanted to ask how I can make Elasticsearch recover itself more
quickly.
We have 150 indices (750 shards), and each index is about 20 GB.
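
To see how far a recovery has progressed, the cluster health API shows
initializing versus active shards:

curl 'http://localhost:9200/_cluster/health?pretty=true'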

I tried setting high values for the recovery options, but it had no effect:

indices.recovery.concurrent_streams: 20
indices.recovery.max_bytes_per_sec: 1500mb
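
For what it's worth, on 0.90.x these recovery throttles should also be
adjustable at runtime through the cluster update settings API, so you can
experiment without restarting (the values below are only placeholders):

curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "indices.recovery.concurrent_streams": 6,
    "indices.recovery.max_bytes_per_sec": "100mb"
  }
}'

Note, though, that indices.recovery.* throttles peer (node-to-node)
recovery; on a single node, startup recovery comes from the local gateway
on disk, which may be why raising these values had no visible effect.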

Our server configuration:

  • Intel(R) Xeon(R) CPU E5345 @ 2.33 GHz (8 cores)
  • 24 GB DDR2 memory
  • 4 TB SSHD (hybrid)

(Elasticsearch version: 0.90.10)


Sounds like you need more nodes, which isn't easy to work around.
You can try increasing the RAM to 64GB and then assigning 32GB to ES, but
above that you start losing ground to GC.
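
For context, that ~32GB ceiling comes from compressed object pointers:
above roughly that heap size the JVM falls back to 64-bit pointers, so the
extra heap buys less than it seems and GC pauses grow. The node info API
should report the heap a node actually received (default host/port
assumed):

curl 'http://localhost:9200/_nodes?jvm=true&pretty=true'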

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com

