Hi,

I am writing a real-time analytics tool using Kafka, Storm, and Elasticsearch,
and I need Elasticsearch to be write-optimized for about 80K inserts/sec
on a 7-machine cluster.

For high write throughput I am using bulk UDP to batch-insert my
documents (each doc is about 300 bytes, and only 4 fields are indexed), and I have set
index.store.type: niofs for the index store (mmapfs drove io-util to 100%), but it still
does not seem good enough for my use case. All I need is write performance; does anybody
have an idea for this problem?

Here is my config:
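For reference, a quick back-of-envelope calculation of the target load described above (assuming the stated 80K docs/sec, ~300 bytes per doc, and 7 nodes):

```python
# Rough sizing for the workload in the question:
# 80,000 docs/sec at ~300 bytes each, spread over 7 nodes.
docs_per_sec = 80_000
doc_size_bytes = 300
nodes = 7

# Total raw ingest bandwidth before replication/indexing overhead.
total_mb_per_sec = docs_per_sec * doc_size_bytes / (1024 * 1024)

# Average docs/sec each node must absorb if load is evenly balanced.
docs_per_node = docs_per_sec / nodes

print(f"cluster ingest: ~{total_mb_per_sec:.1f} MB/s")
print(f"per node: ~{docs_per_node:,.0f} docs/s")
```

So the cluster only needs to absorb roughly 23 MB/s of raw document data; the io-util saturation is more likely coming from segment merging and refresh amplification than from the raw payload size.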
bootstrap.mlockall: true
threadpool.bulk.type: fixed
threadpool.bulk.size: 30
threadpool.bulk.queue_size: 500
index.refresh_interval: 50s
indices.store.throttle.type: merge
indices.store.throttle.max_bytes_per_sec: 80mb
indices.memory.index_buffer_size: 30%
indices.ttl.bulk_size: 100000
indices.memory.min_shard_index_buffer_size: 200mb
bulk.udp.enabled: true
bulk.udp.bulk_actions: 10000
bulk.udp.bulk_size: 20mb
bulk.udp.flush_interval: 10s
bulk.udp.concurrent_requests: 4000
bulk.udp.receive_buffer_size: 10mb
index.cache.field.expire: 10m
index.cache.field.max_size: 500000
index.cache.field.type: soft
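If bulk UDP turns out to be lossy under load, the same batching can be done over the HTTP _bulk endpoint. A minimal sketch of assembling an NDJSON bulk body follows; the index name "events", type "event", and the field names are placeholders, not taken from the original post:

```python
import json

def build_bulk_body(docs, index="events", doc_type="event"):
    """Assemble an NDJSON body for Elasticsearch's _bulk endpoint.

    Each document is preceded by an action line; the whole body
    must end with a trailing newline or _bulk rejects it.
    """
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

# Example: one small (~300 byte class) document with 4 fields.
docs = [{"ts": 1, "host": "a", "metric": "cpu", "value": 0.42}]
body = build_bulk_body(docs)
print(body)
```

Batching on the client (e.g. 5–10K actions or 10–20 MB per request, matching the bulk_actions/bulk_size settings above) and sending bulks concurrently from several workers usually gives throughput comparable to UDP without silent drops.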
--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/c444a899-dcb3-442a-a33e-2b987fcad2e6%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.