Write throughput test on a 9-node Elasticsearch cluster with high-spec nodes

With Logstash and Elasticsearch we want to sustain a write rate of 20k msg/s,
but the write performance is not as expected. The story is as below:

HW:

9-node cluster, each node with:
2 sockets, 32 CPU threads
128G RAM
1T disks (RAID 1)

SW:
RHEL6.3

logstash 1.1.10
redis as channel
ES 0.20.5

ES memory limited to 65G:

export ES_MIN_MEM=65g
export ES_MAX_MEM=65g

Logstash index settings:

shards: 9
replication: 1
_all and _source are disabled

Redis on 7 of the 9 nodes;
each box has one Logstash instance reading from its own box's Redis channel
and outputting to localhost's elasticsearch_http (I also tried the elasticsearch output).
I adjusted the redis input's
batch_count => 2000
threads => 5
and the output's
flush_size => 10000

to make it fast.

It can handle the message rush at first, but once the index grows big (one
index per day) and I use Kibana to browse logs (e.g. choosing a long time
window), the ES write performance drops immediately.
Messages accumulate inside the Redis channel, and Logstash can never finish
draining it.
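
To quantify that backlog, the length of the Redis list Logstash reads from can be checked directly. A quick sketch; the key name `logstash` is hypothetical, so use whatever key your redis output/input is actually configured with:

```shell
# Show the current number of queued events in the Redis channel.
# "logstash" is a placeholder key name; check your logstash config for the real one.
redis-cli llen logstash
```

Watching this number over time makes it obvious whether ES is keeping up or falling behind.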

Any suggestions, guys?

I don't believe ES is incapable of handling data at this volume, but I don't
know what the next step is to keep the write speed high.

Thanks!
-Ryan

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


Just to share another error (below) which I always encounter when using the
elasticsearch output:

:message=>"Failed to index an event, will retry",
:exception=>org.elasticsearch.transport.RemoteTransportException:

This makes Logstash refuse to work (not always, but often enough).
I encountered it with Logstash 1.1.9 paired with ES 0.20.2, then upgraded to
Logstash 1.1.10 paired with ES 0.20.5, and the error keeps occurring.

So I have to use elasticsearch_http as the output; it's slow, but at least
it works.


The way to scale your write capacity is by increasing your shard count.
More shards mean more concurrency.

You mentioned you have 9 nodes and 9 shards, so I assume that's one shard
per node? If each node has a 32-core CPU, then running only one shard
doesn't fully utilize it. Try running 10 shards on each node for a total of
90 shards. This will break the index into smaller pieces and allow for
better throughput.

The general idea is to run some performance tests to identify the capacity
of a single node with your given hardware configuration (i.e., what's the
optimal number of shards to achieve your target throughput).

Since you're talking about logging, you'll also want to roll the logs into
a new index every so often.
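
As a sketch of the suggestion above, a daily index with 10 primaries per node on a 9-node cluster could be pre-created like this (the index name and exact shard count are illustrative, not tested values):

```shell
# Pre-create tomorrow's daily index with 90 primary shards (10 per node x 9 nodes).
curl -XPUT 'http://localhost:9200/logstash-2013.04.27' -d '{
  "settings" : {
    "number_of_shards" : 90,
    "number_of_replicas" : 1
  }
}'
```

The right shard count is whatever your single-node performance tests say it is; 90 is only the arithmetic from the paragraph above.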

-Eric


To add to 'egaumer': rolling the logs into a new index means rotating your
indices (based on size, or say every day or every week) while keeping an
up-to-date alias pointing at the 'current' index, so that the rotation is
transparent to users. This also lets you change the number of primary shards
for new indices as you gain a better understanding of your performance needs
(so you are not stuck with your initial choice of primary count).
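
A sketch of that rotation using the aliases API, with hypothetical index and alias names; the swap is atomic, so clients pointed at the alias never see a gap:

```shell
# Atomically move the "logs-current" alias from yesterday's index to today's.
curl -XPOST 'http://localhost:9200/_aliases' -d '{
  "actions" : [
    { "remove" : { "index" : "logstash-2013.04.26", "alias" : "logs-current" } },
    { "add"    : { "index" : "logstash-2013.04.27", "alias" : "logs-current" } }
  ]
}'
```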


Start with the slowest part of your system to get the most improvement.
Have you taken I/O measurements of your hardware? Can you provide numbers?

Can you explain how you arrived at the numbers in the config you describe?
How is the "1T disk on RAID 1" organized? Just 2 disks? What does "20k msg/s"
amount to, i.e. do you know the average size of a message? And what are your
expectations for how ES should perform?

Ramping up an oversized heap to begin with is most peculiar, but I hope you
know what you are doing. Note that a filled heap of that size can become a
challenge in many respects, and you must watch out for them. You should
enable GC logging in ES so you can understand the GC messages and learn why
ES performs "not as expected". Mostly it will either starve on I/O waits or
be overwhelmed by GC pauses that slowly bring the JVM to a halt and
throughput to zero. That is not ES's fault; it's just a badly configured
JVM, or a badly configured disk subsystem.

My recommendation is to switch to the latest Java 7 with the G1 GC for
low-latency collection, start with a heap of around 8G, and leave the rest
of the RAM for OS activity. See whether increasing the heap helps your
workload, probably in 4G or 8G steps (16G, 24G, 32G ...); you have to find
the number where performance is best. Also, the ES segment-merging config
should be adjusted to handle larger merges (the default is 5G).
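
A minimal sketch of that starting point, using the same ES_MIN_MEM/ES_MAX_MEM variables shown earlier in the thread (the G1 flag is a standard HotSpot option on Java 7; treat the values as starting points to measure from, not tuned numbers):

```shell
# Start small (8G) and grow in measured steps, instead of 65G up front.
export ES_MIN_MEM=8g
export ES_MAX_MEM=8g
# Try the G1 collector on Java 7 for lower GC pause times.
export JAVA_OPTS="$JAVA_OPTS -XX:+UseG1GC"
```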

By turning these knobs (and maybe more), you will develop an intuition for
balancing your system, so that your input data stream gets indexed
immediately even after ES has been running for a long time. Each system
behaves differently; do not trust other people's numbers, take measurements
and run tests yourself.

Of course, you should follow the Logstash community's advice on best
practices for organizing the indexing (rotating indices, etc.).

Jörg


Some thoughts to add to what egaumer said:

  • Heap sizes larger than ~30GB are penalized because they are no longer
    capable of using compressed pointers. This means that heaps over ~30GB
    use more memory per object than an equivalent sub-30GB heap. The usual
    recommendation is a single Elasticsearch node per machine, since ES is
    capable of fully exploiting the hardware. However, since you have
    seriously beefy RAM (128GB per machine), it may be wise to run two ES
    nodes concurrently. Each node would have a 30GB heap (with 30GB per node
    left over for the OS file cache) while splitting CPU and disk I/O. It is
    definitely something to consider, although it may not be directly
    related to your throughput problems.
  • Play with the bulk size in Logstash. I'm unsure of your document size,
    but 10,000 may be too large for bulk requests. 100-200MB per bulk is a
    relatively standard number (anywhere from 1000-5000 documents, roughly,
    depending on document size). If the bulk size is too large, you end up
    consuming more memory than necessary, since the requests block anyway,
    and it can lead to memory fragmentation.
  • If your use case is write-heavy, read-rarely, consider adjusting your
    settings accordingly: increase the refresh_interval (60s instead of 1s),
    enable a sane merge-throttling level (somewhere around 2-4MB/s,
    depending on your I/O) so that segment merges don't make your nodes
    suffer, and perhaps increase the indexing buffer (it defaults to 10% of
    the heap; bumping it to 20% may be useful).
  • Kibana uses facets heavily. Depending on your data, facets can be very
    expensive in both speed and memory. Not really a pro or con, just
    something to be aware of.
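
A hedged sketch of those write-heavy adjustments against a live index (the index name is illustrative; the setting names are the 0.20-era API as I understand it, and the values should be validated against your own I/O):

```shell
# Lengthen the refresh interval so near-real-time refreshes
# don't interrupt bulk indexing every second.
curl -XPUT 'http://localhost:9200/logstash-2013.04.27/_settings' -d '{
  "index" : { "refresh_interval" : "60s" }
}'

# Throttle merge I/O so large merges don't starve indexing
# (the right value depends on your disks).
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent" : {
    "indices.store.throttle.type" : "merge",
    "indices.store.throttle.max_bytes_per_sec" : "2mb"
  }
}'

# The indexing buffer is a node-level setting, configured in elasticsearch.yml:
#   indices.memory.index_buffer_size: 20%
```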

-Zach


Thanks Jörg for your advice,

  • 4 x 600G SAS disks in RAID 10 (HW RAID), with a RHEL6.3 ext4 filesystem
    on top.

  • I'm using the latest Sun JDK (build 1.7.0_21-b11), but I'm not sure
    about that "G1 GC"; I'm a newbie to Java. Any more information on that?

  • Any suggestions for ES's logging? I'm kind of lost inside logging.yml :(


Thanks Eric!

I did increase the shard count to 100 at first, but after learning that each
shard is a Lucene instance, I'm hesitant to keep so many shards. Will that
have side effects, such as merge-related options needing adjustment as the
shard count increases?

-Ryan


Have you checked RAID 10 performance? My experience is that RAID 10 writes
are weak, but you need high write I/O capacity for ES log indexing. With 4
disks you will not get faster than a single disk (around 100 MB/sec). With
RAID 0 you get physical writes of around 400 MB/sec, and with SSDs even more
(depending on the SSD controller, 800 MB/sec). Note that with ES replicas
you already have redundancy, so if RAID 0 fails, a node will fail, but ES
will continue with the rest of the nodes. With a RAID 10 failure, the node
continues in degraded mode, which may or may not influence the whole
cluster's I/O performance.

ES JVM GC logging is like any JVM GC logging; see the settings and comments
in $ES_HOME/bin/elasticsearch.in.sh.
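
For reference, the usual HotSpot GC-logging switches look like this (standard Java 7 flags; the log path is only an example). The idea is to add them to JAVA_OPTS in elasticsearch.in.sh:

```shell
# Standard HotSpot GC-logging flags; /var/log/elasticsearch/gc.log is a placeholder path.
export JAVA_OPTS="$JAVA_OPTS -verbose:gc \
  -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/elasticsearch/gc.log"
```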

If you run into trouble with GC (you will see GC alerts in the logs), you
might consider G1 GC instead of the default CMS GC at a later time. Without
diagnostics of real incidents, I can't give recommendations.

Jörg


Thanks Zach. Following the suggestions in this thread, I did the following:

  • I changed the heap size (both min and max) from 65G to 30G.
  • I changed the Logstash output from elasticsearch_http to elasticsearch,
    which doesn't need a bulk size configured, and in my tests it's faster
    than the HTTP way.
  • I put in the settings below; let me test whether this helps.
  • BTW, at first I set max_merged_segment to 10g. As usual, ES is fast
    enough to handle the log rush at first, but once the index grows to 70G
    (this index will grow to 350G in 24h), the ES write speed drops
    immediately. After a while the speed recovers, but it can never finish
    the log events queued inside Redis. It would be great if I could see
    what ES is busy with when the index reaches 70G; any suggestion for ES
    logging configuration?

"settings" : {
  "index.analysis.analyzer.default.type" : "simple",
  "index.refresh_interval" : "60s",
  "index.query.default_field" : "@message",
  "index.auto_expand_replicas" : false,
  "index.merge.policy.max_merged_segment" : "60g",
  "number_of_shards" : 50,
  "number_of_replicas" : 1
},
"mappings" : {
  "intdnstype" : {
    "_all" : { "enabled" : false },
    "_source" : { "compress" : false },


In all likelihood, that 70G "threshold" is the result of one or more large
merges occurring. Small merges are usually quick and unnoticed, but once
enough large segments have accumulated, a merge of several "older
generation" segments can result in substantial disk and CPU demand. The
Segment Stats API
(http://www.elasticsearch.org/guide/reference/api/admin-indices-segments/)
can give you more insight there. Merge-policy tuning is complicated and very
advanced; I don't know enough to give good advice on the settings.

Generally, indexing throughput is limited by Disk I/O. Keep an eye on your
disk subsystem to see if it is being saturated (iotop, etc).

On a similar note, consider re-enabling compression on the _source field.
Since Disk I/O is usually the bottleneck with high-throughput indexing,
the CPU overhead of compression is minimal compared to the savings in
writing fewer bytes to disk. I would definitely re-enable compression and
test to see if it improves performance.
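
Both suggestions can be tried directly; the index and type names below mirror the ones earlier in the thread and are otherwise assumptions, and the mapping change only affects newly written documents:

```shell
# Inspect per-shard segment counts and sizes to see merge activity.
curl -XGET 'http://localhost:9200/logstash-2013.04.27/_segments?pretty'

# Re-enable _source compression to trade a little CPU for fewer bytes on disk.
curl -XPUT 'http://localhost:9200/logstash-2013.04.27/intdnstype/_mapping' -d '{
  "intdnstype" : {
    "_source" : { "compress" : true }
  }
}'
```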

-Zach

On Saturday, April 27, 2013 11:07:04 AM UTC+2, Ryan Qian wrote:

Thanks Zach, follow this threads guys suggestion, I did follow:

  • I just change the HEAP size (which min and max ) from 65G to 30G
  • change from* elasticssearch_http* to* elasticsearch *output way of
    logstash, which don't need configure the bulk size, and as I test it's fast
    than http way.
  • put the setting as bellow, let me test whether this can help.
  • BTW, at first I set the max_merged_segment to 10g. as usual the ES
    speed is ok to handle the log rush in, but as the index grow to 70G (this
    index will grow to 350G in 24hr), then the ES write in speed drop
    immediately. after a while the speed back in position, but it can never
    finished the log events which inside the redis queue. if I can see
    what's the ES busy for when the index reach 70G
    will be great, any suggestion
    for ES logging configuration?
    * *

"settings" : {
"index.analysis.analyzer.default.type": "simple",
"index.refresh_interval": "60s",
"index.query.default_field" : "@message",
"index.auto_expand_replicas": false,
"index.merge.policy.max_merged_segment": "60g",
"number_of_shards" : 50,
"number_of_replicas" : 1
},
"mappings" : {
"intdnstype" : {
"_all": { "enabled": false },
"_source": { "compress": false },

On Saturday, April 27, 2013 6:36:19 AM UTC+8, Zachary Tong wrote:

Some thoughts to add to what egaumer said:

  • Heap sizes larger than ~30Gb are penalized because they are no
    longer capable of utilizing compressed pointers. This means that heaps
    over ~30gb will use more memory per object than the equivalent sub-30gb
    heap. The usual recommendation is to use a single ElasticSearch node per
    machine since ES is capable of fully exploiting the hardware. However,
    since you have seriously beefy RAM (128Gb per machine), it may be wise to
    run two ES nodes concurrently. Each node would have 30Gb heap (and 30Gb
    per node leftover for OS file cache) while splitting CPU and Disk I/O.
    Would definitely be something to consider, although may not directly be
    related to your throughput problems.
  • Play with the bulk size in Logstash. I'm unsure of your document
    size, but 10,000 may be too large for bulk requests. 100-200mb in bulk is
    a relatively standard number (anywhere from 1000-5000 documents, roughly,
    depending on document size). If the bulk size is too large, you end up
    consuming more memory than necessary since the requests block anyhow, and
    can lead to memory fragmentation.
  • If your use-case is write-heavy, read-rarely, consider adjusting
    your settings accordingly. Increase the refresh_interval time period (60s
    instead of 1s), enable a sane merge-throttling level (something around
    2-4mb/s, depending on your I/O) so that segment merges don't make your node
    suffer, perhaps increase the indexing buffer (defaults to 10% of the heap,
    may be useful to bump to 20%).
  • Kibana uses facets heavily. Depending on your data, facets can be
    very expensive in both speed and memory. Not really a pro or con, just
    something to be aware of.
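
To turn the bulk-sizing guidance above into a concrete flush_size, a rough back-of-envelope helps; both the target payload size and the average event size below are assumptions you should replace with measured numbers:

```python
# Derive a Logstash flush_size from a target bulk payload size and a
# measured average event size. Both inputs here are assumptions.
def suggest_flush_size(avg_doc_bytes, target_bulk_bytes=100 * 1024 * 1024):
    return max(1, target_bulk_bytes // avg_doc_bytes)

# ~50 KB events with a 100 MB target -> bulks of 2048 docs, inside the
# 1000-5000 range mentioned above.
print(suggest_flush_size(50 * 1024))  # 2048
```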

-Zach

On Friday, April 26, 2013 5:00:00 PM UTC+2, Ryan Qian wrote:


--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

This is helpful Jörg, thanks. Especially the pointer to the
elasticsearch.in.sh script, which made me realize I hadn't set the JVM
heap size correctly! My ES was still running with the default 1024m
heap, and that was the root cause of the slow performance. What a silly
mistake. It also taught me a lesson about the JVM's Xms and Xmx
options: they show up every time I run "ps -ef", but I never took them
seriously until I looked inside that in.sh script.

The second silly mistake: I was using the latest 1.7 Sun JDK, but ES
recommends 1.6 (it says so right on the Installation page).

After correcting those two mistakes, our ES cluster is now running
smoothly and sustainably under continuous high write volume.

Thanks!

On Sat, Apr 27, 2013 at 4:53 PM, Jörg Prante joergprante@gmail.com wrote:

Have you checked RAID 10 performance? My experience is that RAID 10 writes
are weak but you need high write I/O capacity for ES log indexing. With 4
disks you will not get faster than a single disk (which is around 100
MB/sec). With RAID 0 you get physical writes around 400 MB/sec and with SSD
even more (depending on SSD controller 800 MB/sec). Note that with ES
replica level, you already have redundancy, so if RAID 0 fails, a node will
fail, but ES will continue with the rest of the nodes. With RAID 10
failure, the node continues but in degraded mode, which may or may not
influence the whole cluster I/O performance.
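
Jörg's throughput figures above amount to simple multiplication; a tiny sketch, using his assumed 100 MB/s per spinning disk:

```python
# Back-of-envelope sequential write throughput, per the figures above.
# 100 MB/s per disk is Jörg's assumption, not a measured number.
PER_DISK_MBPS = 100

def raid0_write_mbps(disks, per_disk=PER_DISK_MBPS):
    # RAID 0 stripes writes across all disks, so throughput scales
    # roughly linearly with the disk count.
    return disks * per_disk

print(raid0_write_mbps(4))  # 400, matching the ~400 MB/sec quoted
```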

ES JVM GC logging is like any JVM GC logging, see settings and comments
in $ES_HOME/bin/elasticsearch.in.sh

If you run into trouble with GC - you will see in the logs GC alerts - you
might consider G1 GC instead of default CMS GC at a later time. Without
diagnostics of real incidents, I can't give recommendations.

Jörg

On 27.04.13 05:16, Ryan Qian wrote:

Thanks Jörg for your advice,

  • 4 x 600G SAS disks with RAID 10 (HW raid), RHEL6.3 ext4 fs on them.

  • I'm using the latest Sun JDK (build 1.7.0_21-b11), but I'm not sure
    about that "G1 GC"; I'm a newbie to Java, any more information on
    it?

  • Any suggestions for ES's logging? I'm kind of lost inside
    logging.yml :-(


--
Regards,
Qian Yongchao ryan.qian@gmail.com


Thanks Zach, this reply is helpful. In my other reply to Jörg, I
detailed the reason my ES cluster was slow for high-volume writes: two
silly mistakes, the heap size and the JDK version.

And yes, I'm tweaking the merge threshold down (to 10g) after googling
what that merge action actually does in Lucene.
About I/O: the hardware RAID 10 of 4 SAS disks is actually doing well
in tests on these boxes; at 90% util it reaches a 260M write speed, so
I think it can cope with ES's load, especially since I have 128G of
memory there to help the FS cache. An SSD would be great, but currently
I can't get one.

I will try re-enabling compression for the _source field and see
whether it works better.

Thanks again for all you guys' kind help! The mistake was silly, but
before I found it, I did learn a lot while searching for other
solutions.

Regards, and have a good day!

On Sun, Apr 28, 2013 at 8:48 PM, Zachary Tong zacharyjtong@gmail.com wrote:

In all likelihood, that 70G "threshold" is the result of one or more
large merges occurring. Small merges are usually quick and unnoticed,
but once enough large segments have accumulated, a merge of several
"older generation" segments can result in substantial disk and CPU
demand. The Segment Stats API
(http://www.elasticsearch.org/guide/reference/api/admin-indices-segments/)
can give you more insight there. Merge policy tuning is complicated and
very advanced... I don't know enough to give good advice on settings.

Generally, indexing throughput is limited by Disk I/O. Keep an eye on
your disk subsystem to see if it is being saturated (iotop, etc).
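
As a sketch of what to look for in that API's output, the helper below ranks segments by on-disk size. The response nesting used here (indices, then shards, then shard copies, then segments) is an assumption based on the 0.20-era `_segments` response, so adjust the keys to what your cluster actually returns:

```python
# Rank Lucene segments by size from an _segments API response.
# The nesting below is an assumed sketch, not a guaranteed schema.
def largest_segments(resp, top=3):
    found = []
    for index_name, index in resp.get("indices", {}).items():
        for shard_copies in index["shards"].values():
            for copy in shard_copies:
                for seg_name, seg in copy["segments"].items():
                    found.append((seg["size_in_bytes"], index_name, seg_name))
    return sorted(found, reverse=True)[:top]

# Tiny synthetic response showing the shape this helper expects.
sample = {"indices": {"logstash-2013.04.27": {"shards": {"0": [
    {"segments": {"_0": {"size_in_bytes": 70 * 2**30},
                  "_1": {"size_in_bytes": 5 * 2**20}}}]}}}}
print(largest_segments(sample, top=1)[0][2])  # prints _0, the 70G segment
```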


-Zach


--
Regards,
Qian Yongchao ryan.qian@gmail.com
