Index creation performance


(Vahid) #1

Hi all,
I'm running some benchmarks to measure the performance of ES.

1: After storing about 1,200,000 docs (~25 GB), indexing performance starts
to decrease, so I tried creating a new index after every 1,200,000 docs.
Each index is configured with 100 shards. Now, after creating about 10
indexes (1,000 shards), index creation itself slows down, so the indexing
threads end up waiting for the new index.

2: When I run the application just to create the indexes, index creation
performance is fine, but when it stores data and creates indexes at the same
time, I see the performance problem.

I'd like to know whether this is the right approach and, if so, how I could
solve this problem.
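For reference, the rollover approach above looks roughly like this (the index names and the local REST endpoint are illustrative, not my exact setup):

```shell
# Illustrative sketch: pre-create a fresh 100-shard index per batch of docs.
# "docs-1", "docs-2", ... are hypothetical names.
for i in 1 2 3; do
  curl -XPUT "localhost:9200/docs-$i" -d '{
    "settings": {
      "number_of_shards": 100,
      "number_of_replicas": 0
    }
  }'
done
# Indexing threads then write to the newest index, e.g.:
# curl -XPOST "localhost:9200/docs-3/doc" -d '{"field": "value"}'
```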
Thanks in advance,
Vahid

--


(jdd) #2

Vahid,
100 shards per index is too many. How about trying one shard per node in
the cluster?

Thanks,
Jaideep

--
Jaideep Dhok

--


(Vahid) #3

Hi Jaideep, thanks for your reply.
I'm running one ES instance on a single node; first I want to establish the
approach that gives maximum performance, then I will apply that
configuration to the cluster.
With an index of only one shard, data indexing performance soon starts to
drop. In addition, ES index creation gets very slow after creating about
2,000 one-shard indexes.
In the last tests I ran, one index with 100 shards gave me the best
performance for indexing 2.4 M records (doc size ~22 KB, at 2,550 rec/s),
but as more data is indexed the performance keeps decreasing (the final
record count will be 1 billion).

Thanks,
Vahid


--


(David Pilato) #4

The problem is that 100 shards on a single node (100 Lucene instances per
node) will give you high IO load. As your index grows (more and more docs),
read and write operations will cost you more.

I'm pretty sure that if you run the same test on 10 nodes (10 shards per
node, with replica=0), you will get better results.

What I want to say here is that it's really hard to draw conclusions from
what you can see on a single node. To tune ES, I recommend doing it on the
target platform.
It doesn't scale linearly.
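For example, a sketch of creating the index with an explicit, smaller shard count (assuming the usual REST API on a local node):

```shell
# One index, 10 shards, no replicas -- one shard per node on a 10-node cluster.
curl -XPUT "localhost:9200/myindex" -d '{
  "settings": {
    "number_of_shards": 10,
    "number_of_replicas": 0
  }
}'
```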

David.


--
David Pilato
http://www.scrutmydocs.org/
http://dev.david.pilato.fr/
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

--


(Vahid) #5

Thank you David,
By 10 nodes do you mean 10 ES instances on 10 machines? If so, the results
would certainly be much better (more hardware resources). For me the problem
is that I have no baseline measurement of ES performance.
At first we ran the tests on the cluster, but in a cluster there are many
factors affecting performance (networking, ...); performance was not
acceptable and finding the bottlenecks was difficult. So I decided to take
some measurements on a single node to validate ES and our indexing approach,
and then run the tests on the cluster (and I do agree with you that it
doesn't scale linearly).

I want to find a way to keep the indexing performance from degrading, so I
need to know how many docs each shard can store without performance
problems, and what I should do once that capacity is reached.

Vahid.


--


(Vahid) #6

No reply?


--


(David Pilato) #7

Hey Vahid,

Don't expect any ETA on the mailing list...

Yes, that's what I meant.
What you can do is inject into a single node (1 shard, 0 replicas) and see
how many docs a single node of yours can handle.
Then perform the same test with 2 shards, 0 replicas, on the same node.

Then add a second node and perform the same test with 2 shards, 0 replicas.
Then perform the same test with 4 shards, 0 replicas.

I think you will be able to find the right numbers for your hardware.

BTW, give the wonderful Bigdesk plugin a try. It will help you find some
clues about IO, memory, ...
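The test ladder above could be scripted roughly like this (ingest.sh is a placeholder for whatever bulk loader you use, not a real tool):

```shell
# Hedged sketch: recreate the index with 1, 2, then 4 shards (0 replicas)
# and time the same ingest run against each configuration.
for shards in 1 2 4; do
  curl -XDELETE "localhost:9200/bench" >/dev/null 2>&1
  curl -XPUT "localhost:9200/bench" -d "{
    \"settings\": {
      \"number_of_shards\": $shards,
      \"number_of_replicas\": 0
    }
  }"
  time ./ingest.sh bench    # index the same sample of docs every pass
done
```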

David.


--


(Vahid) #8

Thank you David,

@Don't expect any ETA on the mailing list...
Yes, you are definitely right; I'm just a little bit under pressure...

That's exactly the path I've taken, and these are my findings from running
some tests on a machine with this config (32 GB RAM, 8 cores, 143 MB/s HDD
speed):
16 GB RAM for ES (Xmx=Xms), bootstrap.mlockall: true, added JVM options:
-server, -XX:+AggressiveOpts

For adding 500,000 docs the results were:
shards: 5, 1,684 docs/s
shards: 50, 2,336 docs/s
shards: 100, 2,558 docs/s
I ran tests with more shards too, but 100 was the best for me, so I decided
to fix the shard count at 100 and experiment with the other parameters.

What happened with more data: for 2,400,000 records the average speed was
1,600 docs/s, so I started creating a new index after every 500,000 records
and the indexing speed went back up to 2,558 docs/s.
Now the problem is that the final data set is much larger than these numbers,
and after about 20 indexes (say 2,000 shards) ES slows down so much that in
some cases creating a new index takes 10 minutes.
That made me think I had hit the maximum speed and capacity of ES and must
add more hardware, so I asked here to make sure.
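For completeness, the node setup described above amounts to roughly this (variable names per the 0.19-era startup scripts; treat the exact keys as assumptions):

```shell
# 16 GB heap, min = max so the heap never resizes at runtime.
export ES_MIN_MEM=16g
export ES_MAX_MEM=16g
export ES_JAVA_OPTS="-server -XX:+AggressiveOpts"

# Lock the heap into RAM so the OS never swaps it out.
cat >> config/elasticsearch.yml <<'EOF'
bootstrap.mlockall: true
EOF

bin/elasticsearch
```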

Best,
Vahid


--


(BillyEm) #9

Vahid, I think you can benefit from some alternate advice. First, it is
rarely necessary to start tuning on the target production machine: the
process is fairly well defined, and, as documented in 'A Picture of Search',
applying queuing theory to capacity planning can reveal service times that
extrapolate exceedingly well to larger machine environments. That said:

I suggest you review Mike's blog posts on merge policy and indexing
performance at:

http://blog.mikemccandless.com/2011/02/visualizing-lucenes-segment-merges.html

The following is just a snippet from one of several related posts:

"In fact, from the viewpoint of the MergePolicy, this is really a game
against a sneaky opponent who randomly makes sudden changes to the index,
such as flushing new segments or applying new deletions. If the opponent is
well behaved, it'll add equal sized, large segments, which are easy to
merge well, as was the case in the above video; but that's a really easy
game, like playing tic-tac-toe against a 3 year old."
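As a practical starting point, merge settings of that ES generation could be supplied at index-creation time; a hedged sketch (the exact setting keys vary by version, so check the docs for yours):

```shell
# Illustrative only: choose a merge policy and a larger merge factor
# (fewer, bigger merges) when creating the index.
curl -XPUT "localhost:9200/myindex" -d '{
  "settings": {
    "index.merge.policy.type": "log_byte_size",
    "index.merge.policy.merge_factor": 30
  }
}'
```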

regards,
b


--


(Vahid) #10

Thanks BillyEm for your answer and the good link,

A few days ago I tried playing with merge.factor and the merge policy, but I
couldn't get good results; I still need to experiment more with those.
But when I create a new index, shouldn't everything be back at the starting
point?!
From my understanding, the merge policy and similar factors affect indexing
and search speed; do they also affect index creation (NOT indexing the data)
performance?

Best,
Vahid


--


(Jörg Prante) #11

Hi Vahid,

Index creation by itself is very fast. What you are seeing is the overhead
of 100 shards. Just use a small number of shards (the default is 5, which is
OK for one node). You can also speed up indexing by switching off real-time
indexing (i.e. the automatic index refresh).
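A sketch of what switching off real-time indexing can look like in practice: disable the automatic refresh during the bulk load and restore it afterwards (the index name and interval values are illustrative):

```shell
# Disable the automatic refresh for the duration of the bulk load.
curl -XPUT "localhost:9200/myindex/_settings" -d '{
  "index": { "refresh_interval": "-1" }
}'

# ... run the bulk load here ...

# Restore the refresh so new docs become searchable again.
curl -XPUT "localhost:9200/myindex/_settings" -d '{
  "index": { "refresh_interval": "1s" }
}'
```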

Best regards

Jörg

On Tuesday, September 25, 2012 4:06:50 PM UTC+2, Vahid wrote:

Thanks BillyEm for your answer and good link,

A few days ago I tried to play with the merge.factor and merge policy but
I couldn't get good result, however I need to do more with those.
But when I create a new index everything should be same as starting
point?!
From my understanding merge policy and such factors affect the indexing
and search speed , do they affect the index creation(NOT indexing the data)
performance?

Best,
Vahid



(anujsharma) #12

Hi,

I am creating an index in ES with 8 GB RAM, 1 node and 4 shards, and I am using the bulk API to index my data.
I send 200 docs per batch, 2000 batches in total, but I ran into performance issues: I get a Java heap space exception once about 20k docs have been indexed.

Can anyone suggest how I can solve this issue?

Thanks in advance
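
A heap space exception during a bulk load usually means the bulk bodies are being built or buffered without a bound; streaming the documents into fixed-size newline-delimited bodies keeps memory flat. A sketch of such a chunking helper (the index and type names here are hypothetical, and the body follows the bulk API's NDJSON format):

```python
import json

def bulk_chunks(docs, index, doc_type, max_docs=200):
    """Yield newline-delimited bulk bodies of at most max_docs documents each."""
    lines = []
    count = 0
    for doc in docs:
        # One action line plus one source line per document.
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        lines.append(json.dumps(doc))
        count += 1
        if count == max_docs:
            yield "\n".join(lines) + "\n"
            lines, count = [], 0
    if lines:  # flush the final partial batch
        yield "\n".join(lines) + "\n"

# 450 docs in batches of 200 -> bodies of 200, 200 and 50 docs.
chunks = list(bulk_chunks(({"n": i} for i in range(450)), "test", "doc"))
print(len(chunks))
```

Each body can be POSTed to `_bulk` and discarded before the next one is built, so only one batch is ever held in memory.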


(Vahid) #13

Hi Jörg,
Thank you for your reply,

From the tests I've run on a single node indexing 2,400,000 docs (~50 GB),
I got the best performance with 100 shards across 4 indices.
By switching off real-time indexing, if you mean disabling refresh_interval,
I've already done that (I'm running the tests with refresh_interval set to -1);
however, it doesn't affect performance considerably on a single node.

Best,
Vahid

On Tue, Sep 25, 2012 at 10:58 PM, Jörg Prante joergprante@gmail.com wrote:

Hi Vahid,

Index creation by itself is very fast; what you are seeing is the overhead of
100 shards. Use a small number of shards (the default is 5, which is fine for
one node). You can also speed up indexing by switching off near-real-time
refresh.

Best regards

Jörg



(system) #14