Performance issue while indexing a lot of documents

Hi,
I'm new to Elasticsearch and I'm trying out the Java API with bulk indexing.
I wrote a simple Java program that tries to index 1,000,000 documents.
At the beginning I can index 1000 documents in less than 0.5 seconds, but
after a few tens of thousands of documents each batch takes more than 2
seconds, and the time keeps increasing.
See the output and code below.
I assume I'm doing something wrong. Please help me out. Thank you in advance.

Regards,
Moshe

Output:
processed 1000 records from 5000 until 6000 at 781
processed 1000 records from 6000 until 7000 at 451
processed 1000 records from 7000 until 8000 at 683
processed 1000 records from 8000 until 9000 at 548
processed 1000 records from 9000 until 10000 at 303
processed 1000 records from 10000 until 11000 at 468
processed 1000 records from 11000 until 12000 at 316
processed 1000 records from 12000 until 13000 at 446
processed 1000 records from 13000 until 14000 at 342
processed 1000 records from 14000 until 15000 at 243
processed 1000 records from 15000 until 16000 at 252
processed 1000 records from 16000 until 17000 at 362
processed 1000 records from 17000 until 18000 at 302
processed 1000 records from 18000 until 19000 at 485
processed 1000 records from 19000 until 20000 at 402
processed 1000 records from 20000 until 21000 at 334
processed 1000 records from 21000 until 22000 at 429
processed 1000 records from 22000 until 23000 at 522
processed 1000 records from 23000 until 24000 at 434
processed 1000 records from 24000 until 25000 at 543
processed 1000 records from 25000 until 26000 at 476
processed 1000 records from 26000 until 27000 at 784
processed 1000 records from 27000 until 28000 at 797
processed 1000 records from 28000 until 29000 at 808
processed 1000 records from 29000 until 30000 at 670
processed 1000 records from 30000 until 31000 at 693
processed 1000 records from 31000 until 32000 at 710
processed 1000 records from 32000 until 33000 at 792
processed 1000 records from 33000 until 34000 at 582
processed 1000 records from 34000 until 35000 at 745
processed 1000 records from 35000 until 36000 at 762
processed 1000 records from 36000 until 37000 at 864
processed 1000 records from 37000 until 38000 at 880
processed 1000 records from 38000 until 39000 at 822
processed 1000 records from 39000 until 40000 at 1293
processed 1000 records from 40000 until 41000 at 1248
processed 1000 records from 41000 until 42000 at 1355
processed 1000 records from 42000 until 43000 at 999
processed 1000 records from 43000 until 44000 at 815
processed 1000 records from 44000 until 45000 at 934
processed 1000 records from 45000 until 46000 at 1213
processed 1000 records from 46000 until 47000 at 1085
processed 1000 records from 47000 until 48000 at 1136
processed 1000 records from 48000 until 49000 at 1352
processed 1000 records from 49000 until 50000 at 1417
processed 1000 records from 50000 until 51000 at 1423
processed 1000 records from 51000 until 52000 at 1275
processed 1000 records from 52000 until 53000 at 1215
processed 1000 records from 53000 until 54000 at 1324
processed 1000 records from 54000 until 55000 at 1132
processed 1000 records from 55000 until 56000 at 1268
processed 1000 records from 56000 until 57000 at 1055
processed 1000 records from 57000 until 58000 at 1618
processed 1000 records from 58000 until 59000 at 1608
processed 1000 records from 59000 until 60000 at 1600
processed 1000 records from 60000 until 61000 at 1673
processed 1000 records from 61000 until 62000 at 1340
processed 1000 records from 62000 until 63000 at 1815
processed 1000 records from 63000 until 64000 at 1708
processed 1000 records from 64000 until 65000 at 1543
processed 1000 records from 65000 until 66000 at 1674
processed 1000 records from 66000 until 67000 at 2005
processed 1000 records from 67000 until 68000 at 1889
processed 1000 records from 68000 until 69000 at 1570
processed 1000 records from 69000 until 70000 at 1527
processed 1000 records from 70000 until 71000 at 1603
processed 1000 records from 71000 until 72000 at 1748
processed 1000 records from 72000 until 73000 at 1745
processed 1000 records from 73000 until 74000 at 1580
processed 1000 records from 74000 until 75000 at 1588
processed 1000 records from 75000 until 76000 at 2291
processed 1000 records from 76000 until 77000 at 2903
processed 1000 records from 77000 until 78000 at 1625
processed 1000 records from 78000 until 79000 at 2087

Source code:
Node node = NodeBuilder.nodeBuilder().node();
Client client = node.client();
BulkRequestBuilder bulkRequest = client.prepareBulk();
int numOfDocs = 1000000;
long startTime = System.currentTimeMillis();
System.out.println("Going to add " + numOfDocs);
// either use client#prepare, or use Requests# to directly build index/delete requests
long internalStartTime = System.currentTimeMillis();
for (int i = 0; i < numOfDocs; i++)
{
IndexRequestBuilder index = client.prepareIndex("twitter", "tweet", "id221" + i);
index.setSource(jsonBuilder()
.startObject()
.field("user", "kimchy" +i)
.field("postDate", new Date())
.field("message", "trying out Elasticsearch"+i)
.endObject());
bulkRequest.add(index);
if (i % 1000 == 0 )
{
BulkResponse bulkResponse = bulkRequest.execute().actionGet();
if (bulkResponse.hasFailures())
{
BulkItemResponse item[] = bulkResponse.getItems();
for (int j = 0; j< item.length; j++)
{
if (item[i].isFailed())
{
System.out.println ("Error " + item[i].getFailureMessage());
}
}
// bulkRequest = client.prepareBulk();

}
System.out.println("processed 1000 records from " + (i-1000) + " until " + i + " at " + (System.currentTimeMillis() - internalStartTime));
internalStartTime = System.currentTimeMillis();
}
}

BulkResponse bulkResponse = bulkRequest.execute().actionGet();
if (bulkResponse.hasFailures())
{
BulkItemResponse item[] = bulkResponse.getItems();
for (int i = 0; i< item.length; i++)
{
if (item[i].isFailed())
{
System.out.println ("Error " + item[i].getFailureMessage());
}
}

}
System.out.println("Finished entering " + numOfDocs + " in " + (System.currentTimeMillis() - startTime));


On Thu, Nov 6, 2014 at 11:09 AM, Moshe Recanati re.moshe@gmail.com wrote:

// bulkRequest = client.prepareBulk();

Please fix your code so that it clearly sends only 1000 documents per bulk request.
It looks like you are just increasing the size of the bulk request and executing it over and over.
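
To illustrate, here is a rough sketch of what I mean, based on your code and not tested: reset the builder after each execute so the next batch starts empty.

for (int i = 0; i < numOfDocs; i++)
{
    bulkRequest.add(client.prepareIndex("twitter", "tweet", "id221" + i)
        .setSource(jsonBuilder()
            .startObject()
            .field("user", "kimchy" + i)
            .field("postDate", new Date())
            .field("message", "trying out Elasticsearch" + i)
            .endObject()));

    if (i > 0 && i % 1000 == 0)
    {
        BulkResponse bulkResponse = bulkRequest.execute().actionGet();
        // check bulkResponse.hasFailures() here as in your original code
        bulkRequest = client.prepareBulk(); // start a fresh builder for the next batch
    }
}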


Hi Thomas,
Thank you for the hint :slight_smile:
I changed it, but now I'm getting the following error even though I'm not
using threads.

Thank you
Regards,
Moshe

Going to add 1000000
processed 1000 records from -1000 until 0 at 26834
Error VersionConflictEngineException[[twitter][1] [tweet][momo110]: version
conflict, current [1], provided [1]]
processed 1000 records from 0 until 1000 at 9285
processed 1000 records from 1000 until 2000 at 91
Error VersionConflictEngineException[[twitter][2] [tweet][momo111001]:
version conflict, current [1], provided [1]]
Error VersionConflictEngineException[[twitter][1] [tweet][momo111002]:
version conflict, current [1], provided [1]]
Error VersionConflictEngineException[[twitter][0] [tweet][momo111003]:
version conflict, current [1], provided [1]]
Error VersionConflictEngineException[[twitter][4] [tweet][momo111004]:
version conflict, current [1], provided [1]]


Hi Thomas,
I fixed the code per your suggestion and now start a new prepared bulk every
1000 documents (code below).
However, the time to add documents is still increasing.

Please let me know what's wrong. Thank you in advance.

Moshe

Output:
Going to add 1000000
processed 1000 records from -1000 until 0 at 704
processed 1000 records from 0 until 1000 at 3068
processed 1000 records from 1000 until 2000 at 1030
processed 1000 records from 2000 until 3000 at 1654
processed 1000 records from 3000 until 4000 at 1798
processed 1000 records from 4000 until 5000 at 1808
processed 1000 records from 5000 until 6000 at 580
processed 1000 records from 6000 until 7000 at 354
processed 1000 records from 7000 until 8000 at 731
processed 1000 records from 8000 until 9000 at 496
processed 1000 records from 9000 until 10000 at 822
processed 1000 records from 10000 until 11000 at 564
processed 1000 records from 11000 until 12000 at 588
processed 1000 records from 12000 until 13000 at 690
processed 1000 records from 13000 until 14000 at 774
processed 1000 records from 14000 until 15000 at 1528
processed 1000 records from 15000 until 16000 at 1028
processed 1000 records from 16000 until 17000 at 966
processed 1000 records from 17000 until 18000 at 1397
processed 1000 records from 18000 until 19000 at 962
processed 1000 records from 19000 until 20000 at 3573
processed 1000 records from 20000 until 21000 at 1332
processed 1000 records from 21000 until 22000 at 1282
processed 1000 records from 22000 until 23000 at 1746
processed 1000 records from 23000 until 24000 at 1411
processed 1000 records from 24000 until 25000 at 1742
processed 1000 records from 25000 until 26000 at 2540
processed 1000 records from 26000 until 27000 at 2217
processed 1000 records from 27000 until 28000 at 1203
processed 1000 records from 28000 until 29000 at 1714
processed 1000 records from 29000 until 30000 at 1595
processed 1000 records from 30000 until 31000 at 1809
processed 1000 records from 31000 until 32000 at 2305
processed 1000 records from 32000 until 33000 at 1604
processed 1000 records from 33000 until 34000 at 2208
processed 1000 records from 34000 until 35000 at 1989
processed 1000 records from 35000 until 36000 at 1939
processed 1000 records from 36000 until 37000 at 1826
processed 1000 records from 37000 until 38000 at 1716
processed 1000 records from 38000 until 39000 at 1957
processed 1000 records from 39000 until 40000 at 1665
processed 1000 records from 40000 until 41000 at 1743
processed 1000 records from 41000 until 42000 at 2166
processed 1000 records from 42000 until 43000 at 2450
processed 1000 records from 43000 until 44000 at 3342
processed 1000 records from 44000 until 45000 at 2632
processed 1000 records from 45000 until 46000 at 2795
processed 1000 records from 46000 until 47000 at 3129
processed 1000 records from 47000 until 48000 at 3290
processed 1000 records from 48000 until 49000 at 3973
processed 1000 records from 49000 until 50000 at 3297
processed 1000 records from 50000 until 51000 at 3500
processed 1000 records from 51000 until 52000 at 4328
processed 1000 records from 52000 until 53000 at 3913
processed 1000 records from 53000 until 54000 at 3636
processed 1000 records from 54000 until 55000 at 3971
processed 1000 records from 55000 until 56000 at 5851
processed 1000 records from 56000 until 57000 at 4150
processed 1000 records from 57000 until 58000 at 4557
processed 1000 records from 58000 until 59000 at 4534
processed 1000 records from 59000 until 60000 at 4918
processed 1000 records from 60000 until 61000 at 3839
processed 1000 records from 61000 until 62000 at 4297
processed 1000 records from 62000 until 63000 at 4516
processed 1000 records from 63000 until 64000 at 4782
processed 1000 records from 64000 until 65000 at 4581

Code:
Node node = NodeBuilder.nodeBuilder().node();
Client client = node.client();
try
{
CreateIndexRequestBuilder createIndexRequestBuilder = client.admin().indices().prepareCreate("twitter2");
createIndexRequestBuilder.execute().actionGet();
}
catch (Exception e)
{
e.printStackTrace();
}
BulkRequestBuilder bulkRequest = client.prepareBulk();
int numOfDocs = 1000000;
long startTime = System.currentTimeMillis();
System.out.println("Going to add " + numOfDocs);
// either use client#prepare, or use Requests# to directly build index/delete requests
long internalStartTime = System.currentTimeMillis();
for (int i = 0; i < numOfDocs; i++)
{
IndexRequestBuilder index =client.prepareIndex("twitter2", "tweet", "m"+i);
index.setSource(jsonBuilder()
.startObject()
.field("user", "kimchy" +i)
.field("postDate", new Date())
.field("message", "trying out Elasticsearch"+i)
.endObject());
bulkRequest.add(index);
if (i % 1000 == 0 )
{
BulkResponse bulkResponse = bulkRequest.execute().actionGet();
if (bulkResponse.hasFailures())
{
BulkItemResponse item[] = bulkResponse.getItems();
for (int j = 0; j< item.length; j++)
{
if (item[j].isFailed())
{
System.out.println ("Error " + item[j].getFailureMessage());
}
}
bulkRequest = client.prepareBulk();

}
System.out.println("processed 1000 records from " + (i-1000) + " until " + i + " at " + (System.currentTimeMillis() - internalStartTime));
internalStartTime = System.currentTimeMillis();
}
}

BulkResponse bulkResponse = bulkRequest.execute().actionGet();
if (bulkResponse.hasFailures())
{
BulkItemResponse item[] = bulkResponse.getItems();
for (int i = 0; i< item.length; i++)
{
if (item[i].isFailed())
{
System.out.println ("Error " + item[i].getFailureMessage());
}
}

}


Not answering your question, but you should look at the BulkProcessor class.

It would simplify your code a lot.
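
Something like this with the 1.x Java API, just a rough sketch and not tested; it reuses the client and jsonBuilder from your code, and the listener body and settings are only illustrative:

BulkProcessor bulkProcessor = BulkProcessor.builder(client, new BulkProcessor.Listener() {
    public void beforeBulk(long executionId, BulkRequest request) { }
    public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
        if (response.hasFailures()) {
            System.out.println("Bulk had failures: " + response.buildFailureMessage());
        }
    }
    public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
        failure.printStackTrace();
    }
})
.setBulkActions(1000)       // flush automatically every 1000 requests
.setConcurrentRequests(1)   // execute one bulk in the background while the next is built
.build();

for (int i = 0; i < numOfDocs; i++) {
    bulkProcessor.add(client.prepareIndex("twitter2", "tweet", "m" + i)
        .setSource(jsonBuilder()
            .startObject()
            .field("user", "kimchy" + i)
            .field("postDate", new Date())
            .field("message", "trying out Elasticsearch" + i)
            .endObject())
        .request());
    // no manual prepareBulk() or execute() needed; the processor batches for you
}
bulkProcessor.close(); // flushes any remaining documents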

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


It may be worth looking at 2 things:

  1. Use the latest Elasticsearch version (1.4). A lot of work went into
    optimizing these kinds of scenarios on the server side.

  2. Disable refresh / flush during the load - I assume this is an ETL process,
    and as such this could help greatly (a sketch follows below).
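
For the refresh part, something along these lines with the Java admin API should work against your index (a sketch for 1.x, not tested):

// disable refresh while the bulk load runs (index name taken from the code above)
client.admin().indices().prepareUpdateSettings("twitter2")
    .setSettings(ImmutableSettings.settingsBuilder()
        .put("index.refresh_interval", "-1")
        .build())
    .execute().actionGet();

// ... run the bulk load ...

// restore a normal refresh interval once the load is done
client.admin().indices().prepareUpdateSettings("twitter2")
    .setSettings(ImmutableSettings.settingsBuilder()
        .put("index.refresh_interval", "1s")
        .build())
    .execute().actionGet();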

--

Itamar Syn-Hershko
http://code972.com | @synhershko https://twitter.com/synhershko
Freelance Developer & Consultant
Author of RavenDB in Action http://manning.com/synhershko/
