Percolate with bulk insert not working

Hi guys,

I am trying to bulk insert and percolate at the same time; however, I do not see any matches in the response. Here is the code I am dealing with:

TermQueryBuilder matchAll = QueryBuilders.termQuery("content", "hello");

client.prepareIndex("_percolator", "my-index", "percolator_1")
        .setSource(matchAll.buildAsBytes())
        .setType("my_doc_type")
        .setRefresh(true)
        .execute().actionGet();

With the code above I register the percolator query.

Bulk insert:

BulkRequestBuilder bulkReq = client.prepareBulk();
while (in.hasNext())
{
    jsonBuilder = buildContent(in.next()); // builds the my_doc_type source

    bulkReq.add(client.prepareIndex("my-index", "my_doc_type")
            .setSource(jsonBuilder)
            .setPercolate("*"));
}

BulkResponse response = bulkReq.execute().actionGet();

for (BulkItemResponse item : response.responses)
{
    if (!item.failed())
    {
        matches = ((IndexResponse) item.response()).matches();

        System.out.println(matches); // print the matches
    }
}

The print statement above never prints "percolator_1", even though I am seeding my_doc_type documents with the field content = "hello".
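
For reference, buildContent just turns each input record into a small JSON document. A simplified sketch of what it produces is below (the MyRecord type and getter are stand-ins for whatever in.next() returns, not the real implementation; it uses org.elasticsearch.common.xcontent.XContentFactory and XContentBuilder):

// Illustrative sketch of buildContent: serializes one input record into
// the my_doc_type source that gets bulk-indexed above. MyRecord/getContent()
// are hypothetical names used only for this sketch.
XContentBuilder buildContent(MyRecord rec) throws IOException {
    return XContentFactory.jsonBuilder()
            .startObject()
            .field("content", rec.getContent()) // e.g. "hello", which the percolator query should match
            .endObject();
}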

I would appreciate your help in pointing out where I am going wrong here.

thanks


Any ideas, guys?


More insight on this:

When I ran the percolator registration code:

=======================
TermQueryBuilder matchAll = QueryBuilders.termQuery("content", "hello");

client.prepareIndex("_percolator", "my-index", "percolator_1")
        .setSource(matchAll.buildAsBytes())
        .setType("my_doc_type")
        .setRefresh(true)
        .execute().actionGet();

I see the following exception:

org.elasticsearch.ElasticSearchIllegalArgumentException: query must be provided for percolate request
    at org.elasticsearch.common.Preconditions.checkArgument(Preconditions.java:95)
    at org.elasticsearch.index.percolator.PercolatorExecutor.addQuery(PercolatorExecutor.java:229)
    at org.elasticsearch.index.percolator.PercolatorExecutor.addQuery(PercolatorExecutor.java:191)
    at org.elasticsearch.index.percolator.PercolatorService$RealTimePercolatorOperationListener.postIndexUnderLock(PercolatorService.java:294)
    at org.elasticsearch.index.indexing.ShardIndexingService.postIndexUnderLock(ShardIndexingService.java:158)
    at org.elasticsearch.index.engine.robin.RobinEngine.innerIndex(RobinEngine.java:589)
    at org.elasticsearch.index.engine.robin.RobinEngine.index(RobinEngine.java:488)
    at org.elasticsearch.index.shard.service.InternalIndexShard.index(InternalIndexShard.java:330)
    at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:207)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:532)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:430)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:680)

Why am I seeing this exception?


My guess is that your matchAll will output something like:
"term" : {
"field1" : "value1"
}
And not
{
"query" : {
"term" : {
"field1" : "value1"
}
}
}
I'm wondering if the example in http://www.elasticsearch.org/guide/reference/java-api/percolate/ is OK.
Tests show that the common usage is:
client().prepareIndex("_percolator", "test", "kuku")
        .setSource(jsonBuilder().startObject()
                .field("color", "blue")
                .field("query", termQuery("field1", "value1"))
                .endObject())
        .setRefresh(true)
        .execute().actionGet();
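
Applied to the snippet from the original post, the registration would look something like the sketch below (untested; it simply wraps the same term query in a top-level "query" field, with the type being the index the query applies to, as in the test above):

// Sketch of the corrected percolator registration (assumes the usual
// static imports of jsonBuilder() and termQuery()); the point is only
// that the percolator source must carry a top-level "query" field.
client.prepareIndex("_percolator", "my-index", "percolator_1")
        .setSource(jsonBuilder().startObject()
                .field("query", termQuery("content", "hello"))
                .endObject())
        .setRefresh(true)
        .execute().actionGet();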

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs


You are right, David. When I printed the bytes as a string, it showed the first form. How do I get it into the second form?

I followed exactly what is specified in the documentation at http://www.elasticsearch.org/guide/reference/java-api/percolate/.

Is it that I misunderstood the documentation, or is something missing from the docs?

thanks


I think it's an issue in the docs. I opened an issue here: https://github.com/elasticsearch/elasticsearch.github.com/issues/449

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs


Thanks, that works.

Also, a minor suggestion: since I was not looking at the server logs, I had no idea why my request was failing. BulkRequest has a nice API that tells me whether each item failed or not; similarly, it would be nice if these APIs told me whether the registration request itself was valid.
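
For the bulk side, what I mean is roughly the check below (a sketch reusing the response objects from my earlier snippet; it assumes hasFailures() and failureMessage() are available in this client version):

// Sketch: surface per-item failures from the bulk response.
// hasFailures()/failureMessage() are assumed present on this client version.
if (response.hasFailures()) {
    for (BulkItemResponse item : response.responses) {
        if (item.failed()) {
            System.err.println("bulk item failed: " + item.failureMessage());
        }
    }
}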

thanks


I don't get this last part.
Did you mean that, on the client side, you did not get any error or exception back?

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs


Sorry for the confusion. Yes, I do not get any exception back on the client side when I specify the invalid query.
