We have been using Elasticsearch for some time now and are very impressed with how well it works.
We are using ES in combination with RabbitMQ, indexing around 1 million
documents a day through the rabbitmq-river into ES.
It works great! But from time to time (about once a week) we get this "warning"
in the logfile:
[2012-04-09 12:19:15,724][WARN ][river.rabbitmq ] [Malekith the Accursed] [rabbitmq][my_river8] failed to parse request for delivery tag [2965504], ack'ing...
    at org.elasticsearch.river.rabbitmq.RabbitmqRiver$Consumer.run(RabbitmqRiver.java:240)
    at org.elasticsearch.common.xcontent.XContentFactory.xContent(XContentFactory.java:147)
    at org.elasticsearch.action.bulk.BulkRequest.add(BulkRequest.java:92)
    at org.elasticsearch.action.bulk.BulkRequestBuilder.add(BulkRequestBuilder.java:81)
    at org.elasticsearch.river.rabbitmq.RabbitmqRiver$Consumer.run(RabbitmqRiver.java:240)
    at java.lang.Thread.run(Thread.java:636)
After that, the river stops bulk inserting documents into ES. We always have
to restart the river by deleting it and recreating it, sometimes even under
a different name; otherwise it will not connect to the RabbitMQ server again.
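For context, the parse failure above comes from the body of the queued message, which the river feeds directly into a bulk request. A minimal sketch of a well-formed body, assuming the standard Elasticsearch bulk NDJSON format (the index, type, and field names here are invented for illustration):

```java
// Sketch of the bulk body the rabbitmq-river expects: an action line followed
// by a source line, each a complete JSON object terminated by a newline.
class BulkMessage {
    static String payload() {
        return "{\"index\":{\"_index\":\"docs\",\"_type\":\"doc\",\"_id\":\"1\"}}\n"
             + "{\"field\":\"value\"}\n";
    }

    public static void main(String[] args) {
        // A truncated or non-JSON line in this body is the kind of input that
        // makes XContentFactory.xContent() throw while building the BulkRequest.
        for (String line : payload().split("\n")) {
            if (!line.startsWith("{") || !line.endsWith("}")) {
                throw new IllegalStateException("malformed bulk line: " + line);
            }
        }
        System.out.println("bulk payload lines OK");
    }
}
```

Checking each line of the payload before publishing (as the sketch's loop does) is a cheap way to catch the malformed messages that trigger this warning.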
On Wed, Apr 11, 2012 at 12:39 PM, Shay Banon kimchy@gmail.com wrote:
Strange... I can see that it acks the failed message (which happens when
the format of the bulk indexing message fails to parse). What I can also see
is that in this case a delivery tag will be ack'ed twice: once for the
failure, and once again later on. I can fix that, though I am not sure why
this would cause RabbitMQ to stop sending messages...
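One plausible explanation for the queue going silent: in AMQP, acking the same delivery tag a second time is a channel-level error ("unknown delivery tag"), and RabbitMQ responds by closing the channel, after which no further deliveries arrive. A hedged sketch of guarding against the duplicate ack (this bookkeeping class is invented for illustration; the real consumer would call `Channel.basicAck` at most once per delivery tag):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of suppressing a duplicate ack for the same delivery tag.
class AckGuard {
    private final Set<Long> acked = new HashSet<>();

    // Returns true only the first time a tag is acked; a repeat is suppressed,
    // because re-acking an already-acked tag is a channel-level AMQP error
    // that closes the channel -- which would explain the river going silent.
    boolean ackOnce(long deliveryTag) {
        return acked.add(deliveryTag);
    }

    public static void main(String[] args) {
        AckGuard guard = new AckGuard();
        System.out.println(guard.ackOnce(2965504L)); // true: first ack performed
        System.out.println(guard.ackOnce(2965504L)); // false: duplicate suppressed
    }
}
```

If the double ack is indeed the cause, this would also explain why only deleting and recreating the river (which opens a fresh connection and channel) gets deliveries flowing again.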