Upgrading from 0.20.6 to 0.90.0

Hi guys,

I'm trying to upgrade from 0.20.6 to 0.90.0, but I'm getting the following
error on my Java client (a webapp running under Tomcat):

2013-05-10 18:11:34,280 INFO [main] (Log4jESLogger.java:109) org.elasticsearch.client.transport - [Living Tribunal] failed to get node info for [#transport#-1][inet[localhost/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.RemoteTransportException: [localhost][inet[/127.0.0.1:9300]][cluster/nodes/info]
Caused by: java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 60
    at org.elasticsearch.common.netty.buffer.AbstractChannelBuffer.readByte(AbstractChannelBuffer.java:236)

I have verified that my client is using the 0.90.0 libraries that came with
the 0.90.0 distribution. I also verified via the REST API that the 0.90.0
node is running fine. Is there something that I need to do differently on
the client side when using 0.90.0? My guess is that my webapp client is
somehow still picking up a 0.20.6 artifact, but I don't see where (I've
cleaned and rebuilt the client environment).

In the past I have successfully upgraded from 0.20.2 to 0.20.6. Any
suggestions?

thanks
Ed

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Ed,

Have you shut down and restarted Tomcat?

I typically build my ES client services using Netty, and I've never had any
issues when upgrading (other than adapting the source code to the
relatively few API changes). But for other Tomcat- and SQL-based web
services, it's usually safer and easier to shut down and restart Tomcat
when deploying upgrades to those web services.

Otherwise, my journey through 0.19.4, 0.19.10, 0.20.4, and now 0.90.0 has
been flawless. Aside from the source-code changes and deploying ES bundled
with my Java code in the same package, I was careful to make sure the
3-node 0.20.4 cluster was completely shut down before the nodes were
upgraded to 0.90.0 and then restarted. Not even a burp.

Just a thought. Hope you get your issue resolved soon!

Brian

On Friday, May 10, 2013 6:22:13 PM UTC-4, echin1999 wrote:


Do you see this issue when you use the NodesInfo API, or when the client starts?

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

On 11 May 2013, at 00:22, echin1999 echin1999@gmail.com wrote:



Brian, I did restart the Tomcat server. I also removed the tmp directory in
the Tomcat environment in case there was some lingering reference to the
prior version... As I mentioned, I have done a successful upgrade before,
so I'm surprised I'm running into any issues. I know it's isolated to
something I'm doing; it just isn't obvious to me right now. Thanks.

David, it seems to happen either when I construct the TransportClient, or
when I call client.addTransportAddress (which I do immediately after
calling the constructor). I didn't bother to pinpoint exactly where, and
I've reverted back to 0.20.6. I can find out exactly, if that makes a
difference. Thanks.

Here is the complete stack trace if it helps:

2013-05-10 18:11:33,810 INFO [main] (Log4jESLogger.java:104) org.elasticsearch.plugins - [Living Tribunal] loaded [], sites []
2013-05-10 18:11:34,280 INFO [main] (Log4jESLogger.java:109) org.elasticsearch.client.transport - [Living Tribunal] failed to get node info for [#transport#-1][inet[localhost/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.RemoteTransportException: [localhost][inet[/127.0.0.1:9300]][cluster/nodes/info]
Caused by: java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 60
    at org.elasticsearch.common.netty.buffer.AbstractChannelBuffer.readByte(AbstractChannelBuffer.java:236)
    at org.elasticsearch.transport.netty.ChannelBufferStreamInput.readByte(ChannelBufferStreamInput.java:132)
    at org.elasticsearch.common.io.stream.AdapterStreamInput.readByte(AdapterStreamInput.java:35)
    at org.elasticsearch.common.io.stream.StreamInput.readBoolean(StreamInput.java:252)
    at org.elasticsearch.action.admin.cluster.node.info.NodesInfoRequest.readFrom(NodesInfoRequest.java:234)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:207)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:111)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
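(For anyone chasing the same symptom: one quick way to test the "lingering 0.20.6 artifact" theory is to list every elasticsearch-*.jar each deployed component actually carries. The sketch below uses an invented directory layout purely for the demo; point the `find` at your real Tomcat webapp and any other deployed components.)

```shell
# Hypothetical layout standing in for a real deployment.
workdir=$(mktemp -d)
mkdir -p "$workdir/webapp/WEB-INF/lib"
touch "$workdir/webapp/WEB-INF/lib/elasticsearch-0.20.6.jar" \
      "$workdir/webapp/WEB-INF/lib/elasticsearch-0.90.0.jar"

# Every elasticsearch jar on the component's classpath, one per line.
find "$workdir" -name 'elasticsearch-*.jar' | sort

# Count the distinct versions present; anything above 1 means an old
# artifact is still on the classpath alongside the new one.
versions=$(find "$workdir" -name 'elasticsearch-*.jar' -exec basename {} \; |
           sed 's/^elasticsearch-//; s/\.jar$//' | sort -u | wc -l)
echo "distinct elasticsearch versions: $versions"

rm -rf "$workdir"
```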

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Are you sure that you really stopped all the nodes?
Do you have the node logs?

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

On 11 May 2013, at 06:25, echin1999 echin1999@gmail.com wrote:



I found the problem, and as expected, it was something stupid. I was so
hung up on making sure that the webapp running under Tomcat was using the
correct 0.90.0 libs that I didn't bother to check a standalone component
that automatically gets built and run in conjunction with the webapp. That
standalone program was pulling in the old libs in addition to the new ones.
When examining the logs, I was looking at both the webapp and standalone
logs, not noticing that the issue was only with one and not the other.
Ugh.

Regardless, I appreciate the responses!

thanks!
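(The failure mode Ed describes, one component shipping both the old and the new jar, is easy to guard against in a build script: fail whenever any lib directory carries more than one elasticsearch jar. The component paths below are made up for the demo; substitute the lib directories your build actually produces.)

```shell
# Made-up layout: a webapp that was checked, plus a standalone
# component that slipped through with both jar versions.
root=$(mktemp -d)
mkdir -p "$root/webapp/WEB-INF/lib" "$root/standalone/lib"
touch "$root/webapp/WEB-INF/lib/elasticsearch-0.90.0.jar"
touch "$root/standalone/lib/elasticsearch-0.20.6.jar" \
      "$root/standalone/lib/elasticsearch-0.90.0.jar"

# Flag any component whose lib dir carries more than one elasticsearch jar.
for lib in "$root/webapp/WEB-INF/lib" "$root/standalone/lib"; do
  n=$(find "$lib" -name 'elasticsearch-*.jar' | wc -l)
  if [ "$n" -gt 1 ]; then
    echo "MIXED: $lib ($n elasticsearch jars)"
  fi
done

rm -rf "$root"
```

In a CI step you would exit non-zero on any MIXED line so the mixed-version artifact never gets deployed.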

On Saturday, May 11, 2013 3:38:18 AM UTC-4, David Pilato wrote:

