Unicast instead of Multicast?

I am attempting to configure a unicast (probably TCP) cluster instead of
automatic discovery with multicast. I am not familiar with JGroups at all,
so this is all new to me. I've probably got it completely wrong, but I've
pasted my configurations below, along with the error I am receiving with
this configuration. What would the proper configuration look like to set up
a unicast cluster with a preset list of nodes?

Thanks for your help and for this great project!

Node 1:

network:
    bindHost: 10.16.253.138

discovery:
    jgroups:
        config: tcp
        tcpping:
            initial_hosts: 10.20.124.107[9200]

Node 2:

network:
    bindHost: 10.20.124.107

discovery:
    jgroups:
        config: tcp
        tcpping:
            initial_hosts: 10.16.253.138[9200]

[12:27:58,295][INFO ][server ] [Wind Dancer] {ElasticSearch/0.4.0}: Initializing ...
[12:27:59,336][WARN ][jgroups.stack.Configurator] TCP property skip_suspected_members was deprecated and is ignored
[12:27:59,699][INFO ][server ] [Wind Dancer] {ElasticSearch/0.4.0}: Initialized
[12:27:59,699][INFO ][server ] [Wind Dancer] {ElasticSearch/0.4.0}: Starting ...
[12:27:59,751][INFO ][transport ] [Wind Dancer] boundAddress [inet[/10.20.124.107:9300]], publishAddress [inet[/10.20.124.107:9300]]
[12:28:02,825][INFO ][cluster ] [Wind Dancer] New Master [Wind Dancer][node2-41060][data][inet[/10.20.124.107:9300]]
[12:28:02,825][INFO ][discovery ] [Wind Dancer] elasticsearch/node2-41060
[12:28:02,877][INFO ][http ] [Wind Dancer] boundAddress [inet[/10.20.124.107:9200]], publishAddress [inet[/10.20.124.107:9200]]
[12:28:58,575][WARN ][http.netty ] [Wind Dancer] Caught exception while handling client http trafic
java.lang.IllegalArgumentException: empty text
    at org.jboss.netty.handler.codec.http.HttpVersion.<init>(HttpVersion.java:90)
    at org.jboss.netty.handler.codec.http.HttpVersion.valueOf(HttpVersion.java:68)
    at org.jboss.netty.handler.codec.http.HttpRequestDecoder.createMessage(HttpRequestDecoder.java:81)
    at org.jboss.netty.handler.codec.http.HttpMessageDecoder.decode(HttpMessageDecoder.java:169)
    at org.jboss.netty.handler.codec.http.HttpMessageDecoder.decode(HttpMessageDecoder.java:78)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:454)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:427)
    at org.jboss.netty.handler.timeout.ReadTimeoutHandler.messageReceived(ReadTimeoutHandler.java:156)
    at org.elasticsearch.http.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:49)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:345)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:332)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:323)
    at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:275)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:196)
    at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:636)

Hi,

Your configuration is almost right. The problem is that you point
initial_hosts at the elasticsearch transport port instead of the JGroups
port. JGroups, by default with tcp, starts on port 9800. If you start
another instance on the same machine, it will bind to the next port (9801).
So, in your case, your configuration should look something like this:

Node 1:

network:
    bindHost: 10.16.253.138

discovery:
    jgroups:
        config: tcp
        tcpping:
            initial_hosts: 10.20.124.107[9800],10.16.253.138[9800]

Node 2:

network:
    bindHost: 10.20.124.107

discovery:
    jgroups:
        config: tcp
        tcpping:
            initial_hosts: 10.20.124.107[9800],10.16.253.138[9800]

Note that, currently, the JGroups bind_port can't be changed through the
configuration file. I am pushing a fix for this as we speak (there is a way
to override it, but it's simpler to just use master for now).

I will also update the docs to reflect this information.

-shay.banon

I attempted port 9800 without any luck. I checked netstat and noticed the
java process was not listening on 9800. Port 7800 was open, though, so I
gave that a shot and it worked.

Thanks for the help.

- Philippe

Sorry, I should have said 7800, not 9800. Great that it works.

-shay.banon
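
For anyone finding this thread later, here is a minimal sketch of the
settings that ended up working here, assuming the default JGroups TCP bind
port of 7800 on both nodes; only bindHost differs per node, and the
addresses are the ones from this thread:

network:
    bindHost: 10.16.253.138    # node 1; use 10.20.124.107 on node 2

discovery:
    jgroups:
        config: tcp            # use the bundled JGroups TCP stack instead of multicast
        tcpping:
            # list every node's JGroups address; 7800 is the default TCP bind port
            initial_hosts: 10.16.253.138[7800],10.20.124.107[7800]

If the nodes still don't find each other, running netstat on each machine
(as Philippe did above) to see which port the java process is actually
listening on is a quick way to confirm the JGroups bind port.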
