How do I use the transport client (or something else) to search ES indices on another system?

I searched and found that the transport client should do the job. However, I don't know what settings are required, both on the remote system and on my own.

Further, it seems that I can't just access any system and will require some permission, but I couldn't find anything relevant about this.

I am using Linux on both systems, with Elasticsearch 2.3.2, Eclipse Mars, and Java 7, built with Maven.

Please ask if you need more clarification. Thanks in advance for your help.

Hi @Vicky2000,

the transport client docs should get you started.
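
Just to sketch what that looks like on the 2.x client API (a minimal example, not a definitive recipe: the IP address 192.168.1.10, the cluster name, and the index name my-index below are placeholders you'd replace with your own values, and it assumes the org.elasticsearch:elasticsearch:2.3.2 Maven dependency is on the classpath):

```java
import java.net.InetAddress;

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class RemoteSearch {
    public static void main(String[] args) throws Exception {
        // cluster.name must match the remote cluster's name,
        // otherwise the client will refuse to connect.
        Settings settings = Settings.settingsBuilder()
                .put("cluster.name", "elasticsearch")
                .build();

        // Note: 9300 is the transport port, not 9200 (which is HTTP only).
        TransportClient client = TransportClient.builder()
                .settings(settings)
                .build()
                .addTransportAddress(new InetSocketTransportAddress(
                        InetAddress.getByName("192.168.1.10"), 9300));
        try {
            SearchResponse response = client.prepareSearch("my-index").get();
            System.out.println("hits: " + response.getHits().getTotalHits());
        } finally {
            client.close();
        }
    }
}
```

This only works once the remote node is actually reachable over the network, which is what the next step checks.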

The remote Elasticsearch cluster needs to bind to a network interface instead of localhost. You can verify that this is the case by issuing curl http://remote-cluster-ip:9200/, which should return basic cluster info. Otherwise the remote Elasticsearch cluster is not reachable.
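
Concretely, that means something like this in elasticsearch.yml on the remote machine (the address is just an example; use the interface address you actually want to expose):

```yaml
# elasticsearch.yml on the remote machine
# Bind to a concrete interface address instead of the default loopback:
network.host: 192.168.1.10
```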

Daniel

I tried the curl command you mentioned. It didn't work, which means the remote ES cluster is unreachable. What changes should I make on the other PC so that I can access it from my system?
I feel that there must be some setting for this, as otherwise anyone who knows the IP address of a system could access its cluster.

Hi @Vicky2000,

did you check that the cluster binds to a network interface as I suggested in my answer?

Daniel

Hi Daniel!

I had been away and just retried your suggestion.

Elasticsearch doesn't run when I bind the remote cluster to a network interface rather than localhost.

This is my ES log when I tried that:

[2016-06-20 14:57:28,021][WARN ][bootstrap ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2016-06-20 14:57:28,021][WARN ][bootstrap ] This can result in part of the JVM being swapped out.
[2016-06-20 14:57:28,022][WARN ][bootstrap ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-06-20 14:57:28,022][WARN ][bootstrap ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'varun' mlockall
varun soft memlock unlimited
varun hard memlock unlimited
[2016-06-20 14:57:28,022][WARN ][bootstrap ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2016-06-20 14:57:28,359][INFO ][node ] [varun] version[2.3.2], pid[11569], build[b9e4a6a/2016-04-21T16:03:47Z]
[2016-06-20 14:57:28,359][INFO ][node ] [varun] initializing ...
[2016-06-20 14:57:28,943][INFO ][plugins ] [varun] modules [lang-groovy, reindex, lang-expression], plugins [hq, head], sites [head, hq]
[2016-06-20 14:57:28,967][INFO ][env ] [varun] using [1] data paths, mounts [[/ (/dev/sda6)]], net usable_space [65.8gb], net total_space [88.5gb], spins? [possibly], types [ext4]
[2016-06-20 14:57:28,967][INFO ][env ] [varun] heap size [990.7mb], compressed ordinary object pointers [true]
Exception in thread "main" java.lang.IllegalArgumentException: Failed to resolve address for [node-1]
Likely root cause: java.net.UnknownHostException: node-1: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:922)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1316)
at java.net.InetAddress.getAllByName0(InetAddress.java:1269)
at java.net.InetAddress.getAllByName(InetAddress.java:1185)
at java.net.InetAddress.getAllByName(InetAddress.java:1119)
at org.elasticsearch.transport.netty.NettyTransport.parse(NettyTransport.java:733)
at org.elasticsearch.transport.netty.NettyTransport.addressesFromString(NettyTransport.java:685)
at org.elasticsearch.transport.TransportService.addressesFromString(TransportService.java:424)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.<init>(UnicastZenPing.java:160)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at <<<guice>>>
at org.elasticsearch.node.Node.<init>(Node.java:213)
at org.elasticsearch.node.Node.<init>(Node.java:140)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:143)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:178)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)

Not all of these messages indicate problems; here is my ES log when I bind it to localhost, and it runs well:

[2016-06-20 14:59:01,901][WARN ][bootstrap ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2016-06-20 14:59:01,902][WARN ][bootstrap ] This can result in part of the JVM being swapped out.
[2016-06-20 14:59:01,902][WARN ][bootstrap ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-06-20 14:59:01,902][WARN ][bootstrap ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'varun' mlockall
varun soft memlock unlimited
varun hard memlock unlimited
[2016-06-20 14:59:01,902][WARN ][bootstrap ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2016-06-20 14:59:02,205][INFO ][node ] [varun] version[2.3.2], pid[11610], build[b9e4a6a/2016-04-21T16:03:47Z]
[2016-06-20 14:59:02,205][INFO ][node ] [varun] initializing ...
[2016-06-20 14:59:02,796][INFO ][plugins ] [varun] modules [lang-groovy, reindex, lang-expression], plugins [hq, head], sites [head, hq]

... ES log continued (bound to localhost):

[2016-06-20 14:59:02,820][INFO ][env ] [varun] using [1] data paths, mounts [[/ (/dev/sda6)]], net usable_space [65.8gb], net total_space [88.5gb], spins? [possibly], types [ext4]
[2016-06-20 14:59:02,820][INFO ][env ] [varun] heap size [990.7mb], compressed ordinary object pointers [true]
[2016-06-20 14:59:04,570][INFO ][node ] [varun] initialized
[2016-06-20 14:59:04,570][INFO ][node ] [varun] starting ...
[2016-06-20 14:59:04,629][INFO ][transport ] [varun] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2016-06-20 14:59:04,634][INFO ][discovery ] [varun] elasticsearch/pYQs1HlhTKiZ6JDMGUm6Bw
[2016-06-20 14:59:07,692][INFO ][cluster.service ] [varun] new_master {varun}{pYQs1HlhTKiZ6JDMGUm6Bw}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-06-20 14:59:07,708][INFO ][http ] [varun] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2016-06-20 14:59:07,708][INFO ][node ] [varun] started
[2016-06-20 14:59:07,894][INFO ][gateway ] [varun] recovered [1] indices into cluster_state
[2016-06-20 14:59:08,478][INFO ][cluster.routing.allocation] [varun] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
^C[2016-06-20 15:11:45,416][INFO ][node ] [varun] stopping ...
^C[2016-06-20 15:11:45,597][INFO ][node ] [varun] stopped
[2016-06-20 15:11:45,598][INFO ][node ] [varun] closing ...
[2016-06-20 15:11:45,661][INFO ][node ] [varun] closed
varun@varun-SVE14113ENW:~/elasticsearch-2.3.2/bin$ ./elasticsearch
[2016-06-20 15:12:49,245][WARN ][bootstrap ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2016-06-20 15:12:49,245][WARN ][bootstrap ] This can result in part of the JVM being swapped out.
[2016-06-20 15:12:49,246][WARN ][bootstrap ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-06-20 15:12:49,246][WARN ][bootstrap ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'varun' mlockall
varun soft memlock unlimited
varun hard memlock unlimited
[2016-06-20 15:12:49,246][WARN ][bootstrap ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2016-06-20 15:12:49,796][INFO ][node ] [varun] version[2.3.2], pid[12042], build[b9e4a6a/2016-04-21T16:03:47Z]
[2016-06-20 15:12:49,796][INFO ][node ] [varun] initializing ...
[2016-06-20 15:12:51,025][INFO ][plugins ] [varun] modules [lang-groovy, reindex, lang-expression], plugins [hq, head], sites [head, hq]
[2016-06-20 15:12:51,065][INFO ][env ] [varun] using [1] data paths, mounts [[/ (/dev/sda6)]], net usable_space [65.8gb], net total_space [88.5gb], spins? [possibly], types [ext4]
[2016-06-20 15:12:51,085][INFO ][env ] [varun] heap size [990.7mb], compressed ordinary object pointers [true]
[2016-06-20 15:12:55,432][INFO ][node ] [varun] initialized
[2016-06-20 15:12:55,432][INFO ][node ] [varun] starting ...
[2016-06-20 15:12:55,573][INFO ][transport ] [varun] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2016-06-20 15:12:55,578][INFO ][discovery ] [varun] elasticsearch/BZnOKoWXR8eyYX1GbO2LMg
[2016-06-20 15:12:58,643][INFO ][cluster.service ] [varun] new_master {varun}{BZnOKoWXR8eyYX1GbO2LMg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-06-20 15:12:58,672][INFO ][http ] [varun] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2016-06-20 15:12:58,672][INFO ][node ] [varun] started
[2016-06-20 15:12:59,011][INFO ][gateway ] [varun] recovered [1] indices into cluster_state
[2016-06-20 15:12:59,715][INFO ][cluster.routing.allocation] [varun] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).

Hi @Vicky2000,

did you see this?

I guess you have the following config in your elasticsearch.yml:

network :
    host : node-1

The host name has to be resolvable on the machine on which this Elasticsearch node runs. So you either:

  • Specify the concrete IP or just "all network interfaces" with 0.0.0.0
  • Add the host name to /etc/hosts (on the machine which runs this node) (note: I think this is impractical, but I add it for completeness' sake)
  • Configure an internal DNS server for resolving node names.

I'd recommend the first option to get started.
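
In elasticsearch.yml, the first option looks like this (note that 0.0.0.0 exposes the node on all interfaces, so only do this on a network you trust; ES 2.x has no access control built in):

```yaml
# elasticsearch.yml
# Listen on all network interfaces:
network.host: 0.0.0.0
```

If you later want the two machines to discover each other as one cluster, you would typically also point unicast discovery at the peer, e.g. discovery.zen.ping.unicast.hosts: ["192.168.1.20"] (address hypothetical).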

Daniel


Setting the network host to 0.0.0.0 worked. I made a few more changes to the configuration file and now I can use either of the two PCs as the remote host. Thanks a lot @danielmitterdorfer .

:slight_smile:

Earlier I had set it to a random ID, as was given in the link.