Connecting to Elasticsearch running on a remote host

Hi,

I want to connect to an Elasticsearch instance running on a remote host. My code is as follows:

        String host = "10.169.149.134";
        int port = 9300;
        Settings settings = Settings.builder()
                .put("cluster.name", "docker-cluster").build();
        // First attempt: resolve the address from the host name.
//        TransportClient client = new PreBuiltTransportClient(settings)
//                .addTransportAddress(new TransportAddress(InetAddress.getByName(host), port));
        // Second attempt: build the address from the raw IP bytes.
        byte[] ip = new byte[]{(byte) 10, (byte) 169, (byte) 149, (byte) 134};
        TransportClient client = new PreBuiltTransportClient(settings)
                .addTransportAddress(new TransportAddress(InetAddress.getByAddress(host, ip), port));

The IP address of the remote machine is 10.169.149.134. I have tried both approaches, one with the String host and one with the byte array, but I always get the following error:

Exception in thread "main" NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{gYmqDwCtS0qTBNpeYvIv_w}{10.169.149.134}{10.169.149.134:9300}]]
	at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:347)
	at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:245)
	at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:60)
	at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:378)
	at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:405)
	at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:394)
	at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:46)
	at UMPHeartbeatGenerator.generateEvent(UMPHeartbeatGenerator.java:181)
	at UMPHeartbeatGenerator.main(UMPHeartbeatGenerator.java:59)

Everything works fine when I use localhost to connect to a locally running Elasticsearch. What could be the problem?

I am also including the curl output I get on the remote machine when I run
curl http://127.0.0.1:9200/

{
  "name" : "9SJkeO7",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "NIira-fsSZyof_Tsb7ckUw",
  "version" : {
    "number" : "6.3.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "424e937",
    "build_date" : "2018-06-11T23:38:03.357887Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

So the name of the cluster is definitely "docker-cluster". Even if I don't include it in the settings, the same error appears. It's a 2-node cluster in production mode.

Thanks
Pritom

What is the output of:

curl http://10.169.149.134:9200/

{
  "name" : "9SJkeO7",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "NIira-fsSZyof_Tsb7ckUw",
  "version" : {
    "number" : "6.3.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "424e937",
    "build_date" : "2018-06-11T23:38:03.357887Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}


Maybe your port 9300 is not exposed.

Anyway, I'd recommend using the High Level REST Client instead. It uses port 9200 and is the preferred way to connect to your Elasticsearch cluster from Java.

On the client machine, test port 9300 with telnet.
If it does not connect, the network path is not OK.
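The telnet check above can also be scripted. Here is a minimal sketch in Java (the class name `PortCheck` is my own; the host and port are the ones from this thread) that reports whether a plain TCP connection to the transport port succeeds within a timeout:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {

    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            // Connection refused, timed out, or host unresolved.
            return false;
        }
    }

    public static void main(String[] args) {
        // The transport port from this thread; prints true only if 9300 is reachable.
        System.out.println(PortCheck.isReachable("10.169.149.134", 9300, 3000));
    }
}
```

If this prints `false` while port 9200 is reachable, the transport port is most likely not published by the Docker container or is blocked by a firewall.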

Thank you @dadoonet for your response. What do you mean by the High Level REST Client? I am sorry, I am new to this.

Thanks
Pritom

You are right @zqc0512. The telnet test shows that the port is not accessible.

See https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/index.html

Thanks for the suggestion. I replaced the code with the High Level REST Client:

RestHighLevelClient client = new RestHighLevelClient(
        RestClient.builder(new HttpHost("10.169.149.134", 9200, "http"))
                .setRequestConfigCallback(new RestClientBuilder.RequestConfigCallback() {
                    @Override
                    public RequestConfig.Builder customizeRequestConfig(RequestConfig.Builder requestConfigBuilder) {
                        return requestConfigBuilder.setConnectTimeout(5000)
                                .setSocketTimeout(100000);
                    }
                })
                .setMaxRetryTimeoutMillis(100000));
IndexRequest indexRequest = new IndexRequest("event-index", "event-log")
        .source(builder);
IndexResponse indexResponse = client.index(indexRequest);

But this time I see the following error after inserting two records:

Exception in thread "main" ElasticsearchStatusException[Elasticsearch exception [type=unavailable_shards_exception, reason=[event-index][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[event-index][0]] containing [index {[event-index][event-log][SHo5bGQBSnIrxdlC0Dps], source[{"alert_type":"EVENTType1ALERT2","incident_category":"health","event_category":"heartbeat","component_category":"monstor_client","source_timestamp":"2018-07-05T20:54:45.594Z","source_eventtype":"EVENTType1","criticality":2,"dimensions.app_name":"APP10","dimensions.colo":"PHX","dimensions.monstor":true,"dimensions.host":"HOST1","dimensions.pool":"POOL3","dimensions.env":"ENV1"}]}]]]]
	at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:177)
	at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:653)
	at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:628)
	at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:535)
	at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:508)
	at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:348)
	at UMPHeartbeatGenerator.generateEvent(UMPHeartbeatGenerator.java:198)
	at UMPHeartbeatGenerator.main(UMPHeartbeatGenerator.java:75)
	Suppressed: org.elasticsearch.client.ResponseException: method [POST], host [http://10.169.149.134:9200], URI [/event-index/event-log?timeout=1m], status line [HTTP/1.1 503 Service Unavailable]
{"error":{"root_cause":[{"type":"unavailable_shards_exception","reason":"[event-index][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[event-index][0]] containing [index {[event-index][event-log][SHo5bGQBSnIrxdlC0Dps], source[{\"alert_type\":\"EVENTType1ALERT2\",\"incident_category\":\"health\",\"event_category\":\"heartbeat\",\"component_category\":\"monstor_client\",\"source_timestamp\":\"2018-07-05T20:54:45.594Z\",\"source_eventtype\":\"EVENTType1\",\"criticality\":2,\"dimensions.app_name\":\"APP10\",\"dimensions.colo\":\"PHX\",\"dimensions.monstor\":true,\"dimensions.host\":\"HOST1\",\"dimensions.pool\":\"POOL3\",\"dimensions.env\":\"ENV1\"}]}]]"}],"type":"unavailable_shards_exception","reason":"[event-index][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[event-index][0]] containing [index {[event-index][event-log][SHo5bGQBSnIrxdlC0Dps], source[{\"alert_type\":\"EVENTType1ALERT2\",\"incident_category\":\"health\",\"event_category\":\"heartbeat\",\"component_category\":\"monstor_client\",\"source_timestamp\":\"2018-07-05T20:54:45.594Z\",\"source_eventtype\":\"EVENTType1\",\"criticality\":2,\"dimensions.app_name\":\"APP10\",\"dimensions.colo\":\"PHX\",\"dimensions.monstor\":true,\"dimensions.host\":\"HOST1\",\"dimensions.pool\":\"POOL3\",\"dimensions.env\":\"ENV1\"}]}]]"},"status":503}
		at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:705)
		at org.elasticsearch.client.RestClient.performRequest(RestClient.java:235)
		at org.elasticsearch.client.RestClient.performRequest(RestClient.java:198)
		at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:522)
		... 4 more
	Caused by: org.elasticsearch.client.ResponseException: method [POST], host [http://10.169.149.134:9200], URI [/event-index/event-log?timeout=1m], status line [HTTP/1.1 503 Service Unavailable]
		at org.elasticsearch.client.RestClient$1.completed(RestClient.java:377)
		at org.elasticsearch.client.RestClient$1.completed(RestClient.java:366)
		at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:119)
		at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:177)
		at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436)
		at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:326)
		at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
		at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
		at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
		at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
		at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
		at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
		at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
		at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
		at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
		at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588)
		at java.lang.Thread.run(Thread.java:748)

When I try the same code against my locally running Elasticsearch, everything works fine.

The main difference between running locally and running from another machine is the network.

Can you share your Elasticsearch logs (for the remote server)?
Is there any proxy or firewall in the middle?
Does curl http://10.169.149.134:9200 work from the machine the code is running on?
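For the unavailable_shards_exception above, a few standard diagnostic calls against the cluster can narrow things down (sketched here with the host and index name from this thread):

```shell
# Overall cluster health: a "red" status means at least one primary shard
# (such as [event-index][0]) is unassigned.
curl http://10.169.149.134:9200/_cluster/health?pretty

# Per-shard view of the index that failed to accept writes.
curl http://10.169.149.134:9200/_cat/shards/event-index?v

# Ask Elasticsearch why a shard is unassigned.
curl http://10.169.149.134:9200/_cluster/allocation/explain?pretty
```

The allocation explain output usually states directly why the primary of `[event-index][0]` cannot be assigned (e.g. a node left the cluster or the disk watermark was exceeded).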

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.