Connecting to remote server


I am new to Elasticsearch. I installed Elasticsearch and Kibana on my local machine, and all my operations work properly there. Now I want to access a remote machine/server on which the Elasticsearch server is running, but I am getting an exception - NoNodeAvailableException. I don't know what's wrong. Can you please let me know what changes need to be made in the elasticsearch.yml file on the remote machine where the ES server is running? If any other updates are required, please let me know as well.



You should read that part of the documentation:

Specifically for your question:

Are you trying to use the Java TransportClient by any chance?


Thank you for your response.
No, I am using the Java client instead.

Whenever I make changes in the yml file, the Elasticsearch server throws an exception.

Please don't post images of text as they are hard to read, may not display correctly for everyone, and are not searchable.

Instead, paste the text and format it with </> icon or pairs of triple backticks (```), and check the preview window to make sure it's properly formatted before posting it. This makes it more likely that your question will receive a useful answer.

It would be great if you could update your post to solve this.

Please share your elasticsearch.yml file as well.

Following is the exception that occurred:

[2019-10-01T06:24:04,048][INFO ][o.e.n.Node               ] [ip-172-22-31-100.ap-south-1.compute.internal] initialized
[2019-10-01T06:24:04,048][INFO ][o.e.n.Node               ] [ip-172-22-31-100.ap-south-1.compute.internal] starting ...
[2019-10-01T06:24:04,169][INFO ][o.e.t.TransportService   ] [ip-172-22-31-100.ap-south-1.compute.internal] publish_address {}, bound_addresses {}
[2019-10-01T06:24:04,194][WARN ][o.e.b.BootstrapChecks    ] [ip-172-22-31-100.ap-south-1.compute.internal] the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
[2019-10-01T06:24:04,198][INFO ][o.e.c.c.Coordinator      ] [ip-172-22-31-100.ap-south-1.compute.internal] cluster UUID [JzsKTGz2T_698rR7Fnx-Ww]
[2019-10-01T06:24:04,206][INFO ][o.e.c.c.ClusterBootstrapService] [ip-172-22-31-100.ap-south-1.compute.internal] no discovery configuration found, will perform best-effort cluster bootstrapping after [3s] unless existing master is discovered
[2019-10-01T06:24:04,298][INFO ][o.e.c.s.MasterService    ] [ip-172-22-31-100.ap-south-1.compute.internal] elected-as-master ([1] nodes joined)[{ip-172-22-31-100.ap-south-1.compute.internal}{HlXc_sI8Sm-Q5nPo4bYbog}{W2oepLs1R1S1Dgg6QqIQIQ}{}{}{dim}{ml.machine_memory=66714533888, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 9, version: 78, reason: master node changed {previous [], current [{ip-172-22-31-100.ap-south-1.compute.internal}{HlXc_sI8Sm-Q5nPo4bYbog}{W2oepLs1R1S1Dgg6QqIQIQ}{}{}{dim}{ml.machine_memory=66714533888, xpack.installed=true, ml.max_open_jobs=20}]}
[2019-10-01T06:24:04,362][INFO ][o.e.c.s.ClusterApplierService] [ip-172-22-31-100.ap-south-1.compute.internal] master node changed {previous [], current [{ip-172-22-31-100.ap-south-1.compute.internal}{HlXc_sI8Sm-Q5nPo4bYbog}{W2oepLs1R1S1Dgg6QqIQIQ}{}{}{dim}{ml.machine_memory=66714533888, xpack.installed=true, ml.max_open_jobs=20}]}, term: 9, version: 78, reason: Publication{term=9, version=78}
[2019-10-01T06:24:04,433][INFO ][o.e.h.AbstractHttpServerTransport] [ip-172-22-31-100.ap-south-1.compute.internal] publish_address {}, bound_addresses {}
[2019-10-01T06:24:04,433][INFO ][o.e.n.Node               ] [ip-172-22-31-100.ap-south-1.compute.internal] started
[2019-10-01T06:24:04,646][INFO ][o.e.l.LicenseService     ] [ip-172-22-31-100.ap-south-1.compute.internal] license [6776d5be-777b-40f4-84ea-b2986bf9de6c] mode [basic] - valid
[2019-10-01T06:24:04,647][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [ip-172-22-31-100.ap-south-1.compute.internal] Active license is now [BASIC]; Security is disabled
[2019-10-01T06:24:04,654][INFO ][o.e.g.GatewayService     ] [ip-172-22-31-100.ap-south-1.compute.internal] recovered [3] indices into cluster_state
[2019-10-01T06:24:05,015][INFO ][o.e.c.r.a.AllocationService] [ip-172-22-31-100.ap-south-1.compute.internal] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana_1][0], [.kibana_task_manager][0]] ...]).
[2019-10-01T06:24:05,728][INFO ][o.e.c.m.MetaDataIndexTemplateService] [ip-172-22-31-100.ap-south-1.compute.internal] adding template [.management-beats] for index patterns [.management-beats]
[2019-10-01T11:06:27,420][INFO ][o.e.n.Node               ] [ip-172-22-31-100.ap-south-1.compute.internal] stopping ...
[2019-10-01T11:06:27,434][INFO ][o.e.x.w.WatcherService   ] [ip-172-22-31-100.ap-south-1.compute.internal] stopping watch service, reason [shutdown initiated]
[2019-10-01T11:06:27,851][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [ip-172-22-31-100.ap-south-1.compute.internal] [controller/3436] [] Ml controller exiting
[2019-10-01T11:06:27,852][INFO ][o.e.x.m.p.NativeController] [ip-172-22-31-100.ap-south-1.compute.internal] Native controller process has stopped - no new native processes can be started
[2019-10-01T11:06:27,909][INFO ][o.e.n.Node               ] [ip-172-22-31-100.ap-south-1.compute.internal] stopped
[2019-10-01T11:06:27,909][INFO ][o.e.n.Node               ] [ip-172-22-31-100.ap-south-1.compute.internal] closing ...
[2019-10-01T11:06:27,923][INFO ][o.e.n.Node               ] [ip-172-22-31-100.ap-south-1.compute.internal] closed

Following is the content of the elasticsearch.yml file:

# ======================== Elasticsearch Configuration =========================
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
# Please consult the documentation for further information on configuration options:
# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
#cluster.name: my-application
cluster.name: dspimes
# ------------------------------------ Node ------------------------------------
# Use a descriptive name for the node:
#node.name: node-1
# Add custom attributes to the node:
#node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /var/lib/elasticsearch
# Path to log files:
path.logs: /var/log/elasticsearch
# ----------------------------------- Memory -----------------------------------
# Lock the memory on startup:
#bootstrap.memory_lock: true
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
# Elasticsearch performs poorly when the system is swapping the memory.
# ---------------------------------- Network -----------------------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
# Set a custom port for HTTP:
http.port: 9300
#http.port: 9200
# For more information, consult the network module documentation.
# --------------------------------- Discovery ----------------------------------
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#discovery.seed_hosts: ["host1", "host2"]
# Bootstrap the cluster using an initial set of master-eligible nodes:
#cluster.initial_master_nodes: ["node-1", "node-2"]
# For more information, consult the discovery and cluster formation module documentation.
# ---------------------------------- Gateway -----------------------------------
# Block initial recovery after a full cluster restart until N nodes are started:
#gateway.recover_after_nodes: 3
# For more information, consult the gateway module documentation.
# ---------------------------------- Various -----------------------------------
# Require explicit names when deleting indices:
#action.destructive_requires_name: true

#------------------------------ Manually added parameters ----------------------------------
#transport.tcp.port: 9300
# Set to true to enable Elasticsearch security features on the node.
# Set this to false to disable support for the default "changeme" password.
# The username (principal) of the anonymous user. Defaults to _es_anonymous_user.
#ssl.certificate: /postgres/ebiz_pg_data/dspim.crt
# Path to a PEM encoded file containing the certificate (or certificate chain) that will be presented to clients when they connect.
#ssl.verification_mode: certificate
# Used to enable or disable TLS/SSL. The default is false.

Please format your code, logs, or configuration files using the </> icon as explained in this guide, and not the citation button. It will make your post more readable.

Or use markdown style like:


This is the icon to use if you are not using markdown format:

There's a live preview panel for exactly this reason.

Lots of people read these forums, and many of them will simply skip over a post that is difficult to read, because it's just too large an investment of their time to try and follow a wall of badly formatted text.
If your goal is to get an answer to your questions, it's in your interest to make it as easy to read and understand as possible.

I did it for you this time, but please do it yourself next time.

Why did you change this?

http.port: 9300

Yes, sure.

I saw a suggestion on Stack Overflow to set http.port: 9300.

But I tried 9200 as well, and it still isn't working.

Keep the default value; just change it back for now,
and share the logs you get when doing that.

After changing it back, the server is working properly, i.e., I'm getting no error logs.
But I am getting an exception while retrieving data that I inserted through Kibana.

Here's my code

QueryBuilder query = matchAllQuery();
SearchHit[] hits = client.prepareSearch("demo1").setQuery(query).get().getHits().getHits();

Here's my exception

Exception in retrieve : NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{-XxHPfRAT02uAvvIlo6rQQ}{}{}]]
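For context, the `{#transport#-1}` in that message comes from the Java TransportClient, which connects over the transport protocol on port 9300 (not the HTTP port 9200) and must be given the server's address and the server's cluster name. A minimal sketch, assuming an Elasticsearch 7.x-era PreBuiltTransportClient; `remote-host` and `my-cluster` are placeholders, not values from this thread:

```java
import java.net.InetAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

public class EsClientSketch {
    public static TransportClient buildClient() throws Exception {
        // cluster.name must match the cluster.name in the server's elasticsearch.yml;
        // a mismatch makes the client reject the node and throw NoNodeAvailableException.
        Settings settings = Settings.builder()
                .put("cluster.name", "my-cluster") // placeholder: your server's cluster name
                .build();
        return new PreBuiltTransportClient(settings)
                // 9300 is the transport port; do not point this at http.port (9200)
                .addTransportAddress(new TransportAddress(
                        InetAddress.getByName("remote-host"), 9300)); // placeholder host
    }
}
```

If the client connects but still can't see the node, check that the server's publish address is reachable from your machine (i.e., the node is not bound only to localhost).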

If I understood your issue, you are trying to access ES from a remote machine. If so, please try the following configuration in your elasticsearch.yml config file:
discovery.seed_hosts: ["<server-host>"]
cluster.initial_master_nodes: ["node-1", "node-2"]

Then in your kibana.yml file set the following configuration:
elasticsearch.hosts: ["http://localhost:9200"]


Thanks for your response.
I updated the above-mentioned values in both yml files on the server machine, but the same exception still occurs on my machine.

When exactly did you get the exception? When running ES or when running Kibana?

Before doing the following, take a backup of the original elasticsearch.yml file.

Please make the following changes in elasticsearch.yml:

Under the Cluster section, add the following:
node.master: true (if this is a single-node cluster; in a multi-node cluster, add this to whichever node you want to be the master node)

Under the Node section:
set node.name to a name you prefer

Under the Network section:
use the default config; you don't even need to uncomment those lines.

Under the Discovery section, edit the values for the following:
discovery.seed_hosts: [ "hostname" ] (hostname is the server name)
cluster.initial_master_nodes: [ "nodename" ] (nodename is whatever name you gave under the Node section of elasticsearch.yml)

Make sure you restart Elasticsearch every time you make changes to elasticsearch.yml; it has to be restarted for the changes to take effect.
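Putting those steps together, the edited parts of elasticsearch.yml might look like this (a sketch; `my-es-host` and `node-1` are placeholder names, not values from this thread):

```yaml
# Cluster / Node sections
node.name: node-1          # a name you prefer
node.master: true          # single-node cluster: this node is master-eligible

# Discovery section
discovery.seed_hosts: [ "my-es-host" ]         # the server's hostname
cluster.initial_master_nodes: [ "node-1" ]     # must match node.name above
```
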

ES and Kibana are running properly on the server machine. The exception occurs on my machine while executing the code below.

QueryBuilder query = matchAllQuery();

SearchHit[] hits = client.prepareSearch("demo1").setQuery(query).get().getHits().getHits();

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.