How to access Elasticsearch on a remote machine

I am using Elasticsearch/Kibana on an Azure VM. While it works perfectly fine there, I run into a problem when I try to access it from my remote machine.

I have a Python script that pushes data to Elasticsearch, and it fails when connecting to port 9200.

As per a few articles available here and elsewhere, I even set network.host: 0.0.0.0 and restarted Elasticsearch, with no luck.

It might be a firewall or a network security group (NSG) rule that is part of the Azure setup, so that would be the best place to start.
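Before touching any Elasticsearch settings, it can help to confirm whether port 9200 is reachable at all from the remote machine; if a plain TCP connection fails, the problem is the network path (firewall/NSG), not the client code. A minimal sketch (the hostname in the comment is a placeholder for your VM's public IP or DNS name):

```python
import socket

def port_is_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, unreachable, or timed out
        return False

# Replace with your VM's public IP or DNS name (placeholder):
# print(port_is_open("my-azure-vm.example.com", 9200))
```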

I was able to do that previously, i.e. connect from a remote machine. I updated Elasticsearch to a newer version, and the problem has persisted since then. My firewalls are off as well.

What do the Elasticsearch logs say, please?

```
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [aggs-matrix-stats]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [analysis-common]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [ingest-common]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [ingest-geoip]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [ingest-user-agent]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [lang-expression]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [lang-mustache]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [lang-painless]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [mapper-extras]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [parent-join]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [percolator]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [rank-eval]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [reindex]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [repository-url]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [transport-netty4]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [x-pack-ccr]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [x-pack-core]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [x-pack-deprecation]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [x-pack-graph]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [x-pack-ilm]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [x-pack-logstash]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [x-pack-ml]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [x-pack-monitoring]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [x-pack-rollup]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [x-pack-security]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [x-pack-sql]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] loaded module [x-pack-watcher]
[2019-05-22T08:00:24,136][INFO ][o.e.p.PluginsService     ] [nhazvbelk01] no plugins loaded
[2019-05-22T08:00:30,277][INFO ][o.e.x.s.a.s.FileRolesStore] [nhazvbelk01] parsed [0] roles from file [C:\elasticsearch\config\roles.yml]
[2019-05-22T08:00:31,949][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [nhazvbelk01] [controller/4572] [Main.cc@109] controller (64 bit): Version 7.0.1 (Build 6a88928693d862) Copyright (c) 2019 Elasticsearch BV
[2019-05-22T08:00:32,949][DEBUG][o.e.a.ActionModule       ] [nhazvbelk01] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-05-22T08:00:33,402][INFO ][o.e.d.DiscoveryModule    ] [nhazvbelk01] using discovery type [zen] and seed hosts providers [settings]
[2019-05-22T08:00:34,542][INFO ][o.e.n.Node               ] [nhazvbelk01] initialized
[2019-05-22T08:00:34,542][INFO ][o.e.n.Node               ] [nhazvbelk01] starting ...
[2019-05-22T08:00:35,246][INFO ][o.e.t.TransportService   ] [nhazvbelk01] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2019-05-22T08:00:35,292][WARN ][o.e.b.BootstrapChecks    ] [nhazvbelk01] the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
[2019-05-22T08:00:35,324][INFO ][o.e.c.c.ClusterBootstrapService] [nhazvbelk01] no discovery configuration found, will perform best-effort cluster bootstrapping after [3s] unless existing master is discovered
[2019-05-22T08:00:35,496][INFO ][o.e.c.s.MasterService    ] [nhazvbelk01] elected-as-master ([1] nodes joined)[{nhazvbelk01}{f1haieaURM-OOpFarCLuIQ}{KwMuCOy1TlOEFm-_QeFHJQ}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=34359267328, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 19, version: 192, reason: master node changed {previous [], current [{nhazvbelk01}{f1haieaURM-OOpFarCLuIQ}{KwMuCOy1TlOEFm-_QeFHJQ}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=34359267328, xpack.installed=true, ml.max_open_jobs=20}]}
[2019-05-22T08:00:35,777][INFO ][o.e.c.s.ClusterApplierService] [nhazvbelk01] master node changed {previous [], current [{nhazvbelk01}{f1haieaURM-OOpFarCLuIQ}{KwMuCOy1TlOEFm-_QeFHJQ}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=34359267328, xpack.installed=true, ml.max_open_jobs=20}]}, term: 19, version: 192, reason: Publication{term=19, version=192}
[2019-05-22T08:00:36,042][INFO ][o.e.h.AbstractHttpServerTransport] [nhazvbelk01] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2019-05-22T08:00:36,042][INFO ][o.e.n.Node               ] [nhazvbelk01] started
[2019-05-22T08:00:36,402][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [nhazvbelk01] Failed to clear cache for realms [[]]
[2019-05-22T08:00:36,464][INFO ][o.e.l.LicenseService     ] [nhazvbelk01] license [cc894d2f-2029-4f4f-b980-bc05d8f875d1] mode [basic] - valid
[2019-05-22T08:00:36,480][INFO ][o.e.g.GatewayService     ] [nhazvbelk01] recovered [3] indices into cluster_state
[2019-05-22T08:00:47,529][INFO ][o.e.c.r.a.AllocationService] [nhazvbelk01] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[nppes][0]] ...]).
[2019-05-22T08:00:50,227][INFO ][o.e.c.m.MetaDataIndexTemplateService] [nhazvbelk01] adding template [.management-beats] for index patterns [.management-beats]
[2019-05-22T12:32:47,372][INFO ][o.e.c.m.MetaDataIndexTemplateService] [nhazvbelk01] adding template [.management-beats] for index patterns [.management-beats]
[2019-05-22T12:39:26,811][INFO ][o.e.c.m.MetaDataIndexTemplateService] [nhazvbelk01] adding template [.management-beats] for index patterns [.management-beats]
```

Am I doing something wrong here?

Please format your code, logs, or configuration files using the </> icon, as explained in this guide, and not the citation button. It will make your post more readable.

Or use markdown style like:

```
CODE
```

If you are not using markdown format, use the </> icon in the editor toolbar instead.

There's a live preview panel for exactly this reason.

Lots of people read these forums, and many of them will simply skip over a post that is difficult to read, because it's too large an investment of their time to follow a wall of badly formatted text.
If your goal is to get an answer to your questions, it's in your interest to make your post as easy to read and understand as possible.

I updated your post, but please keep it in mind next time.

The problem is here:

```
[2019-05-22T08:00:36,042][INFO ][o.e.h.AbstractHttpServerTransport] [nhazvbelk01] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
```

It's only listening on 127.0.0.1, which is accessible only from the machine itself.
You need to change network.host to the IP address of the network interface you want to expose, I would guess.
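The loopback-only binding can be spotted mechanically in the log line above. As an illustration (a hypothetical helper, not part of any Elasticsearch tooling), a small parser can extract the publish_address and flag a node that is unreachable from other machines:

```python
import re

def http_publish_address(log_line):
    """Extract the host:port inside 'publish_address {...}' from an ES log line."""
    m = re.search(r"publish_address \{([^}]+)\}", log_line)
    return m.group(1) if m else None

def loopback_only(log_line):
    """True if the node publishes only a loopback address (unreachable remotely)."""
    addr = http_publish_address(log_line)
    return addr is not None and addr.startswith(("127.", "[::1]"))
```

Running `loopback_only` on the AbstractHttpServerTransport line above returns True, which is exactly the symptom here.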

I did that. No luck! I have attached a screenshot of my .yml file. Is something wrong there?

Can you share the logs after you did that?

```
[2019-05-23T10:02:29,196][INFO ][o.e.x.s.a.s.FileRolesStore] [nhazvbelk01] parsed [0] roles from file [C:\elasticsearch\config\roles.yml]
[2019-05-23T10:02:30,157][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [nhazvbelk01] [controller/7664] [Main.cc@109] controller (64 bit): Version 7.1.0 (Build a8ee6de8087169) Copyright (c) 2019 Elasticsearch BV
[2019-05-23T10:02:30,657][DEBUG][o.e.a.ActionModule       ] [nhazvbelk01] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-05-23T10:02:30,954][INFO ][o.e.d.DiscoveryModule    ] [nhazvbelk01] using discovery type [zen] and seed hosts providers [settings]
[2019-05-23T10:02:31,798][INFO ][o.e.n.Node               ] [nhazvbelk01] initialized
[2019-05-23T10:02:31,798][INFO ][o.e.n.Node               ] [nhazvbelk01] starting ...
[2019-05-23T10:02:32,126][INFO ][o.e.t.TransportService   ] [nhazvbelk01] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2019-05-23T10:02:32,142][WARN ][o.e.b.BootstrapChecks    ] [nhazvbelk01] the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
[2019-05-23T10:02:32,157][INFO ][o.e.c.c.ClusterBootstrapService] [nhazvbelk01] no discovery configuration found, will perform best-effort cluster bootstrapping after [3s] unless existing master is discovered
[2019-05-23T10:02:35,173][INFO ][o.e.c.c.Coordinator      ] [nhazvbelk01] setting initial configuration to VotingConfiguration{3K_AK9JRQrCdK1IaN2xfUA}
[2019-05-23T10:02:35,298][INFO ][o.e.c.s.MasterService    ] [nhazvbelk01] elected-as-master ([1] nodes joined)[{nhazvbelk01}{3K_AK9JRQrCdK1IaN2xfUA}{Vkpvs-MfQDuktp8Q85-w-Q}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=34359267328, xpack.installed=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, reason: master node changed {previous [], current [{nhazvbelk01}{3K_AK9JRQrCdK1IaN2xfUA}{Vkpvs-MfQDuktp8Q85-w-Q}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=34359267328, xpack.installed=true, ml.max_open_jobs=20}]}
[2019-05-23T10:02:35,345][INFO ][o.e.c.c.CoordinationState] [nhazvbelk01] cluster UUID set to [Pl7JX4T4Qp-0pI8ixfIdcw]
[2019-05-23T10:02:35,376][INFO ][o.e.c.s.ClusterApplierService] [nhazvbelk01] master node changed {previous [], current [{nhazvbelk01}{3K_AK9JRQrCdK1IaN2xfUA}{Vkpvs-MfQDuktp8Q85-w-Q}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=34359267328, xpack.installed=true, ml.max_open_jobs=20}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
[2019-05-23T10:02:35,439][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [nhazvbelk01] Failed to clear cache for realms [[]]
[2019-05-23T10:02:35,501][INFO ][o.e.g.GatewayService     ] [nhazvbelk01] recovered [0] indices into cluster_state
[2019-05-23T10:02:35,595][INFO ][o.e.h.AbstractHttpServerTransport] [nhazvbelk01] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2019-05-23T10:02:35,595][INFO ][o.e.n.Node               ] [nhazvbelk01] started
[2019-05-23T10:02:35,673][INFO ][o.e.c.m.MetaDataIndexTemplateService] [nhazvbelk01] adding template [.watches] for index patterns [.watches*]
[2019-05-23T10:02:35,720][INFO ][o.e.c.m.MetaDataIndexTemplateService] [nhazvbelk01] adding template [.triggered_watches] for index patterns [.triggered_watches*]
[2019-05-23T10:02:35,798][INFO ][o.e.c.m.MetaDataIndexTemplateService] [nhazvbelk01] adding template [.watch-history-9] for index patterns [.watcher-history-9*]
[2019-05-23T10:02:35,845][INFO ][o.e.c.m.MetaDataIndexTemplateService] [nhazvbelk01] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-7-*]
[2019-05-23T10:02:35,907][INFO ][o.e.c.m.MetaDataIndexTemplateService] [nhazvbelk01] adding template [.monitoring-es] for index patterns [.monitoring-es-7-*]
[2019-05-23T10:02:35,970][INFO ][o.e.c.m.MetaDataIndexTemplateService] [nhazvbelk01] adding template [.monitoring-beats] for index patterns [.monitoring-beats-7-*]
[2019-05-23T10:02:36,017][INFO ][o.e.c.m.MetaDataIndexTemplateService] [nhazvbelk01] adding template [.monitoring-alerts-7] for index patterns [.monitoring-alerts-7]
[2019-05-23T10:02:36,095][INFO ][o.e.c.m.MetaDataIndexTemplateService] [nhazvbelk01] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-7-*]
[2019-05-23T10:02:36,142][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [nhazvbelk01] adding index lifecycle policy [watch-history-ilm-policy]
[2019-05-23T10:02:36,340][INFO ][o.e.l.LicenseService     ] [nhazvbelk01] license [dc01db75-f8f7-4ab9-a7dd-b63f9a8d15cc] mode [basic] - valid
[2019-05-23T10:03:29,543][INFO ][o.e.b.Bootstrap          ] [nhazvbelk01] running graceful exit on windows
[2019-05-23T10:03:29,543][INFO ][o.e.n.Node               ] [nhazvbelk01] stopping ...
[2019-05-23T10:03:29,559][INFO ][o.e.x.w.WatcherService   ] [nhazvbelk01] stopping watch service, reason [shutdown initiated]
[2019-05-23T10:03:29,575][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [nhazvbelk01] [controller/7664] [Main.cc@148] Ml controller exiting
[2019-05-23T10:03:29,575][INFO ][o.e.x.m.p.NativeController] [nhazvbelk01] Native controller process has stopped - no new native processes can be started
[2019-05-23T10:03:29,575][INFO ][o.e.n.Node               ] [nhazvbelk01] stopped
[2019-05-23T10:03:29,575][INFO ][o.e.n.Node               ] [nhazvbelk01] closing ...
[2019-05-23T10:03:29,590][INFO ][o.e.n.Node               ] [nhazvbelk01] closed
```

It doesn't look like you have set network.host properly. Perhaps you can post your config instead of a screenshot of it.

```
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
```

This is what my .yml file looks like. I am fairly new to Elasticsearch, so pardon me if I am making any obvious mistakes. I tried these changes after referring to some articles on this forum.

As I said, don't use 0.0.0.0; use the IP address of your network card instead.

I ran ipconfig on my remote machine and got my IPv4 address as 10.111.158.61. I changed my config to:

```
network.host: 10.111.158.61
http.port: 9200
```

Still no luck...

```
[2019-05-23T10:58:03,821][INFO ][o.e.n.Node               ] [nhazvbelk01] initialized
[2019-05-23T10:58:03,821][INFO ][o.e.n.Node               ] [nhazvbelk01] starting ...
[2019-05-23T10:58:04,149][ERROR][o.e.b.Bootstrap          ] [nhazvbelk01] Exception
org.elasticsearch.transport.BindTransportException: Failed to bind to [9300-9400]
	at org.elasticsearch.transport.TcpTransport.bindToPort(TcpTransport.java:408) ~
Caused by: java.net.BindException: Cannot assign requested address: bind
	at sun.nio.ch.Net.bind0(Native Method) ~[?:?]
	at sun.nio.ch.Net.bind(Unknown Source) ~[?:?]
	at sun.nio.ch.Net.bind(Unknown Source) ~[?:?]
	at sun.nio.ch.ServerSocketChannelImpl.bind(Unknown Source) ~[?:?]
	at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:130) ~[?:?]
	at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:562) ~[?:?]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1358) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:501) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:486) ~[?:?]
	at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:1019) ~[?:?]
	at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:258) ~[?:?]
	at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:366) ~[?:?]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) ~[?:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:474) ~[?:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909) ~[?:?]
	at java.lang.Thread.run(Unknown Source) [?:1.8.0_211]
[2019-05-23T10:58:04,868][INFO ][o.e.n.Node               ] [nhazvbelk01] stopping ...
[2019-05-23T10:58:04,868][INFO ][o.e.n.Node               ] [nhazvbelk01] stopped
[2019-05-23T10:58:04,868][INFO ][o.e.n.Node               ] [nhazvbelk01] closing ...
[2019-05-23T10:58:04,884][INFO ][o.e.n.Node               ] [nhazvbelk01] closed
```
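`Cannot assign requested address` means the OS refused the bind because the requested IP is not configured on any local interface. Since 10.111.158.61 came from running ipconfig on the remote client machine, it is most likely not an address the VM itself owns; network.host must be an address assigned to the VM. A minimal Python sketch reproduces the same error class (192.0.2.1 is a documentation-only address that no real host should have):

```python
import socket

def can_bind(ip, port=0):
    """Try to bind a TCP socket to ip:port; return True on success."""
    s = socket.socket()
    try:
        s.bind((ip, port))
        return True
    except OSError:  # e.g. EADDRNOTAVAIL: "Cannot assign requested address"
        return False
    finally:
        s.close()

# A machine can bind only its own addresses:
# can_bind("127.0.0.1")  -> True everywhere
# can_bind("192.0.2.1")  -> False on any host that doesn't own that address
```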

The problem is solved.

I passed an initial list of hosts to perform discovery when this node is started.

I added this to my elasticsearch.yml file:

```
discovery.seed_hosts: ["0.0.0.0", "[::]"]
```
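Once the node binds to a non-loopback address, remote access can be verified from the client side with a plain HTTP request; the Elasticsearch root endpoint returns a small JSON document that includes the version. A minimal sketch using only the standard library (the URL in the comment is a placeholder for your VM's address):

```python
import json
from urllib.request import urlopen

def es_root(base_url, timeout=5.0):
    """Fetch the Elasticsearch root endpoint and return the parsed JSON body."""
    with urlopen(base_url, timeout=timeout) as resp:
        return json.load(resp)

# Placeholder host; substitute your VM's IP or DNS name:
# info = es_root("http://<vm-ip>:9200")
# print(info["version"]["number"])
```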
