How to enable remote publish on an Elasticsearch 5.2 installation?

Hi,

I have one server with MySQL and Logstash 5.2 installed, and another server on which Kibana and Elasticsearch 5.2 have been installed.

The problem I am facing right now is that Logstash fails to create indices or update them.

I am using the following settings in elasticsearch.yml:

cluster.name: mycluster
node.name:  testnode1
bootstrap.memory_lock: true
network.host: locahost
http.port: 9200

Settings in logstash.conf

input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/edxapp"
    jdbc_user => "mysqluser"
    jdbc_password => "mysqlpass"
    jdbc_driver_library => "/home/ubuntu/mysql-connector-java-5.1.40-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * from test_table"
    add_field => {"type" => "index-test"}
    schedule => "* * * * *"
    jdbc_paging_enabled => "true"
  }
}
filter {
  if [type] == "index-test" {
    grok {
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
      add_field => [ "index", "tta_mobile_analytics" ]

    }
  }
  uuid {
    target => "@uuid"
    overwrite => true
  }
  fingerprint {
    source => ["created"]
    target => "fingerprint"
    key => "78787878"
    method => "SHA1"
    concatenate_sources => true
  }
}
output {
  if [type] == "index-test" {
    elasticsearch {
      hosts => ["ES_IP:9200"]
      index => "index-test"
      document_type => "%{type}"
      document_id => "%{created}"
    }
  }
}

When I tested it, the configuration was OK, but Logstash fails to connect to Elasticsearch.
Is there any setting I could be missing?

locahost? I guess it's a typo, which means you did not really copy and paste your settings, right?

In any case, if you bound to localhost then you need to use

hosts => ["localhost:9200"]

@dadoonet Yes, it's a typo.

In which file should I put this setting?

Logstash is installed on another server, so in the output config I'll have to give the Elasticsearch server's IP address, like below, right?

hosts =>  ["ES_IP:9200"]

BTW, the Elasticsearch status is yellow. I have gone through the documentation and tutorials but I couldn't find anything to turn that status green.
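On a single node, yellow usually just means the replica shards of your indices have nowhere to be allocated. A quick sketch of dropping replicas to zero on the existing indices, assuming x.x.x.x stands for the Elasticsearch host and that having no replicas is acceptable on a one-node cluster:

curl -XPUT 'x.x.x.x:9200/_all/_settings' -H 'Content-Type: application/json' -d '{"index": {"number_of_replicas": 0}}'

As for the connection problem: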

Read https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#network.host
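In short, network.host needs to be an address that actually exists on the Elasticsearch server and that the Logstash machine can reach. A minimal sketch, with x.x.x.x as a placeholder for the EC2 private IP (per that page, the special value _site_ also resolves to a site-local address):

network.host: x.x.x.x

network.host sets the bind and publish addresses together, so the separate http.* settings are normally not needed.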

Following is the log for Elasticsearch:

[2017-03-20T07:11:05,114][INFO ][o.e.n.Node               ] [node-tta] stopping ...
[2017-03-20T07:11:05,141][INFO ][o.e.n.Node               ] [node-tta] stopped
[2017-03-20T07:11:05,141][INFO ][o.e.n.Node               ] [node-tta] closing ...
[2017-03-20T07:11:05,148][INFO ][o.e.n.Node               ] [node-tta] closed
[2017-03-20T07:11:06,327][WARN ][o.e.b.JNANatives         ] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2017-03-20T07:11:06,328][WARN ][o.e.b.JNANatives         ] This can result in part of the JVM being swapped out.
[2017-03-20T07:11:06,328][WARN ][o.e.b.JNANatives         ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2017-03-20T07:11:06,328][WARN ][o.e.b.JNANatives         ] These can be adjusted by modifying /etc/security/limits.conf, for example: 
	# allow user 'elasticsearch' mlockall
	elasticsearch soft memlock unlimited
	elasticsearch hard memlock unlimited
[2017-03-20T07:11:06,328][WARN ][o.e.b.JNANatives         ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2017-03-20T07:11:06,410][INFO ][o.e.n.Node               ] [node-tta] initializing ...
[2017-03-20T07:11:06,488][INFO ][o.e.e.NodeEnvironment    ] [node-tta] using [1] data paths, mounts [[/ (/dev/xvda1)]], net usable_space [21.2gb], net total_space [29.3gb], spins? [no], types [ext4]
[2017-03-20T07:11:06,489][INFO ][o.e.e.NodeEnvironment    ] [node-tta] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-03-20T07:11:06,493][INFO ][o.e.n.Node               ] [node-tta] node name [node-tta], node ID [ubat_-tiS5q7E2y8yCZysQ]
[2017-03-20T07:11:06,495][INFO ][o.e.n.Node               ] [node-tta] version[5.2.2], pid[3546], build[f9d9b74/2017-02-24T17:26:45.835Z], OS[Linux/3.13.0-105-generic/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_121/25.121-b13]
[2017-03-20T07:11:07,281][INFO ][o.e.p.PluginsService     ] [node-tta] loaded module [aggs-matrix-stats]
[2017-03-20T07:11:07,281][INFO ][o.e.p.PluginsService     ] [node-tta] loaded module [ingest-common]
[2017-03-20T07:11:07,281][INFO ][o.e.p.PluginsService     ] [node-tta] loaded module [lang-expression]
[2017-03-20T07:11:07,281][INFO ][o.e.p.PluginsService     ] [node-tta] loaded module [lang-groovy]
[2017-03-20T07:11:07,281][INFO ][o.e.p.PluginsService     ] [node-tta] loaded module [lang-mustache]
[2017-03-20T07:11:07,281][INFO ][o.e.p.PluginsService     ] [node-tta] loaded module [lang-painless]
[2017-03-20T07:11:07,282][INFO ][o.e.p.PluginsService     ] [node-tta] loaded module [percolator]
[2017-03-20T07:11:07,282][INFO ][o.e.p.PluginsService     ] [node-tta] loaded module [reindex]
[2017-03-20T07:11:07,282][INFO ][o.e.p.PluginsService     ] [node-tta] loaded module [transport-netty3]
[2017-03-20T07:11:07,282][INFO ][o.e.p.PluginsService     ] [node-tta] loaded module [transport-netty4]
[2017-03-20T07:11:07,283][INFO ][o.e.p.PluginsService     ] [node-tta] no plugins loaded
[2017-03-20T07:11:09,579][INFO ][o.e.n.Node               ] [node-tta] initialized
[2017-03-20T07:11:09,579][INFO ][o.e.n.Node               ] [node-tta] starting ...
[2017-03-20T07:11:09,657][WARN ][i.n.u.i.MacAddressUtil   ] Failed to find a usable hardware address from the network interfaces; using random bytes: 0a:41:fa:cb:0d:22:65:6b
[2017-03-20T07:11:09,721][INFO ][o.e.t.TransportService   ] [node-tta] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2017-03-20T07:11:09,729][WARN ][o.e.b.BootstrapChecks    ] [node-tta] memory locking requested for elasticsearch process but memory is not locked
[2017-03-20T07:11:12,783][INFO ][o.e.c.s.ClusterService   ] [node-tta] new_master {node-tta}{ubat_-tiS5q7E2y8yCZysQ}{irqomIONRPW2CQAyVyR9lQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-03-20T07:11:12,825][INFO ][o.e.h.HttpServer         ] [node-tta] publish_address {x.x.x.x:9200}, bound_addresses {[::1]:9200}
[2017-03-20T07:11:12,825][INFO ][o.e.n.Node               ] [node-tta] started
[2017-03-20T07:11:12,962][INFO ][o.e.g.GatewayService     ] [node-tta] recovered [1] indices into cluster_state
[2017-03-20T07:11:13,706][INFO ][o.e.c.r.a.AllocationService] [node-tta] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
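(Side note: the memlock warnings at the top of that log mean bootstrap.memory_lock: true is set but the process is not allowed to lock memory. Besides the limits.conf lines the log itself suggests, if the node happens to run under systemd (an assumption; it may not on this distribution) the usual fix is an override file such as /etc/systemd/system/elasticsearch.service.d/override.conf containing:

[Service]
LimitMEMLOCK=infinity

followed by systemctl daemon-reload and a service restart.)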

Can you run:

curl x.x.x.x:9200

from your Logstash machine?

It gives the following error:

curl: (7) Failed to connect to x.x.x.x port 9200: Connection refused
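"Connection refused" from a remote machine while the node is up usually means it is only listening on the loopback address. A quick way to confirm, run on the Elasticsearch server itself (assuming ss from iproute2 is available; netstat -tlnp works as well):

ss -tlnp | grep 9200

If that only shows 127.0.0.1:9200 or [::1]:9200, it matches the bound_addresses in your earlier log.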

Did you set network.host: x.x.x.x in elasticsearch.yml?

Yes, I did, but like this:

http.bind_host: ::1
http.publish_host: x.x.x.x

Why this?

Because Logstash is on a separate server, and for it to index the logs into Elasticsearch, Elasticsearch should have a non-loopback address, right?

My question was more: "why did you choose to have different addresses for bind and publish?"

But FWIW, bind should use the right IP.
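For example, keeping your two settings but making bind use an address that actually exists on the machine (x.x.x.x again standing for the private IP):

http.bind_host: x.x.x.x
http.publish_host: x.x.x.x

or just the single setting network.host: x.x.x.x, which covers both.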

I was thinking the same thing, and I did change both of them to the same IP, but the problem persists.

Can you share the logs after you did that change?

Caused by: java.net.BindException: Cannot assign requested address
	at sun.nio.ch.Net.bind0(Native Method) ~[?:?]
	at sun.nio.ch.Net.bind(Net.java:433) ~[?:?]
	at sun.nio.ch.Net.bind(Net.java:425) ~[?:?]
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) ~[?:?]
	at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:127) ~[?:?]
	at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:554) ~[?:?]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1258) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:502) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:487) ~[?:?]
	at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:980) ~[?:?]
	at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:250) ~[?:?]
	at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:363) ~[?:?]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) ~[?:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:445) ~[?:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[?:?]
	at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_121]
[2017-03-21T05:09:24,506][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-tta] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: BindHttpException[Failed to bind to [9200]]; nested: BindException[Cannot assign requested address];
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-5.2.2.jar:5.2.2]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-5.2.2.jar:5.2.2]
	at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) ~[elasticsearch-5.2.2.jar:5.2.2]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.2.2.jar:5.2.2]
	at org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.2.2.jar:5.2.2]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) ~[elasticsearch-5.2.2.jar:5.2.2]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) ~[elasticsearch-5.2.2.jar:5.2.2]
Caused by: org.elasticsearch.http.BindHttpException: Failed to bind to [9200]
	at org.elasticsearch.http.netty4.Netty4HttpServerTransport.bindAddress(Netty4HttpServerTransport.java:453) ~[?:?]
	at org.elasticsearch.http.netty4.Netty4HttpServerTransport.createBoundHttpAddress(Netty4HttpServerTransport.java:354) ~[?:?]
	at org.elasticsearch.http.netty4.Netty4HttpServerTransport.doStart(Netty4HttpServerTransport.java:334) ~[?:?]
	at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:69) ~[elasticsearch-5.2.2.jar:5.2.2]
	at org.elasticsearch.http.HttpServer.doStart(HttpServer.java:76) ~[elasticsearch-5.2.2.jar:5.2.2]
	at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:69) ~[elasticsearch-5.2.2.jar:5.2.2]
	at org.elasticsearch.node.Node.start(Node.java:643) ~[elasticsearch-5.2.2.jar:5.2.2]
	at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:261) ~[elasticsearch-5.2.2.jar:5.2.2]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:335) ~[elasticsearch-5.2.2.jar:5.2.2]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-5.2.2.jar:5.2.2]
	... 6 more
Caused by: java.net.BindException: Cannot assign requested address
	at sun.nio.ch.Net.bind0(Native Method) ~[?:?]
	at sun.nio.ch.Net.bind(Net.java:433) ~[?:?]
	at sun.nio.ch.Net.bind(Net.java:425) ~[?:?]
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) ~[?:?]
	at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:127) ~[?:?]
	at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:554) ~[?:?]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1258) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:502) ~[?:?]
	at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:487) ~[?:?]
	at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:980) ~[?:?]
	at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:250) ~[?:?]
	at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:363) ~[?:?]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) ~[?:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403) ~[?:?]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:445) ~[?:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[?:?]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
[2017-03-21T05:09:24,636][INFO ][o.e.g.GatewayService     ] [node-tta] recovered [1] indices into cluster_state
[2017-03-21T05:09:24,806][INFO ][o.e.c.r.a.AllocationService] [node-tta] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
[2017-03-21T05:09:25,477][INFO ][o.e.n.Node               ] [node-tta] stopping ...
[2017-03-21T05:09:25,498][INFO ][o.e.n.Node               ] [node-tta] stopped
[2017-03-21T05:09:25,498][INFO ][o.e.n.Node               ] [node-tta] closing ...
[2017-03-21T05:09:25,505][INFO ][o.e.n.Node               ] [node-tta] closed

I have assigned the private IP of the AWS EC2 server on which Elasticsearch is installed.

I don't understand why this is failing.
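"Cannot assign requested address" usually means the value given to Elasticsearch is not an address the instance actually owns. On EC2, only the private IP is assigned to the network interface; a public or Elastic IP is NATed and cannot be bound to. A quick check (assuming the ip tool is available):

ip addr show

The bind address must match one of the addresses listed there, or be 0.0.0.0 / _site_, otherwise the HTTP bind fails exactly as in that log.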

Can you share the exact elasticsearch.yml file you used to get those logs? ^^^
