How to send data from a Logstash server to a remote Elasticsearch?

I have a server where I have installed Logstash. I am trying to forward a log file from my Linux server to a remote Elasticsearch server. What configuration do I need to achieve this? I installed ELK only 2 days ago, so I am very new to this product.

You have to specify the Elasticsearch server's host address and port in the output configuration.

An example is given below:
output {
  elasticsearch {
    hosts => ["10.155.175.22:9200"]      # address and port of the remote Elasticsearch
    sniffing => true                     # discover the other nodes of the cluster automatically
    manage_template => false             # do not let Logstash manage the index template
    index => "logstash-%{+YYYY.MM.dd}"   # write to a daily index
  }
}

Hello Joseph,

Thanks for your reply. I still see the following issue in my environment.

Elasticsearch is running on 10.24.148.252
Logstash is running on 10.24.236.185

Following is my Logstash conf file:
input {
  file {
    path => "/opt/netcool/IBM/tivoli/netcool/omnibus/log/mttrapd.log"
  }
}
output {
  elasticsearch {
    hosts => ["10.24.148.252:9200"]
    index => "nco"
  }
  stdout { codec => line }
}

I see the following error in the output screen of my Logstash server:

[2017-07-27T08:49:17,256][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.24.148.252:9200/]}}
[2017-07-27T08:49:17,266][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.24.148.252:9200/, :path=>"/"}
[2017-07-27T08:49:17,433][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<Java::JavaNet::URI:0x65ab51f9>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://10.24.148.252:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2017-07-27T08:49:18,225][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<Errno::EADDRNOTAVAIL: Cannot assign requested address - bind - Cannot assign requested address>, :backtrace=>["org/jruby/ext/socket/RubyTCPServer.java:118:in `initialize'", "org/jruby/RubyIO.java:871:in `new'", "/opt/splunkforwarder/logstash-5.5.0/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/binder.rb:234:in `add_tcp_listener'", "(eval):2:in `add_tcp_listener'", "/opt/splunkforwarder/logstash-5.5.0/logstash-core/lib/logstash/webserver.rb:88:in `start_webserver'", "/opt/splunkforwarder/logstash-5.5.0/logstash-core/lib/logstash/webserver.rb:44:in `run'", "org/jruby/RubyRange.java:476:in `each'", "org/jruby/RubyEnumerable.java:971:in `each_with_index'", "/opt/splunkforwarder/logstash-5.5.0/logstash-core/lib/logstash/webserver.rb:39:in `run'", "/opt/splunkforwarder/logstash-5.5.0/logstash-core/lib/logstash/agent.rb:222:in `start_webserver'"]}

Do I have to make any changes to the yml file on the Elasticsearch or Logstash server?

It seems that there is some issue with connectivity to ES.

Could you please post the Elasticsearch configuration, especially the network.host setting?

Try setting it to 0.0.0.0.
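For example, a minimal sketch of the relevant lines in elasticsearch.yml (assuming the rest of the file keeps the template defaults), followed by a restart of Elasticsearch:

network.host: 0.0.0.0   # bind on all interfaces so remote hosts can reach port 9200
http.port: 9200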

Hello Joseph,

Here is the elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: 127.0.0.1
##network.host: 10.24.148.252
#
# Set a custom port for HTTP:
#
http.port: 9200
#
##network.publish_host: 10.24.148.252
##transport.tcp.port: 9300
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true



The network.host: 127.0.0.1 setting binds Elasticsearch to the local machine only.

Specify the IP address of your host machine. If it has to be reachable both on that specific IP and as localhost, change it to 0.0.0.0 and restart both Elasticsearch and Logstash.
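For example, the Network section of the posted elasticsearch.yml would look roughly like this (a sketch only, not verified against your environment):

# ---------------------------------- Network -----------------------------------
#network.host: 127.0.0.1    # old value: reachable from the local machine only
network.host: 0.0.0.0       # bind on all interfaces (or use 10.24.148.252 to bind only that address)
http.port: 9200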

Also check your firewall settings if the above does not work.

Please remember to share your configurations in a proper format.

Hello Joseph,

First I tried the following setting and it failed:

network.host: 0.0.0.0

Next I tried the following setting, using the IP address of the host where Elasticsearch is running, and Elasticsearch failed to start:

network.host: 10.24.148.252

The only time Elasticsearch starts is when I have the following setting:

network.host: 127.0.0.1

The servers are on the internal network. There is no firewall between them.

Thanks,
Kamal Beg
216-471-2387

Is your Logstash located on the same machine?

If that is the case, please change the output conf to localhost:9200.
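For example, a sketch of that output block, assuming Logstash and Elasticsearch were running on the same host:

output {
  elasticsearch {
    hosts => ["localhost:9200"]   # local Elasticsearch instance
    index => "nco"
  }
}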

My Elasticsearch and Kibana are installed on server A and my Logstash is installed on server B. Both servers are on the internal network.

I'm trying to send data from a log file on server B to Elasticsearch on server A.

Can you post the error you are getting in Elasticsearch when giving 0.0.0.0?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.