Logstash not outputting logs to Elasticsearch

Hello,

When I set up Elasticsearch, Logstash, Kibana and Filebeat, I used this tutorial.

Unfortunately, Logstash is not attempting to output to Elasticsearch at the correct IP address. This is shown in the log message below.

{:timestamp=>"2016-02-08T16:27:58.572000-0500", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://localhost:9200/"]', but Elasticsearch appears to be unreachable or down!", :client_config=>{:hosts=>["http://localhost:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :error_message=>"Connection refused", :class=>"Manticore::SocketException", :level=>:error}

My configuration files are below:

/etc/elasticsearch/elasticsearch.yml
network.host: PRIVATE_IP_ADDRESS

/opt/logstash/conf.d/logstash.conf

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["PRIVATE_IP_ADDRESS:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

I have the same file at /etc/logstash/conf.d/logstash.conf because I didn't know where to put the logstash configuration file.

When I run curl PRIVATE_IP_ADDRESS:9200, I get the following output:

{
  "name" : "Gabriel Summers",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.1.1",
    "build_hash" : "40e2c53a6b6c2972b3d13846e450e66f4375bd71",
    "build_timestamp" : "2015-12-15T13:05:55Z",
    "build_snapshot" : false,
    "lucene_version" : "5.3.1"
  },
  "tagline" : "You Know, for Search"
}

How can I configure Logstash to output to Elasticsearch at PRIVATE_IP_ADDRESS:9200 instead of localhost:9200?

Thank you in advance.

Take the brackets off:
hosts => "PRIVATE_IP_ADDRESS:9200"

Since ES is up and answering queries, the problem is in your Logstash configuration file, unless you have a firewall blocking that specific port. If you suspect a firewall, scan port 9200 with nmap; if it reports the port as filtered, you know a firewall (or something similar) is blocking it.
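Besides nmap, a quick reachability check can be done with bash's built-in /dev/tcp pseudo-device (a bash feature, not a real file; the host and port below are illustrative):

```shell
# Returns 0 if something accepts a TCP connection on host:port.
# /dev/tcp/... is interpreted by bash itself, so run this under bash, not sh.
check_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if check_port 127.0.0.1 9200; then
  echo "9200 reachable"
else
  echo "9200 closed or filtered"
fi
```

"Connection refused" (as in your log) usually means the port is reachable but nothing is listening there, which points at the wrong host rather than a firewall.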

Take the brackets off:
hosts => "PRIVATE_IP_ADDRESS:9200"

No, that's not the problem. The hosts option is an array.
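To illustrate (standard elasticsearch output syntax, with the placeholder host from this thread): both spellings parse, because a bare string is coerced into a one-element array, so the brackets are not the bug.

```
output {
  elasticsearch {
    hosts => ["PRIVATE_IP_ADDRESS:9200"]   # array form
    # hosts => "PRIVATE_IP_ADDRESS:9200"   # string form; equivalent
  }
}
```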

@Gen_Ohta, make sure you don't have other config files in /opt/logstash/conf.d. Logstash reads every file in that directory. Starting Logstash with --verbose (or maybe --debug is required) will show exactly which configuration files are read and what they contain.
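To illustrate why a stray file matters, here's a sandbox sketch (a temp directory stands in for /opt/logstash/conf.d; the file names are hypothetical). Logstash concatenates every file in the directory, so a leftover file does not override anything, it adds a second elasticsearch output:

```shell
# Sandbox stand-in for /opt/logstash/conf.d (file names are hypothetical).
confdir=$(mktemp -d)

# The intended output:
cat > "$confdir/logstash.conf" <<'EOF'
output { elasticsearch { hosts => ["PRIVATE_IP_ADDRESS:9200"] } }
EOF

# A leftover file from an earlier tutorial step -- Logstash would load
# this too, giving you a second output that targets localhost:
cat > "$confdir/99-old.conf" <<'EOF'
output { elasticsearch { hosts => "localhost:9200" } }
EOF

# One grep over the whole directory surfaces every hosts setting:
grep -rn "hosts" "$confdir"
```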

Thank you both for your replies.

@magnusbaeck there aren't any other config files in the /opt/logstash/conf.d/ directory.

From the /opt/logstash/ directory I ran
bin/logstash -f /opt/logstash/conf.d/logstash.conf --debug and there were no errors in the output.

Could you please tell me how to run my actual Logstash instance in verbose mode to show which config files it's reading and what they contain?

Could you please tell me how to run my actual Logstash instance in verbose mode to show which config files it's reading and what they contain?

It depends on how you run Logstash, but your init script (or similar) probably reads /etc/sysconfig/logstash or /etc/default/logstash, where you should find the LS_OPTS variable (which might be commented out).

Thank you. When I first looked at /etc/sysconfig/logstash, everything was commented out except the last line. I changed it to look like the following:

    # Set a home directory
    LS_HOME=/opt/logstash

    # Arguments to pass to logstash agent
    LS_OPTS=""

    # logstash configuration directory
    LS_CONF_DIR=/opt/logstash/conf.d
    
    KILL_ON_STOP_TIMEOUT=0

However, I am still getting the same log message after I restart Logstash; it is reproduced below.

{:timestamp=>"2016-02-09T10:05:46.847000-0500", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://localhost:9200/"]', but Elasticsearch appears to be unreachable or down!", :client_config=>{:hosts=>["http://localhost:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :error_message=>"Connection refused", :class=>"Manticore::SocketException", :level=>:error}

I also ran sudo /etc/rc.d/init.d/logstash configtest and the output was "Configuration OK"

Is there anything else I should change about the /etc/sysconfig/logstash file?

Add --verbose or even --debug to LS_OPTS and start Logstash again. I still suspect you have an extra config file that contains hosts => "localhost" or similar.

I set LS_OPTS="--debug" in the /etc/sysconfig/logstash file. Unfortunately, there was nothing in the logs that said which configuration file Logstash was reading. A sample of the logs from when Logstash was restarted is below.

{:timestamp=>"2016-02-09T10:38:31.643000-0500", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
{:timestamp=>"2016-02-09T10:38:32.329000-0500", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://localhost:9200/"]', but Elasticsearch appears to be unreachable or down!", :client_config=>{:hosts=>["http://localhost:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :error_message=>"Connection refused", :class=>"Manticore::SocketException", :level=>:error}
{:timestamp=>"2016-02-09T10:38:34.341000-0500", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://localhost:9200/"]', but Elasticsearch appears to be unreachable or down!", :client_config=>{:hosts=>["http://localhost:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :error_message=>"Connection refused", :class=>"Manticore::SocketException", :level=>:error}
{:timestamp=>"2016-02-09T10:38:34.796000-0500", :level=>:warn, "INFLIGHT_EVENT_COUNT"=>{"input_to_filter"=>20, "filter_to_output"=>20, "total"=>40}, "STALLING_THREADS"=>{["LogStash::Outputs::ElasticSearch", {"hosts"=>"localhost:9200", "manage_template"=>"false", "index"=>"%{[@metadata][beat]}-%{+YYYY.MM.dd}", "document_type"=>"%{[@metadata][type]}"}]=>[{"thread_id"=>19, "name"=>">output", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.2.0-java/lib/logstash/outputs/elasticsearch/buffer.rb:63:in `synchronize'"}]}}

I ran grep -r "localhost:9200" in my /opt/logstash directory and there are many places in the vendor/bundle/jruby/1.9/gems/ directory where "localhost:9200" is referenced. Does it make any sense to change all of those to use my private IP address?

If not, could you please tell me what else I should try?

It turns out that when I restarted Logstash, not all of its processes were being terminated. My config files weren't the problem.

Here are the commands I ran to solve this:

ps -ef | grep logstash 
sudo kill EACH_PID 
sudo kill -9 PID_THAT_WASNT_KILLED_BEFORE 
sudo /etc/init.d/logstash start
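For anyone hitting the same thing, the kill-then-escalate pattern above can be scripted. Here's a sandbox sketch (using pgrep, assuming procps is installed; `sleep` processes stand in for the stuck logstash processes, since the real commands need root):

```shell
# Two long-running processes stand in for the stuck logstash instances.
sleep 300 & sleep 300 &

# Collect their PIDs ("30[0]" keeps the pgrep pattern from matching this
# script's own command line), ask them to exit, then force-kill any
# survivor -- the same escalation as the kill / kill -9 sequence above.
pids=$(pgrep -f "sleep 30[0]")
for pid in $pids; do kill "$pid" 2>/dev/null; done
sleep 1
for pid in $pids; do
  kill -0 "$pid" 2>/dev/null && kill -9 "$pid"
done
```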

Thank you @magnusbaeck for all of your time and help!