Filebeat not pushing to Logstash

I am trying to ship logs from Filebeat to Logstash, but they are not being sent. On the server side, if I try:
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
{
"took" : 1,
"timed_out" : false,
"_shards" : {
"total" : 0,
"successful" : 0,
"failed" : 0
},
"hits" : {
"total" : 0,
"max_score" : 0.0,
"hits" : [ ]
}
}

I am stuck here. I am not sending logs directly to Elasticsearch, only to Logstash.

Here is the Logstash output section of my filebeat.yml:

##########################
logstash:
  # The Logstash hosts
  hosts: ["192.168.1.236:5044"]
  bulk_max_size: 1024

  # Number of workers per Logstash host.
  #worker: 1

  # Set gzip compression level.
  #compression_level: 3

  # Optionally load balance the events between the Logstash hosts
  #loadbalance: true

  # Optional index name. The default index name depends on each beat.
  # For Packetbeat the default is packetbeat, for Topbeat topbeat,
  # and for Filebeat filebeat.
  #index: filebeat

  # Optional TLS. By default it is off.
  tls:
    # List of root certificates for HTTPS server verifications
    #certificate_authorities: ["/etc/pki/root/ca.pem"]
    certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

    # Certificate for TLS client authentication
    #certificate: "/etc/pki/client/cert.pem"

Error:
2: INFO force_close_file is disabled
2016/05/03 15:24:15.503075 prospector.go:142: INFO Starting prospector of type: log
2016/05/03 15:24:15.503130 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2016/05/03 15:24:15.503184 log.go:113: INFO Harvester started for file: /var/log/messages
2016/05/03 15:24:15.503197 log.go:113: INFO Harvester started for file: /var/log/secure
2016/05/03 15:24:15.503865 crawler.go:78: INFO All prospectors initialised with 6 states to persist
2016/05/03 15:24:15.503885 registrar.go:87: INFO Starting Registrar
2016/05/03 15:24:15.503904 publish.go:88: INFO Start sending events to output
2016/05/03 15:24:20.525062 single.go:76: INFO Error publishing events (retrying): EOF
2016/05/03 15:24:20.525118 single.go:152: INFO send fail
2016/05/03 15:24:20.525142 single.go:159: INFO backoff retry: 1s


ector.go:132: INFO Set max_backoff duration to 10s
2016/05/03 14:58:01.366484 prospector.go:112: INFO force_close_file is disabled
2016/05/03 14:58:01.366536 prospector.go:142: INFO Starting prospector of type: log
2016/05/03 14:58:01.366624 crawler.go:78: INFO All prospectors initialised with 10 states to persist
2016/05/03 14:58:01.366673 registrar.go:87: INFO Starting Registrar
2016/05/03 14:58:01.366741 publish.go:88: INFO Start sending events to output
2016/05/03 14:58:01.366800 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s

Errors from two different clients; both are stuck.

Can you post your Logstash conf for the Beats input? Also, can you post your full filebeat.yml? What do you see in the Logstash logs?

Is your certificate expired?

logstash.log

"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/RubyKernel.java:1479:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:59:in `start_sniffing!'"], :level=>:error}
{:timestamp=>"2016-05-03T11:56:29.952000-0400", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://192.168.1.236:9200/"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connection refused", :class=>"Manticore::SocketException", :client_config=>{:hosts=>["http://192.168.1.236:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :level=>:error}
{:timestamp=>"2016-05-03T11:56:30.008000-0400", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://192.168.1.236:9200/"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connection refused", :class=>"Manticore::SocketException", :client_config=>{:hosts=>["http://192.168.1.236:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :level=>:error}

Logstash input

cat 02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
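For reference, the matching output side (which the thread never shows) in this kind of setup usually lives in a separate file such as a 30-elasticsearch-output.conf; the sketch below is an assumption, with the hosts value inferred from the logstash.log errors, which show Logstash targeting http://192.168.1.236:9200/:

```conf
# Hypothetical 30-elasticsearch-output.conf -- file name and options
# are a sketch, not taken from the thread.
output {
  elasticsearch {
    # Must point at an interface Elasticsearch actually listens on
    hosts => ["192.168.1.236:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
```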


filebeat.yml is a big one, so I've just copied the output block:
############################# Output ##########################################

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

output:

### Elasticsearch as output

#elasticsearch:
#hosts: ["localhost:9200"]

# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "admin"
#password: "s3cr3t"

# Number of workers per Elasticsearch host.
#worker: 1

# Optional index name. The default is "filebeat" and generates
# [filebeat-]YYYY.MM.DD keys.
#index: "filebeat"

# A template is used to set the mapping in Elasticsearch
# By default template loading is disabled and no template is loaded.
# These settings can be adjusted to load your own template or overwrite existing ones

# Configure http request timeout before failing an request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, additional bulk
# index requests are made.
#flush_interval: 1

# Boolean that sets if the topology is kept in Elasticsearch. The default is
# false. This option makes sense only for Packetbeat.
#save_topology: false

# The time to live in seconds for the topology information that is stored in
# Elasticsearch. The default is 15 seconds.
#topology_expire: 15

# tls configuration. By default is off.
#tls:
  # List of root certificates for HTTPS server verifications
  #certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for TLS client authentication
  #certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #certificate_key: "/etc/pki/client/cert.key"

  # Controls whether the client verifies server certificates and host name.
  # If insecure is set to true, all server host names and certificates will be
  # accepted. In this mode TLS based connections are susceptible to
  # man-in-the-middle attacks. Use only for testing.
  #insecure: true

  # Configure cipher suites to be used for TLS connections
  #cipher_suites: []

  # Configure curve types for ECDHE based cipher suites
  #curve_types: []

  # Configure minimum TLS version allowed for connection to logstash
  #min_version: 1.0

  # Configure maximum TLS version allowed for connection to logstash
  #max_version: 1.2

### Logstash as output

logstash:
  # The Logstash hosts
  hosts: ["192.168.1.236:5044"]
  bulk_max_size: 1024

  # Number of workers per Logstash host.
  #worker: 1

  # Set gzip compression level.
  #compression_level: 3

  # Optionally load balance the events between the Logstash hosts
  #loadbalance: true

  # Optional index name. The default index name depends on each beat.
  # For Packetbeat the default is packetbeat, for Topbeat topbeat,
  # and for Filebeat filebeat.
  #index: filebeat

  # Optional TLS. By default it is off.
  tls:
    # List of root certificates for HTTPS server verifications
    #certificate_authorities: ["/etc/pki/root/ca.pem"]
    certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

    # Certificate for TLS client authentication
    #certificate: "/etc/pki/client/cert.pem"
    # Client Certificate Key
    #certificate_key: "/etc/pki/client/cert.key"
    # Controls whether the client verifies server certificates and host name.
    # If insecure is set to true, all server host names and certificates will be
    # accepted. In this mode TLS based connections are susceptible to
    # man-in-the-middle attacks. Use only for testing.
    #insecure: true
    # Configure cipher suites to be used for TLS connections
    #cipher_suites: []
    # Configure curve types for ECDHE based cipher suites
    #curve_types: []

{:timestamp=>"2016-05-03T11:56:30.008000-0400", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://192.168.1.236:9200/\"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connection refused", :class=>"Manticore::SocketException", :client_config=>{:hosts=>["http://192.168.1.236:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}},

Your logstash can't reach your elasticsearch host. You may want to fix that.

How do I check whether the certificate is expired?

I have checked using :
nc -v log_server 5044
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 192.168.1.236:5044.

Thanks. Any idea how I can fix that? I am not quite sure what is going wrong.
Logstash and Elasticsearch are on the same node, i.e. 192.168.1.236.

Is Elasticsearch running? Is it listening on 192.168.1.236 port 9200? Can you reach it via curl?
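A quick way to test both interfaces at once is a plain TCP probe; this is a sketch, with the LAN IP taken from the thread:

```shell
# Probe the loopback and the LAN IP on port 9200. "refused" on the
# LAN IP while the loopback answers means ES is bound to 127.0.0.1 only.
for host in 127.0.0.1 192.168.1.236; do
  if timeout 2 bash -c "echo > /dev/tcp/$host/9200" 2>/dev/null; then
    echo "listening: $host:9200"
  else
    echo "refused:   $host:9200"
  fi
done
```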

Also, you can use openssl to check whether your certificate is expired by having it output your certificate as text and viewing the text.
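Concretely, openssl can both print the expiry date and give you a yes/no answer via its exit status. The sketch below generates a throwaway self-signed cert just so it is self-contained; point -in at your real cert (e.g. logstash-forwarder.crt) instead:

```shell
# /tmp/demo.* are throwaway files created here purely for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
    -keyout /tmp/demo.key -out /tmp/demo.crt -days 365 2>/dev/null

# Human-readable expiry date
openssl x509 -enddate -noout -in /tmp/demo.crt

# Exit-status check: -checkend N succeeds if the cert is still valid
# N seconds from now (0 = "valid right now")
if openssl x509 -checkend 0 -noout -in /tmp/demo.crt; then
    echo "certificate is still valid"
else
    echo "certificate has expired"
fi
```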

Yes, Elasticsearch is running. Also:

tcp 0 0 127.0.0.1:40689 127.0.0.1:9200 TIME_WAIT -
tcp 0 0 127.0.0.1:40749 127.0.0.1:9200 TIME_WAIT -
tcp6 0 0 127.0.0.1:9200 :::* LISTEN 8138/java
tcp6 0 0 ::1:9200 :::* LISTEN 8138/java

openssl x509 -enddate -noout -in logstash-forwarder.crt
notAfter=May 1 10:56:01 2026 GMT


Also:

[root@log_server certs]# curl http://192.168.1.236:9200
curl: (7) Failed connect to 192.168.1.236:9200; Connection refused
[root@log_server certs]# curl http://localhost:9200
{
"name" : "Legacy",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "2.3.2",
"build_hash" : "b9e4a6acad4008027e4038f6abed7f7dba346f94",
"build_timestamp" : "2016-04-21T16:03:47Z",
"build_snapshot" : false,
"lucene_version" : "5.5.0"
},
"tagline" : "You Know, for Search"
}

is ES just listening on localhost:9200 or is it also listening on the ip 192.168.1.236:9200? You configured logstash to send things to elasticsearch via 192.168.1.236 ip, but if elasticsearch isn't listening on the public interface and only on the loopback (127.0.0.1) then logstash can't communicate with elasticsearch.
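In elasticsearch.yml (ES 2.x) that comes down to the network.host setting; the value below is a sketch using the thread's IP, not a confirmed fix:

```yaml
# /etc/elasticsearch/elasticsearch.yml (ES 2.x sketch)
# _local_ keeps 127.0.0.1 working; the second entry also binds the
# LAN IP so Logstash can connect via 192.168.1.236:9200.
network.host: ["_local_", "192.168.1.236"]
```

Restart Elasticsearch after changing it, then re-test with curl against both addresses.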


I have changed localhost to the IP in the network section of /etc/elasticsearch/elasticsearch.yml.

The error regarding the IP went away, but now I'm getting: {:timestamp=>"2016-05-03T13:26:21.008000-0400", :message=>"Connection refused",


outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/RubyKernel.java:1479:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:59:in `start_sniffing!'"], :level=>:error}
{:timestamp=>"2016-05-03T13:26:21.008000-0400", :message=>"Connection refused", :class=>"Manticore::SocketException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:71:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:201:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/sniffer.rb:32:in `hosts'", "org/jruby/ext/timeout/Timeout.java:147:in `timeout'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/sniffer.rb:31:in `hosts'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:76:in `reload_connections!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:72:in `sniff!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/RubyKernel.java:1479:in `loop'", "/opt/logstash/ve

Can you point me to proper documentation I can use to redeploy the whole setup on CentOS 7?

The error regarding the IP went away, but now I'm getting: {:timestamp=>"2016-05-03T13:26:21.008000-0400", :message=>"Connection refused",

So something's not listening on the expected port or the expected interface or there's a firewall blocking the access. Without more context it's impossible to tell which it is.

Can you point me to proper documentation I can use to redeploy the whole setup on CentOS 7?

The kind of complete, recipe-based documentation you seem to be asking for is usually not that complete, or it's outdated, or for the wrong operating system, or something else.

I am using this one; does it look okay to you?

TIA

It's not obviously outdated, so that's good, but I don't have time to review it in detail. It could well point you in the right direction, but it's no substitute for understanding what's going on and being able to systematically debug problems.

I deployed the whole setup again, a fresh one.

This time I am using logstash-forwarder instead of Filebeat, but I am not sure whether it is working either.
Using: https://gist.github.com/ashrithr/c5c03950ef631ac63c43

When I go to the dashboard, via either localhost or the IP, I get this error on the Kibana dashboard:
Error: Could not contact Elasticsearch at http://localhost:9200.

any suggestions ?

Thanks

Don't use logstash-forwarder. It's deprecated and replaced by Filebeat.

Well, is ES running on localhost:9200? Can you run curl localhost:9200 on the same host that Kibana runs on?

That's a nice guide, but it's missing a few steps to help you confirm things are up and running.

After you install ES and configure it to listen on localhost in elasticsearch.yml, start ES. Check that ES is working with curl (curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'). Now install nginx, and then test getting to ES through your proxy using curl. Once that's working, proceed to Kibana (since it's on the same server as ES and is being proxied) and configure it. Check Kibana access via the nginx proxy. When your Kibana is happy with nginx and ES, proceed to LS. Since ES is behind an https proxy, don't forget to configure LS to use https with the cert. Get each piece working before moving on to the next piece. This is especially true when you put a proxy between the pieces of ELK.
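Those staged checks can be sketched as a small script. The URLs below are assumptions based on the guide's usual defaults (ES on localhost:9200, nginx on port 80, Kibana on 5601); adjust them to your layout:

```shell
# Probe each layer of the stack in order; fix the first FAIL before
# moving on to the next layer.
check() {
  if curl -s --max-time 3 "$1" > /dev/null; then
    echo "OK   $1"
  else
    echo "FAIL $1"
  fi
}

check 'http://localhost:9200/_cluster/health?pretty=true'  # ES directly
check 'http://localhost/'                                  # nginx proxy
check 'http://localhost:5601/'                             # Kibana
```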