I am trying to send logs from Filebeat to Logstash, but they never arrive. On the server side, if I try:
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}
I am stuck here. If I understand correctly, 0 total shards means no filebeat-* index exists at all. Note that I am not sending logs to Elasticsearch directly, only to Logstash.
Here is the Logstash output block of my filebeat.yml:
##########################
logstash:
  # The Logstash hosts
  hosts: ["192.168.1.236:5044"]
  bulk_max_size: 1024
  # Number of workers per Logstash host.
  #worker: 1
  # Set gzip compression level.
  #compression_level: 3
  # Optionally load balance the events between the Logstash hosts
  #loadbalance: true
  # Optional index name. The default index name depends on each beat.
  # For Packetbeat the default is packetbeat, for Topbeat topbeat,
  # and for Filebeat filebeat.
  #index: filebeat
  # Optional TLS. By default it is off.
  tls:
    # List of root certificates for HTTPS server verifications
    #certificate_authorities: ["/etc/pki/root/ca.pem"]
    certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
    # Certificate for TLS client authentication
    #certificate: "/etc/pki/client/cert.pem"
Error (Filebeat log, followed by the Logstash log):
2: INFO force_close_file is disabled
2016/05/03 15:24:15.503075 prospector.go:142: INFO Starting prospector of type: log
2016/05/03 15:24:15.503130 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2016/05/03 15:24:15.503184 log.go:113: INFO Harvester started for file: /var/log/messages
2016/05/03 15:24:15.503197 log.go:113: INFO Harvester started for file: /var/log/secure
2016/05/03 15:24:15.503865 crawler.go:78: INFO All prospectors initialised with 6 states to persist
2016/05/03 15:24:15.503885 registrar.go:87: INFO Starting Registrar
2016/05/03 15:24:15.503904 publish.go:88: INFO Start sending events to output
2016/05/03 15:24:20.525062 single.go:76: INFO Error publishing events (retrying): EOF
2016/05/03 15:24:20.525118 single.go:152: INFO send fail
2016/05/03 15:24:20.525142 single.go:159: INFO backoff retry: 1s
ector.go:132: INFO Set max_backoff duration to 10s
2016/05/03 14:58:01.366484 prospector.go:112: INFO force_close_file is disabled
2016/05/03 14:58:01.366536 prospector.go:142: INFO Starting prospector of type: log
2016/05/03 14:58:01.366624 crawler.go:78: INFO All prospectors initialised with 10 states to persist
2016/05/03 14:58:01.366673 registrar.go:87: INFO Starting Registrar
2016/05/03 14:58:01.366741 publish.go:88: INFO Start sending events to output
2016/05/03 14:58:01.366800 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in start_sniffing!'", "org/jruby/RubyKernel.java:1479:inloop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:59:in `start_sniffing!'"], :level=>:error}
{:timestamp=>"2016-05-03T11:56:29.952000-0400", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://192.168.1.236:9200/"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connection refused", :class=>"Manticore::SocketException", :client_config=>{:hosts=>["http://192.168.1.236:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :level=>:error}
{:timestamp=>"2016-05-03T11:56:30.008000-0400", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://192.168.1.236:9200/"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connection refused", :class=>"Manticore::SocketException", :client_config=>{:hosts=>["http://192.168.1.236:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :level=>:error}
filebeat.yml is a big one, so I just copied the output block:
############################# Output ##########################################
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
output:
  ### Elasticsearch as output
  #elasticsearch:
    #hosts: ["localhost:9200"]
    # Optional protocol and basic auth credentials.
    #protocol: "https"
    #username: "admin"
    #password: "s3cr3t"
    # Number of workers per Elasticsearch host.
    #worker: 1
    # Optional index name. The default is "filebeat" and generates
    # [filebeat-]YYYY.MM.DD keys.
    #index: "filebeat"
    # A template is used to set the mapping in Elasticsearch.
    # By default template loading is disabled and no template is loaded.
    # These settings can be adjusted to load your own template or overwrite existing ones.
    # Configure HTTP request timeout before failing a request to Elasticsearch.
    #timeout: 90
    # The number of seconds to wait for new events between two bulk API index requests.
    # If `bulk_max_size` is reached before this interval expires, additional bulk index
    # requests are made.
    #flush_interval: 1
    # Boolean that sets whether the topology is kept in Elasticsearch. The default is
    # false. This option makes sense only for Packetbeat.
    #save_topology: false
    # The time to live in seconds for the topology information that is stored in
    # Elasticsearch. The default is 15 seconds.
    #topology_expire: 15
    # TLS configuration. By default it is off.
    #tls:
      # List of root certificates for HTTPS server verifications
      #certificate_authorities: ["/etc/pki/root/ca.pem"]
      # Certificate for TLS client authentication
      #certificate: "/etc/pki/client/cert.pem"
      # Client certificate key
      #certificate_key: "/etc/pki/client/cert.key"
      # Controls whether the client verifies server certificates and host names.
      # If insecure is set to true, all server host names and certificates will be
      # accepted. In this mode TLS-based connections are susceptible to
      # man-in-the-middle attacks. Use only for testing.
      #insecure: true
      # Configure cipher suites to be used for TLS connections
      #cipher_suites: []
      # Configure curve types for ECDHE-based cipher suites
      #curve_types: []
      # Configure minimum TLS version allowed for connection to logstash
      #min_version: 1.0
      # Configure maximum TLS version allowed for connection to logstash
      #max_version: 1.2
  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["192.168.1.236:5044"]
    bulk_max_size: 1024
    # Number of workers per Logstash host.
    #worker: 1
    # Set gzip compression level.
    #compression_level: 3
    # Optionally load balance the events between the Logstash hosts
    #loadbalance: true
    # Optional index name. The default index name depends on each beat.
    # For Packetbeat the default is packetbeat, for Topbeat topbeat,
    # and for Filebeat filebeat.
    #index: filebeat
    # Optional TLS. By default it is off.
    tls:
      # List of root certificates for HTTPS server verifications
      #certificate_authorities: ["/etc/pki/root/ca.pem"]
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
      # Certificate for TLS client authentication
      #certificate: "/etc/pki/client/cert.pem"
      # Client certificate key
      #certificate_key: "/etc/pki/client/cert.key"
      # Controls whether the client verifies server certificates and host names.
      # If insecure is set to true, all server host names and certificates will be
      # accepted. In this mode TLS-based connections are susceptible to
      # man-in-the-middle attacks. Use only for testing.
      #insecure: true
      # Configure cipher suites to be used for TLS connections
      #cipher_suites: []
      # Configure curve types for ECDHE-based cipher suites
      #curve_types: []
{:timestamp=>"2016-05-03T11:56:30.008000-0400", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://192.168.1.236:9200/\"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connection refused", :class=>"Manticore::SocketException", :client_config=>{:hosts=>["http://192.168.1.236:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}},
Your Logstash can't reach your Elasticsearch host. You may want to fix that.
Is ES listening only on localhost:9200, or is it also listening on 192.168.1.236:9200? You configured Logstash to send events to Elasticsearch via the 192.168.1.236 IP, but if Elasticsearch is listening only on the loopback interface (127.0.0.1) and not on the public interface, Logstash can't communicate with it.
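To see which interface ES is actually bound to, something like this on the Elasticsearch host should tell you (ss is standard on CentOS 7; netstat -tlnp works too):

# Which address is Elasticsearch listening on?
# 127.0.0.1:9200 means loopback only; *:9200 or 192.168.1.236:9200 means it is reachable from other machines.
ss -tlnp | grep 9200

# Should always answer on the ES host itself:
curl -XGET 'http://localhost:9200/'

# Must also answer for your Logstash output config to work:
curl -XGET 'http://192.168.1.236:9200/'

If only the loopback answers, set network.host: 192.168.1.236 in elasticsearch.yml (the usual RPM location is /etc/elasticsearch/elasticsearch.yml; adjust if yours differs) and restart Elasticsearch.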
The error regarding the IP went away, but now I'm getting: {:timestamp=>"2016-05-03T13:26:21.008000-0400", :message=>"Connection refused",
So something's not listening on the expected port or the expected interface, or there's a firewall blocking access. Without more context it's impossible to tell which it is.
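For what it's worth, here is a minimal sketch of those checks, assuming CentOS 7 defaults (firewalld installed, curl available):

# On the server: is anything listening on the Beats and ES ports?
ss -tlnp | grep -E '9200|5044'

# From the Filebeat machine: can you open a TCP connection at all?
# (curl's telnet scheme is just a cheap port probe here)
curl -v telnet://192.168.1.236:5044

# On the server: is firewalld blocking the ports?
firewall-cmd --list-all
# Only if it is:
firewall-cmd --permanent --add-port=5044/tcp && firewall-cmd --reload

A refused connection usually means nothing is listening on that address and port; a timeout or "no route to host" points at the firewall.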
Can you provide me with proper documentation I can use to redeploy the whole setup on CentOS 7?
The kind of complete, recipe-based documentation you seem to be asking for is usually not that complete, or it's outdated, or it's for the wrong operating system, or something else.
It's not obviously outdated, so that's good, but I don't have time to review it in detail. It could well point you in the right direction, but it's no substitute for understanding what's going on and being able to systematically debug problems.
That's a nice guide, but it's missing a few steps to help you confirm things are up and running.
After you install ES and configure it to listen on localhost in elasticsearch.yml, start ES and check that it is working with curl (curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'). Now install nginx and test getting to ES through your proxy using curl. Once that's working, proceed to Kibana (since it's on the same server as ES and is also proxied) and configure it, then check Kibana access via the nginx proxy. When Kibana is happy with nginx and ES, proceed to LS. Since ES is behind an HTTPS proxy, don't forget to configure LS to use HTTPS with the cert. Get each piece working before moving on to the next piece; this is especially true when you throw a proxy between the pieces of ELK.
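A rough version of that check sequence as commands; the proxy host and the kibanaadmin basic-auth user below are placeholders for whatever you configured in nginx:

# 1. Elasticsearch directly, on the ES host
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'

# 2. Elasticsearch through the nginx proxy
curl -u kibanaadmin:yourpassword -XGET 'https://your-proxy-host/_cluster/health?pretty=true'

# 3. Kibana through the proxy; -I fetches only the response headers, expect HTTP 200
curl -u kibanaadmin:yourpassword -I 'https://your-proxy-host/'

Get each of those returning sensibly before wiring up LS and Filebeat.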