I have Logstash, Elasticsearch and Kibana services running on one server, and Filebeat running on another server to test configurations. Everything runs fine with the following config; logs are forwarded to my Elasticsearch/Kibana.
input {
  beats {
    port => 5045
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
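Just to illustrate which fields that grok expression should capture, here is a rough Python approximation I sketched against a sample syslog line (the real SYSLOGTIMESTAMP and SYSLOGHOST grok patterns are more permissive than these regexes, and the sample line is made up):

```python
import re

# Rough approximation of the grok pattern above; grok's building blocks
# are more forgiving, but the captured field names are the same.
pattern = re.compile(
    r"(?P<syslog_timestamp>\w{3} +\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[^\[:]+)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

m = pattern.match("Jun 21 18:09:54 webserver sshd[1234]: Accepted publickey for root")
print(m.group("syslog_program"), m.group("syslog_pid"))  # prints: sshd 1234
```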
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
The thing is that the server admins are used to browsing logs on the command line and want access to them, so I found out about the file output plugin and decided to try it out.
I modified the output section and added the file plugin:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  if [type] == "syslog" {
    file {
      #flush_interval => 10
      path => "/var/data/syslog/%{+YYYY}/%{+MM}/%{+dd}/%{source_host}/%{syslog_file_name}.log"
      #codec => { line { format => "" }}
    }
  }
}
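For what it's worth, %{source_host} and %{syslog_file_name} are fields I assumed would exist; the filter above only creates fields like syslog_hostname, so a variant built from a field grok actually extracts would look something like this (a sketch, with an assumed line codec that writes the raw message instead of the full JSON event):

```
if [type] == "syslog" {
  file {
    # build the path from a field the grok filter actually populates
    path => "/var/data/syslog/%{+YYYY}/%{+MM}/%{+dd}/%{syslog_hostname}/syslog.log"
    # write just the original log line, not the JSON-serialized event
    codec => line { format => "%{message}" }
  }
}
```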
I ran "service logstash configtest" and my files check out "ok".
I restarted my ES, Kibana and Logstash services.
I restarted my Filebeat client, but it now gives me this error:
2016/06/21 18:09:54.477774 transport.go:125: ERR SSL client failed to connect with: dial tcp ip_address:5045: getsockopt: connection refused
Meanwhile, my Logstash service shows Active: active (exited) instead of active (running).
Firewalld is disabled, and yes, the correct certificate is in the correct folder, since the whole config was working fine before the output section was modified.
tailf /var/log/logstash/logstash.log
{:timestamp=>"2016-06-23T09:56:58.919000-0400",
:message=>"UDP listener died",
:exception=>#<SocketError: bind: name or service not known>,
:backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:160:in `bind'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.5/lib/logstash/inputs/udp.rb:67:in `udp_listener'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.5/lib/logstash/inputs/udp.rb:50:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:334:in `inputworker'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:328:in `start_input'"],
:level=>:warn}
I am probably missing something, so if you have any hints, let me know.
WL