ELK issue

Hi,

I am new to ELK and set it up by following an online guide.
The Kibana dashboard loads, but my syslog and nginx access logs are not appearing in Kibana.
I am posting my config files below. Please help.

SSL certificate:

I added the following line to /etc/ssl/openssl.cnf:

```
subjectAltName = IP:192.168.57.128
```

Then I generated the certificate as described in the guide.
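For reference, the generation step was roughly the following (a sketch only, not the exact command from the guide; newer OpenSSL, 1.1.1+, can pass the SAN inline via `-addext` instead of editing openssl.cnf, and the file names match the Logstash config):

```shell
# Sketch: self-signed certificate with the SAN from above, without
# touching /etc/ssl/openssl.cnf (requires OpenSSL 1.1.1+ for -addext).
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
  -subj "/CN=192.168.57.128" \
  -addext "subjectAltName=IP:192.168.57.128" \
  -keyout logstash-forwarder.key \
  -out logstash-forwarder.crt

# Verify the SAN actually made it into the certificate:
openssl x509 -in logstash-forwarder.crt -noout -ext subjectAltName
```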

logstash.conf:

```
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/ssl/logstash-forwarder.crt"
    ssl_key => "/etc/ssl/logstash-forwarder.key"
    congestion_threshold => "40"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }

    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => "localhost"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}
```
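(If useful: Logstash can syntax-check a pipeline before the service is restarted. A sketch, assuming a .deb/.rpm package install; the paths below are the usual package locations and may differ on your machine.)

```shell
# Sketch: validate the pipeline config and exit, without starting Logstash.
# Paths assume a package install of Logstash 5.x; adjust if installed elsewhere.
sudo /usr/share/logstash/bin/logstash \
  --path.settings /etc/logstash \
  -f /etc/logstash/conf.d/logstash.conf \
  --config.test_and_exit
```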

filebeat.yml:

```
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/syslog
    - /var/log/nginx/access.log

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts to be used.
  hosts: ["192.158.57.128:5044"]

  # Optional SSL. By default it is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/ssl/logstash-forwarder.crt"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
logging.level: debug
```
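(Side note: newer Filebeat versions can check their own config and output connectivity. A sketch, assuming Filebeat 6.0+, where the `test` subcommand exists; the config path is the usual package location:)

```shell
# Validate the config file, then try the configured output (Filebeat 6.0+).
sudo filebeat test config -c /etc/filebeat/filebeat.yml
sudo filebeat test output -c /etc/filebeat/filebeat.yml
```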

Filebeat log output in debug mode:

```
2017-10-24T09:39:20-07:00 DBG Check file for harvesting: /var/log/syslog
2017-10-24T09:39:20-07:00 DBG Update existing file for harvesting: /var/log/syslog, offset: 1101851
2017-10-24T09:39:20-07:00 DBG Harvester for file is still running: /var/log/syslog
2017-10-24T09:39:20-07:00 DBG Check file for harvesting: /var/log/nginx/access.log
2017-10-24T09:39:20-07:00 DBG Update existing file for harvesting: /var/log/nginx/access.log, offset: 10541
2017-10-24T09:39:20-07:00 DBG Harvester for file is still running: /var/log/nginx/access.log
2017-10-24T09:39:20-07:00 DBG Prospector states cleaned up. Before: 2, After: 2
2017-10-24T09:39:24-07:00 ERR Connecting error publishing events (retrying): dial tcp 192.158.57.128:5044: getsockopt: connection refused
2017-10-24T09:39:24-07:00 DBG send fail
2017-10-24T09:39:30-07:00 INFO No non-zero metrics in the last 30s
```

Please help to rectify the issue.

Please edit your post and use the </> button to format the config and logs; it will make them much easier for people to read, and therefore to help with.

Also, FYI we’ve renamed ELK to the Elastic Stack, otherwise Beats feels left out :wink:

Finally, is there anything in stdout from Logstash?

Please properly format your post.

I'm seeing a 'connection refused' error. Is Logstash running/reachable?
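A couple of quick checks (a sketch; also note that filebeat.yml points at 192.158.57.128, while the certificate SAN above uses 192.168.57.128 — that IP is worth double-checking):

```shell
# On the Logstash host: is anything actually listening on the Beats port?
sudo ss -tlnp | grep 5044

# From the Filebeat host: is the port reachable at all?
nc -vz 192.168.57.128 5044
```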

Please check out the Filebeat Getting Started guide.

For your nginx use case, maybe have a look at the Filebeat modules (these require only Filebeat, Elasticsearch, and Kibana).

Also check out the Securing Filebeat Communication docs.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.