hello
I'm using ELK on a CentOS 7 virtual machine (via the vSphere Client).
I have some logs to visualize: /var/log/secure.
I've connected my Kibana via nginx and it's working fine!
I've set this configuration in /etc/logstash/conf.d/sshd.conf:
input {
  file {
    type => "secure_log"
    path => "/var/log/secure"
  }
}

filter {
  include "pattern.txt"
  grok {
    add_tag => [ "sshd_fail" ]
    match => { "message" => "Failed %{WORD:sshd_auth_type} for %{USERNAME:sshd_invalid_user} from %{IP:sshd_client_ip} port %{NUMBER:sshd_port} %{GREEDYDATA:sshd_protocol}" }
  }
}

output {
  elasticsearch {
    index => "sshd_fail-%{+YYYY.MM}"
  }
}
and I made a .txt file called pattern.txt where I entered the pattern for my logs:
/etc/logstash/pattern/pattern.txt
%{SYSLOGTIMESTAMP:system.auth.timestamp} %{SYSLOGHOST:system.auth.hostname} sshd(?:\[%{POSINT:system.auth.pid}\])?: %{DATA:system.auth.ssh.event} %{DATA:system.auth.ssh.method} for (invalid user )?%{DATA:system.auth.user} from %{IPORHOST:system.auth.ip} port %{NUMBER:system.auth.port} ssh2(: %{GREEDYDATA:system.auth.ssh.signature})?
In the screenshot you shared (which tells you how to configure index patterns), replace logstash-* with sshd_fail-* in the text field.
That would then match any index in Elasticsearch beginning with sshd_fail-.
Have you got any data indexed at all into Elasticsearch (use the cat indices API to find out)? If not, have a look at how the sincedb_path and start_position parameters are used with the file input plugin in this tutorial.
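For a quick check that anything matching your output index exists (assuming Elasticsearch is listening on localhost:9200, the default):

curl -s 'localhost:9200/_cat/indices/sshd_fail-*?v'

And note that if Logstash has already read /var/log/secure once, it will only pick up new lines by default. A minimal sketch of the file input with those two parameters; start_position => "beginning" only applies to files Logstash hasn't seen before, and sincedb_path => "/dev/null" forces a full re-read on every restart, so it's for testing only:

input {
  file {
    type           => "secure_log"
    path           => "/var/log/secure"
    start_position => "beginning"
    sincedb_path   => "/dev/null"
  }
}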
Does your Logstash start correctly? This configuration seems wrong to me: include is not a directive.
(Also, you should rework your grok configuration. pattern.txt isn't the place for the complete pattern that matches a log line, but for the individual patterns used in the match parameter.)
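For example, something like this (a sketch reusing your own pattern and directory). The pattern file defines named patterns, one per line, in the form NAME PATTERN:

# /etc/logstash/pattern/pattern.txt
SSHD_FAIL Failed %{WORD:sshd_auth_type} for %{USERNAME:sshd_invalid_user} from %{IP:sshd_client_ip} port %{NUMBER:sshd_port} %{GREEDYDATA:sshd_protocol}

and the grok filter then loads that directory via patterns_dir and references the named pattern, instead of using include:

filter {
  grok {
    patterns_dir => [ "/etc/logstash/pattern" ]
    add_tag      => [ "sshd_fail" ]
    match        => { "message" => "%{SSHD_FAIL}" }
  }
}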