Hi everyone!
I'm relatively new to the Elastic Stack and I'm currently trying to set up a centralized dashboard for the logs of all my servers (and also to use the system to replicate all logs to my monitoring server, on which I'm currently installing the 7.x Elastic Stack).
My problem is that Kibana doesn't visualize SSH login attempts, although it seems to receive the data as "system.auth".
Screenshot of the Dashboard:
Screenshot of the "Logs" section:
Setup:
Server1 (CentOS 7): monitoring.myhost.tld (10.10.10.10 external IP, 1.1.1.1 internal IP)
- Elasticsearch (listening on localhost)
- Kibana (listening on localhost)
- Logstash (listening on 1.1.1.1)
- Filebeat
- Nginx (used as a reverse proxy for Kibana, listening on 10.10.10.10; a sketch of the proxy config follows this list)
Server2 (Debian 10)
- Filebeat
Server3 (CentOS 7)
- Filebeat
and so on... (All are Debian or CentOS)
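The Nginx reverse proxy in front of Kibana (mentioned above) looks roughly like this; server name and listen address are the placeholders from above, and I stripped everything not relevant to the question, so treat it as a simplified sketch:
server {
    listen 10.10.10.10:80;
    server_name monitoring.myhost.tld;

    location / {
        # Kibana only listens on localhost:5601, Nginx is the external face
        proxy_pass http://localhost:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}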
Config files (I've only listed the changes I made):
Server1 (Elasticsearch):
/etc/elasticsearch/elasticsearch.yml:
network.host: localhost
Server1 (Kibana):
//No changes
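"No changes" means I rely on the Kibana defaults; as far as I know, the relevant ones in /etc/kibana/kibana.yml ship commented out like this:
#server.port: 5601
#server.host: "localhost"
#elasticsearch.hosts: ["http://localhost:9200"]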
Server1 (Logstash):
/etc/logstash/conf.d/02-beats-input.conf:
input {
  beats {
    port => "5044"
    host => "1.1.1.1"
  }
}
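To rule out a basic connectivity problem (beyond the ping test listed under "Further information" below), these are the checks I would run; 1.1.1.1 again stands in for the real internal IP, and nc may need to be installed first:
# on Server1: is Logstash listening on the beats port?
ss -tlnp | grep 5044
# on Server2/ServerX: is the port reachable?
nc -zv 1.1.1.1 5044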
/etc/logstash/conf.d/10-syslog-filter.conf:
filter {
  if [fileset][module] == "system" {
    # Parse sshd, sudo, groupadd and useradd messages from the system auth fileset
    if [fileset][name] == "auth" {
      grok {
        match => { "message" => ["%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2(: %{GREEDYDATA:[system][auth][ssh][signature]})?",
          "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} user %{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]}",
          "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: Did not receive identification string from %{IPORHOST:[system][auth][ssh][dropped_ip]}",
          "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sudo(?:\[%{POSINT:[system][auth][pid]}\])?: \s*%{DATA:[system][auth][user]} :( %{DATA:[system][auth][sudo][error]} ;)? TTY=%{DATA:[system][auth][sudo][tty]} ; PWD=%{DATA:[system][auth][sudo][pwd]} ; USER=%{DATA:[system][auth][sudo][user]} ; COMMAND=%{GREEDYDATA:[system][auth][sudo][command]}",
          "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} groupadd(?:\[%{POSINT:[system][auth][pid]}\])?: new group: name=%{DATA:[system][auth][groupadd][name]}, GID=%{NUMBER:[system][auth][groupadd][gid]}",
          "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} useradd(?:\[%{POSINT:[system][auth][pid]}\])?: new user: name=%{DATA:[system][auth][useradd][name]}, UID=%{NUMBER:[system][auth][useradd][uid]}, GID=%{NUMBER:[system][auth][useradd][gid]}, home=%{DATA:[system][auth][useradd][home]}, shell=%{DATA:[system][auth][useradd][shell]}$",
          "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{DATA:[system][auth][program]}(?:\[%{POSINT:[system][auth][pid]}\])?: %{GREEDYMULTILINE:[system][auth][message]}"] }
        pattern_definitions => {
          "GREEDYMULTILINE" => "(.|\n)*"
        }
        remove_field => "message"
      }
      date {
        match => [ "[system][auth][timestamp]", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
      # Resolve the source IP of SSH attempts to a location (used by the map visualization)
      geoip {
        source => "[system][auth][ssh][ip]"
        target => "[system][auth][ssh][geoip]"
      }
    }
    # Generic syslog lines
    else if [fileset][name] == "syslog" {
      grok {
        match => { "message" => ["%{SYSLOGTIMESTAMP:[system][syslog][timestamp]} %{SYSLOGHOST:[system][syslog][hostname]} %{DATA:[system][syslog][program]}(?:\[%{POSINT:[system][syslog][pid]}\])?: %{GREEDYMULTILINE:[system][syslog][message]}"] }
        pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
        remove_field => "message"
      }
      date {
        match => [ "[system][syslog][timestamp]", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
    }
  }
}
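For reference, the first sshd pattern in the grok filter above is meant to match failed/accepted login lines like this one (made-up example with a documentation IP):
Apr  3 12:34:56 server2 sshd[1234]: Failed password for invalid user admin from 203.0.113.7 port 51234 ssh2
If I read the pattern correctly, that should produce system.auth.ssh.event = "Failed", system.auth.ssh.method = "password", system.auth.user = "admin", system.auth.ssh.ip = "203.0.113.7" and system.auth.ssh.port = "51234".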
/etc/logstash/conf.d/30-elasticsearch-output.conf:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
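For debugging I'm considering temporarily adding a stdout output next to the Elasticsearch one, to see the events Logstash actually emits and whether the [fileset][module]/[fileset][name] conditions in the filter match at all (not part of my current config):
# hypothetical extra file, e.g. /etc/logstash/conf.d/99-debug-output.conf
output {
  stdout { codec => rubydebug }
}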
Server1/Server2/ServerX... (Filebeat):
/etc/filebeat/filebeat.yml:
//I commented out the Elasticsearch output:
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]
//And uncommented the Logstash output:
output.logstash:
  # The Logstash hosts
  hosts: ["1.1.1.1:5044"]
The Filebeat configuration on Server2 through ServerX is always the same.
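Because the logs show up as "system.auth", I assume the Filebeat system module is what produces them. This is how I understand the module and the connection to Logstash can be checked on each client (please correct me if these commands are not the right way to verify this):
filebeat modules list     # "system" should appear under Enabled
filebeat test config      # configuration should be reported as OK
filebeat test output      # should end with "talk to server... OK" for 1.1.1.1:5044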
Further information:
- All internal IPs are pingable
- I installed adoptopenjdk-11-hotspot for Logstash
- I mainly used this tutorial to set up everything: https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-centos-7
- Obviously, I replaced the real IPs for this post; I did not actually use 1.1.1.1 in my configs
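One assumption on my side that might be relevant: as far as I understand, the prebuilt [Filebeat System] dashboards have to be loaded into Kibana once via filebeat setup, roughly like this on Server1 (flags written from memory, so this may well be where my mistake is):
filebeat setup --dashboards -E setup.kibana.host="localhost:5601"
If I understand the %{[@metadata][beat]}-... index name in the Logstash output correctly, the resulting indices (filebeat-7.x.x-YYYY.MM.dd) should still match the filebeat-* index pattern those dashboards use.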
Is there anyone here who can help me with this problem, or point me to a link if someone has already had a similar issue? (I wasn't able to find anything.)
Best regards,
Arnim