I am sending data from multiple Linux servers to Logstash using Filebeat, which then forwards the data to Elasticsearch after applying parsing rules for SSH logs. The problem is that when I open the default SSH dashboards, the graphs are not displaying properly. After some research and experimentation, I discovered that there are duplicate fields in the dashboards, which is causing the issue.
In the SSH login dashboard there are two fields: (i) `system.auth.ssh.event` and (ii) `system.auth.ssh.event.keyword`. The dashboard data is being stored in the second field instead of the first one, and I'm unable to determine where this second field is coming from.
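From what I can tell, Elasticsearch's default dynamic mapping indexes any unmapped string field as `text` with a `keyword` multi-field, which would explain where the second field comes from. A quick way to see this (the index name is illustrative, and the host/credentials are placeholders for mine):

```shell
# Index a document with an unmapped string field into a scratch index
curl -u elastic:PASSWORD -X POST "http://x.x.x.x:9200/test-dynamic/_doc" \
  -H 'Content-Type: application/json' \
  -d '{"system.auth.ssh.event": "Accepted"}'

# The generated mapping shows the text + keyword multi-field that
# dynamic mapping creates by default:
curl -u elastic:PASSWORD "http://x.x.x.x:9200/test-dynamic/_mapping?pretty"
# "event" : { "type" : "text",
#             "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } }
```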
I created a custom template and set the field type to `text`, but the dashboards are still using the field as `system.auth.ssh.event.keyword`.
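For reference, the template I tried was along these lines (legacy template syntax; the template name and index pattern here are from memory). I'm not sure this is even the right approach, since visualizations that aggregate on a field generally need a `keyword` type rather than `text`:

```shell
# Legacy index template mapping the event field as plain text
# (no .keyword sub-field). Host and credentials are placeholders.
curl -u elastic:PASSWORD -X PUT "http://x.x.x.x:9200/_template/ssh-event-text" \
  -H 'Content-Type: application/json' \
  -d '{
    "index_patterns": ["filebeat-*"],
    "mappings": {
      "properties": {
        "system.auth.ssh.event": { "type": "text" }
      }
    }
  }'
```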
I hope this explanation clarifies the situation.
Here is my Logstash rule file:
```
filter {
  grok {
    match => {
      "message" => [
        "%{SYSLOGTIMESTAMP} %{HOSTNAME:host.name} sshd\[%{NUMBER:pid}]: %{DATA:system.auth.ssh.event} %{DATA:system.auth.ssh.method} for (invalid user )?%{DATA:user.name} from %{IP:source.ip} port %{NUMBER:port} %{GREEDYDATA}",
        "%{SYSLOGTIMESTAMP} %{HOSTNAME:host.name} sshd\[%{NUMBER:pid}]: %{GREEDYDATA}: %{DATA:system.auth.ssh.event}; %{DATA}=%{DATA}=%{DATA}=%{DATA}=%{DATA}=%{DATA}=%{IP:source.ip}",
        "%{SYSLOGTIMESTAMP} %{HOSTNAME:host.name} sshd\[%{NUMBER:pid}]: %{GREEDYDATA}: %{DATA:system.auth.ssh.event} for (invalid user )?%{DATA:user.name} from %{IP:source.ip} port %{NUMBER:port} %{GREEDYDATA}",
        "\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{WORD:module}:notice\] \[%{GREEDYDATA} %{POSINT:pid}:%{GREEDYDATA} %{POSINT:pid}\] %{WORD}: %{DATA}, %{WORD:service.status}%{SPACE}%{WORD:service.status}",
        "%{TIME}Z %{NUMBER} \[%{DATA}] %{DATA:system.auth.ssh.event} for user %{QUOTEDSTRING:user.name}@%{QUOTEDSTRING:source.ip}",
        "%{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:hostname} sshd\[%{NUMBER:pid}\]: Connection closed by authenticating user %{USERNAME:user.name} %{IP:source.ip} port %{NUMBER:port} \[%{DATA:auth_status}\]",
        "%{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:host} sshd\[%{NUMBER:pid}\]: %{GREEDYDATA:system.auth.ssh.event} user %{USERNAME:user.name} from %{IP:source.ip} port %{NUMBER:port}"
      ]
    }
  }
}

output {
  elasticsearch {
    hosts    => ["http://x.x.x.x:9200"]
    user     => "elastic"
    password => "LOupmnhp"
    index    => "filebeat-"
  }
}
```
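To figure out where the mapping is actually coming from, I have been inspecting the live field mapping and the installed templates like this (host and credentials are placeholders, as above):

```shell
# Show how the event field (and any .keyword sub-field) is currently
# mapped across the Filebeat indices
curl -u elastic:PASSWORD \
  "http://x.x.x.x:9200/filebeat-*/_mapping/field/system.auth.ssh.event*?pretty"

# List the templates that could be applying the keyword multi-field
curl -u elastic:PASSWORD "http://x.x.x.x:9200/_template/filebeat*?pretty"
```

Any pointers on which of these mappings or templates wins, and how to get the dashboards to read `system.auth.ssh.event` directly, would be appreciated.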