Dashboards are not picking up the right fields

I am sending data from multiple Linux servers to Logstash using Filebeat, which then forwards the data to Elasticsearch after applying parsing rules for SSH logs. The problem is that when I open the default SSH dashboards, the graphs are not displaying properly. After some research and experimentation, I discovered that there are duplicate fields in the dashboards, which is causing the issue.

In the SSH login dashboard, there are two fields: (i) system.auth.ssh.event and (ii) system.auth.ssh.event.keyword. The dashboard data is being stored in the second field instead of the first one, and I'm unable to determine where this second field is coming from.

I created a custom template and set the field type to "text," but the dashboards are still using the field as system.auth.ssh.event.keyword.
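
For illustration, the template I created was along these lines (a simplified sketch rather than the exact body; the template name, index pattern, and priority here are stand-ins):

```
PUT _index_template/custom-ssh-fields
{
  "index_patterns": ["filebeat-*"],
  "priority": 500,
  "template": {
    "mappings": {
      "properties": {
        "system.auth.ssh.event": { "type": "text" }
      }
    }
  }
}
```

Even with this loaded, the visualizations still point at system.auth.ssh.event.keyword.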

I hope this explanation clarifies the situation.

Here is my Logstash pipeline configuration file:

```
filter {
  grok {
    # Patterns are tried in order; grok stops at the first one that matches.
    match => {
      "message" => [
        "%{SYSLOGTIMESTAMP} %{HOSTNAME:host.name} sshd\[%{NUMBER:pid}\]: %{DATA:system.auth.ssh.event} %{DATA:system.auth.ssh.method} for (invalid user )?%{DATA:user.name} from %{IP:source.ip} port %{NUMBER:port} %{GREEDYDATA}",
        "%{SYSLOGTIMESTAMP} %{HOSTNAME:host.name} sshd\[%{NUMBER:pid}\]: %{GREEDYDATA}: %{DATA:system.auth.ssh.event}; %{DATA}=%{DATA}=%{DATA}=%{DATA}=%{DATA}=%{DATA}=%{IP:source.ip}",
        "%{SYSLOGTIMESTAMP} %{HOSTNAME:host.name} sshd\[%{NUMBER:pid}\]: %{GREEDYDATA}: %{DATA:system.auth.ssh.event} for (invalid user )?%{DATA:user.name} from %{IP:source.ip} port %{NUMBER:port} %{GREEDYDATA}",
        "\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{WORD:module}:notice\] \[%{GREEDYDATA} %{POSINT:pid}:%{GREEDYDATA} %{POSINT:pid}\] %{WORD}: %{DATA}, %{WORD:service.status}%{SPACE}%{WORD:service.status}",
        "%{TIME}Z %{NUMBER} \[%{DATA}\] %{DATA:system.auth.ssh.event} for user %{QUOTEDSTRING:user.name}@%{QUOTEDSTRING:source.ip}",
        "%{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:hostname} sshd\[%{NUMBER:pid}\]: Connection closed by authenticating user %{USERNAME:user.name} %{IP:source.ip} port %{NUMBER:port} \[%{DATA:auth_status}\]",
        "%{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:host} sshd\[%{NUMBER:pid}\]: %{GREEDYDATA:system.auth.ssh.event} user %{USERNAME:user.name} from %{IP:source.ip} port %{NUMBER:port}"
      ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://x.x.x.x:9200"]
    user => "elastic"        # credentials must be quoted strings
    password => "LOupmnhp"
    index => "filebeat-"
  }
}
```

@huzaifa224

Apologies, it is not clear exactly what you are trying to accomplish, nor what the issue is.
It looks like you have a mismatch between the OOTB dashboards, the templates, and your data.

First: what version of the Stack are you running?

Second: are you trying to use the Filebeat System module for the SSH logs?

That is how the OOTB dashboards are designed to work: driven by Filebeat.

I highly recommend getting this to work on one of your hosts first.

Filebeat System Module -> Elasticsearch (No Logstash)

Use the method described in the Filebeat Quick Start; follow those directions and get it to work.
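
For reference, a minimal filebeat.yml for that direct setup looks something like this (hosts and credentials are placeholders for your environment), with the module enabled via `filebeat modules enable system`:

```
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml

setup.kibana:
  host: "http://x.x.x.x:5601"        # placeholder: your Kibana

output.elasticsearch:
  hosts: ["http://x.x.x.x:9200"]     # placeholder: your Elasticsearch
  username: "elastic"
  password: "<your-password>"        # placeholder
```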

Running

```
filebeat setup -e
```

is a key step: Filebeat needs to be pointed at both Elasticsearch and Kibana, and this command loads the index templates and the dashboards.

Then start Filebeat and see if the dashboards work (you will probably need to clean up the existing filebeat indices first).
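
For the cleanup, something like this from Kibana Dev Tools will do it (assuming your cluster allows wildcard deletes; on newer versions the action.destructive_requires_name setting may block it):

```
DELETE /filebeat-*
```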

Get this to work BEFORE you try putting Logstash in the middle.

Once it does, then you can work on putting Logstash in the middle ...

Filebeat System Module -> Logstash -> Elasticsearch

Depending on your version, you will need to follow the matching documentation for running the modules through Logstash. NOTE: the 8.x and 7.x instructions are different.
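
As a sketch of what that pipeline looks like (adapted from the pattern in those docs, with placeholder hosts and credentials), the key point is handing each event to the module's ingest pipeline instead of grokking it yourself:

```
input {
  beats {
    port => 5044
  }
}

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["http://x.x.x.x:9200"]                          # placeholder
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      pipeline => "%{[@metadata][pipeline]}"                    # hand off to the module's ingest pipeline
      user => "elastic"
      password => "<your-password>"
    }
  } else {
    elasticsearch {
      hosts => ["http://x.x.x.x:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      user => "elastic"
      password => "<your-password>"
    }
  }
}
```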

Also, skip this step:

```
filebeat setup --pipelines --modules nginx,system
```

IF you already ran `filebeat setup -e` when you were going Filebeat -> Elasticsearch direct (when Filebeat ships directly to Elasticsearch, the ingest pipelines for the enabled modules get loaded automatically).

system.auth.ssh.event will be mapped to a keyword type if you properly load the templates. (The system.auth.ssh.event.keyword subfield you are seeing comes from Elasticsearch dynamic mapping, which creates a text field plus a .keyword multi-field when no template covers the field.)
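
You can confirm what the field is actually mapped to from Kibana Dev Tools (the index pattern here assumes the default filebeat indices):

```
GET filebeat-*/_mapping/field/system.auth.ssh.event*
```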

Are you talking about these Dashboards?

