SSH login attempts dashboard on Kibana

Hello Team,

I am using ELK 6.4.0 and Beats (Filebeat, Metricbeat). My architecture is Filebeat -> Logstash -> Elasticsearch -> Kibana.

I am sending my auth.log using Filebeat, but I am not using the Filebeat system module, because the system module cannot be used directly with Logstash. So I am using a Logstash pipeline instead. My grok filter for auth.log looks like this:

grok {
  match => {
    "message" => [
      "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2(: %{GREEDYDATA:[system][auth][ssh][signature]})?",
      "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} user %{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]}",
      "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: Did not receive identification string from %{IPORHOST:[system][auth][ssh][dropped_ip]}",
      "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sudo(?:\[%{POSINT:[system][auth][pid]}\])?: \s*%{DATA:[system][auth][user]} :( %{DATA:[system][auth][sudo][error]} ;)? TTY=%{DATA:[system][auth][sudo][tty]} ; PWD=%{DATA:[system][auth][sudo][pwd]} ; USER=%{DATA:[system][auth][sudo][user]} ; COMMAND=%{GREEDYDATA:[system][auth][sudo][command]}",
      "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} groupadd(?:\[%{POSINT:[system][auth][pid]}\])?: new group: name=%{DATA:[system][auth][groupadd][name]}, GID=%{NUMBER:[system][auth][groupadd][gid]}",
      "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} useradd(?:\[%{POSINT:[system][auth][pid]}\])?: new user: name=%{DATA:[system][auth][user][add][name]}, UID=%{NUMBER:[system][auth][user][add][uid]}, GID=%{NUMBER:[system][auth][user][add][gid]}, home=%{DATA:[system][auth][user][add][home]}, shell=%{DATA:[system][auth][user][add][shell]}$",
      "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{DATA:[system][auth][program]}(?:\[%{POSINT:[system][auth][pid]}\])?: %{GREEDYMULTILINE:[system][auth][message]}"
    ]
  }
  pattern_definitions => {
    "GREEDYMULTILINE" => "(.|\n)*"
  }
}
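For context, the geoip fields the Filebeat dashboard expects (such as system.auth.ssh.geoip.country_iso_code) are normally produced by a separate geoip filter placed after the grok filter, not by grok itself. A minimal sketch, assuming the stock Logstash geoip plugin and the source field name from the grok filter above:

```conf
# Enrich SSH events with GeoIP data looked up from the client IP.
# With the bundled GeoLite2 database this populates fields such as
# [system][auth][ssh][geoip][country_iso_code] and [geoip][country_name].
geoip {
  source => "[system][auth][ssh][ip]"
  target => "[system][auth][ssh][geoip]"
}
```

Without a filter like this in the pipeline, no geoip fields are created at all, which would explain the empty visualization.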

The auth.log entries are reaching the Kibana dashboard. But when I check the Filebeat dashboard for SSH login attempts, I am not seeing any value for the field system.auth.ssh.geoip.country_iso_code. Please refer to the screenshot below:

I have checked in Kibana and found that this field is not created and is not available in the log. Please refer to the screenshot below:

Can we replace the field system.auth.ssh.geoip.country_iso_code with system.auth.ssh.geoip.country_code2 in Visualize in Kibana? I have checked but didn't find any such option. Or can we add a field to the grok filter for the auth log?

I also want to trace the country name from which the SSH attempts against our servers are made.

Please help me to fix the issue. Any assistance will be appreciated.

Thanks in advance.

Hello Team,

Can you please help me with the above issue?

Thanks.

Hello,

This is a Logstash question; you should ask it in that forum, as we can't really help you here.

@Marius, Thank you for your response.

Sure, I will ask it in the Logstash forum.

But I have one question for you: can we replace the field system.auth.ssh.geoip.country_iso_code with system.auth.ssh.geoip.country_code2 in the Visualize view of SSH Login attempts?

Thanks.

Yeah, just edit the visualization and change the field. It shouldn't create any problems.

@marius, Thank you for your response.

I have tried to replace the field system.auth.ssh.geoip.country_iso_code with system.auth.ssh.geoip.country_code2 in the Visualize view of SSH Login attempts, but didn't find any such option. Can you please let me know where I can find this field and replace it?

I have tried to add a sub-bucket, but that also didn't work.

Please refer the below screenshot:

Thanks.

You need to look in the saved searches, in the Discover tab, at the SSH login attempts saved object. Then find the field in the left-hand list of fields and replace the system.auth.ssh.geoip.country_iso_code field with system.auth.ssh.geoip.country_code2.

@Marius, I have made the changes as you suggested, i.e. removed that field, added the new fields as per our requirements, and saved the search.
Please refer the below screenshot:

But when I check the Filebeat dashboard for SSH Login Attempts under the Dashboard tab, it still shows the old fields and is not updated with the new ones.
I have restarted the Kibana service as well after making the changes.
Please refer the below screenshot:

Can you please help me?

Thanks.

Did you save the search without having checked the Save as new search checkbox after you changed the column? There is no need to restart the Kibana service for any changes to the saved objects.

@Marius,

Yes, I saved the search without checking Save as new search.

And the dashboard still didn't update with the new value displayed in the column? If this is the case, you can try removing the saved search from the dashboard and adding it again.

Yes... the dashboard is still not updated with the new values displayed in the column.

Can you please explain it step by step? I deleted the [Filebeat System] SSH login attempts dashboard in my testing environment and added it again from the saved searches. The dashboard is now created, but it is not showing under the Filebeat SSH login dashboard. It's broken.

I don't want to break anything in my production environment.

So please help me.

Thanks.

Can you post a screenshot of your Filebeat SSH login dashboard? It would help me to understand what exactly is your status right now.

@Marius, Thank you for your quick response.

Please find the screenshots. I have attached two screenshots because a single screenshot could not cover all the data.


Ok, so, going from the start, the steps would be:

  1. Change the SSH Login Attempts saved search as you've done it before.
  2. Open the dashboard, click on Edit on top right and some borders will be shown for each panel on the dashboard.
  3. Remove the SSH Login Attempts panel and then click on Add to add it again with the new fields.

That should be it. Even if something gets messed up, just don't click Save, so it won't be a destructive action.


@Marius, Thank you for your prompt response.
Now it's working fine and showing all the required fields in the columns.

I really appreciate your efforts. :slight_smile:

Thanks once again.


Can someone please share the exact syntax for the SSH root attempts dashboard in Kibana, or in JSON format? Also, I need to monitor the queue length of the logs.

@Prasobh,

Sorry, I didn't get you. Can you please elaborate a little bit more?
If you are using Filebeat, then you have the default Filebeat dashboard for SSH login attempts. You need to load the Filebeat dashboards; it's a one-time setup.

Thanks.
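The dashboard loading described above can be done with a single command; a sketch, assuming Filebeat 6.x and a local Kibana (the host value is a placeholder to adjust for your environment):

```shell
# Load the bundled Filebeat dashboards into Kibana (one-time setup).
filebeat setup --dashboards -E setup.kibana.host=localhost:5601
```

Alternatively, setting `setup.kibana.host` in filebeat.yml lets you run `filebeat setup --dashboards` without the `-E` override.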

Okay, thanks. Currently I am using Logstash with Kibana. Let me try with Filebeat and I will come back to you.

[root@k8s-master ~]# more /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

setup.kibana:

output.elasticsearch:
  hosts: ["XXXXXXX:9200"]
  protocol: "https"
  username: "elasticsearch"
  #password: "elasticsearch"

[root@k8s-master ~]#

I have configured Filebeat on the client side, but when I try to start the service I get the error below:

Oct 11 16:52:27 k8s-master systemd[1]: Unit filebeat.service entered failed state.
Oct 11 16:52:27 k8s-master systemd[1]: filebeat.service failed.
Oct 11 16:52:27 k8s-master systemd[1]: filebeat.service holdoff time over, scheduling restart.
Oct 11 16:52:27 k8s-master systemd[1]: start request repeated too quickly for filebeat.service
Oct 11 16:52:27 k8s-master systemd[1]: Failed to start Filebeat sends log files to Logstash or directly to Elasticsearch..
Oct 11 16:52:27 k8s-master systemd[1]: Unit filebeat.service entered failed state.
Oct 11 16:52:27 k8s-master systemd[1]: filebeat.service failed.
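The systemd messages above only show that the service is in a restart loop, not why it failed. A sketch of how to surface the underlying error, assuming Filebeat 6.x on a systemd host:

```shell
# Validate the configuration file; a YAML indentation error like the
# one in the filebeat.yml above is a common cause of this restart loop,
# and the parse error will be printed here.
filebeat test config -c /etc/filebeat/filebeat.yml

# Inspect the full Filebeat log lines that the systemd summary omits.
journalctl -u filebeat.service --no-pager | tail -n 50
```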