[LOGSTASH] How to parse logs over SSH


I'm new to ELK and I don't understand one thing.

I installed ELK with basic options. I did some small tests locally with the "File" input and it worked fine.
Now I want to parse the logs of a remote machine, but I have only gotten it working halfway.

I use the tcp and udp inputs with a specific port for the syslog file. I can see my index in Kibana, but it doesn't really work: I get a grok failure in Discover, and I don't get all the parsed fields that I get with the local setup.

I think it's the SSH side that isn't set up properly. I suspect I should somehow point it at ssh "root@IPaddress", but I don't see a parameter for that. Is it necessary to use an SSL certificate?

  • My SSH is configured on port 8444 in sshd_config
  • I don't have any errors when I start my pipeline
  • I want to test without Beats
  • I don't want to use TELNET

And here you can see the error message in Kibana:


The TCP and UDP inputs don't work with SSH?

Here is the Logstash config file:

  input {
    tcp {
      port => 8444
      type => syslog
      mode => "client"
      host => ""
    }
    udp {
      port => 8444
      type => syslog
    }
  }

  filter {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }

  output {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash-sysloghaproxy-%{+YYYY.MM.dd}"
    }
  }
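As a side note, the grok pattern in the filter can be prototyped outside Logstash to check what fields a sample line should produce. Below is a rough Python translation of that pattern; the regexes standing in for SYSLOGTIMESTAMP, SYSLOGHOST, DATA, POSINT and GREEDYDATA are simplified approximations, not Logstash's exact definitions:

```python
import re

# Approximate Python equivalent of the grok pattern from the config above.
# Each named group mirrors one grok capture; the sub-patterns are
# simplified stand-ins for the real grok library definitions.
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>[A-Z][a-z]{2}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "  # SYSLOGTIMESTAMP
    r"(?P<syslog_hostname>\S+) "                                         # SYSLOGHOST
    r"(?P<syslog_program>.*?)"                                           # DATA
    r"(?:\[(?P<syslog_pid>\d+)\])?: "                                    # optional [POSINT]
    r"(?P<syslog_message>.*)"                                            # GREEDYDATA
)

# A sample syslog line such as the tcp/udp input would receive.
line = "Mar  1 12:34:56 myhost sshd[1234]: Accepted password for root"
m = SYSLOG_RE.match(line)
print(m.groupdict())
```

If a line from the remote machine does not match a pattern like this, Logstash tags the event with _grokparsefailure, which is the error visible in Discover.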

Nobody knows? :confused:

A tcp input expects lines of text separated by newlines; it does not speak the SSH protocol. If you want to use SSH across the network you would have to set up an SSH tunnel (port forwarding) at the Logstash end and connect to the input through it.
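To sketch what such a tunnel could look like: SSH remote port forwarding can open a port on the remote machine that leads back to the Logstash tcp input. This is only an illustration, not a tested setup; "root@IPaddress" and the sshd port 8444 come from the question, while port 5140 is an arbitrary placeholder, and it assumes the remote sshd permits forwarding:

```shell
# Run on the Logstash host. Opens port 5140 on the remote machine and
# forwards anything sent there back through the tunnel to the local
# Logstash tcp input listening on port 8444.
# -p 8444 is the remote sshd port; -N means "no remote command".
ssh -p 8444 -N -R 5140:localhost:8444 root@IPaddress
```

The remote machine's syslog daemon would then be pointed at 127.0.0.1:5140 (for rsyslog, a rule like `*.* @@127.0.0.1:5140` sends over TCP), so the log lines arrive at the input as plain newline-delimited text, which is all the tcp input understands.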

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.