Syslog pipeline => [host] error in Elasticsearch

Hi,
Sorry, this may be a newbie question, but I have read lots of threads and examples on the net
about setting up a syslog pipeline, and I still cannot get my Logstash pipeline to index into ES.

Platform: CentOS 8.3. EPEL versions of Logstash v7.10.2 / Elasticsearch v7.10.2.

   input {
        syslog {
                port => 5514
        }
    }

    filter {
        grok {
            match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
            add_field => [ "received_at", "%{@timestamp}" ]
            add_field => [ "received_from", "%{host}" ]
        }
        date {
            match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
    }
    output {
        elasticsearch { # output to ES
            hosts => [ "es-host:9200" ]
        }
        stdout { codec => rubydebug }
    }

Nota bene:
I tried a different input, e.g. a plain tcp input without the syslog plugin: same error.
I tried removing the grok block: same error.
I also tried adding a mutate filter as suggested in some posts on the net, but when I do, the filter does not compile:

  # does not compile:
  mutate {  replace { "[host]" => "[host][name]" }  }
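
For reference, a version of that mutate which does compile uses the rename option with the => arrow; a minimal sketch, assuming the intent is to move the plain host string under [host][name]:

    # sketch: turn the concrete "host" string into a nested [host][name] field
    mutate {
        rename => { "host" => "[host][name]" }
    }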

Error from Elasticsearch, as seen in the Logstash logs:

    Could not index event to Elasticsearch. [...] , "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", 
    "reason"=>"object mapping for [host] tried to parse field [host] as object, but found a concrete value"

Any help would be greatly appreciated, thanks

Can you give a sample of what the syslog looks like?

To clarify: I'm using "classic" rsyslog to forward syslog messages to the Logstash server on port 5514.
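
The forwarding rule on the clients is roughly the classic rsyslog syntax below (the hostname is a placeholder; @@ means TCP, a single @ would be UDP):

    # /etc/rsyslog.conf (or a drop-in under /etc/rsyslog.d/): forward everything to Logstash over TCP
    *.* @@logstash-host:5514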

I hope this is what you want; I got it from the stdout output:

 {
    "@version" => "1",
    "message" => "<11>Jan 29 16:17:56 my-server setroubleshoot: SELinux is preventing ... etc.",
    "host" => "my-server.example.com",
    "@timestamp" => 2021-01-29T15:17:56.229Z,
    "port" => 53280
 }

Another one:

{
    "priority" => 4,
    "message" => "REJECTED: IN=eth0 OUT= MAC=00: etc.",
    "facility_label" => "kernel",
    "severity" => 4,
    "timestamp" => "Jan 29 16:25:59",
    "facility" => 0,
    "host" => "192.168.0.2",
    "@version" => "1",
    "program" => "kernel",
    "logsource" => "my-server",
    "@timestamp" => 2021-01-29T15:25:59.000Z,
    "severity_label" => "Warning"
}

I can't replicate the issue. The config below worked with no errors. Do you see anything different from what I am doing?

input {
  generator {
    lines => [
     '<11>Jan 29 16:17:56 my-server setroubleshoot: SELinux is preventing ... etc.'
    ]
    count => 1
    codec => "line"
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    add_field => [ "received_at", "%{@timestamp}" ]
    add_field => [ "received_from", "%{host}" ]
  }
  date {
    match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
}
output {
  elasticsearch { # output to ES
    hosts => [ "localhost:9200" ]
  }
  stdout { codec =>  "rubydebug" }
}

Sorry, I had commented out the grok block; here is some output now:

{
    "severity_label" => "Notice",
    "logsource" => "my-server-test",
    "message" => "SELinux is preventing  etc.",
    "@version" => "1",
    "host" => "192.168.52.123",
    "tags" => [
        [0] "_grokparsefailure"
    ],
    "program" => "python",
    "severity" => 5,
    "facility_label" => "user-level",
    "priority" => 13,
    "timestamp" => "Jan 29 16:55:52",
    "facility" => 1,
    "@timestamp" => 2021-01-29T15:55:52.000Z
}

Can you just run the below so I can see the raw data? I am not getting a good picture of what your actual logs look like.

input { syslog { port => 5514 } }
output { stdout { codec => rubydebug } }
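
If it is easier, that snippet can be run as a one-off pipeline from the command line rather than editing the pipeline files (the path below assumes the default RPM install location; stop the Logstash service first if it is already listening on 5514):

    # sketch: run a throwaway pipeline and print events to the console (install path assumed)
    /usr/share/logstash/bin/logstash -e 'input { syslog { port => 5514 } } output { stdout { codec => rubydebug } }'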

Well, I have no more output in /var/log/messages with this :frowning:

Here is the ASCII part of the tcpdump capture for some incoming logs:

<11>Jan 29 17:20:39 server-test setroubleshoot: SELinux is preventing [snip]
<4>Jan 29 17:20:40 server2 kernel: REJECTED: IN=eth0 OUT= MAC=00 [snip]

OK, they are back. Here is /var/log/messages on the Logstash server:

{
    "severity_label" => "Notice",
    "severity" => 5,
    "@timestamp" => 2021-01-29T16:25:54.000Z,
    "message" => "SELinux is preventing  [snip]",
    "timestamp" => "Jan 29 17:25:54",
    "logsource" => "server-test",
    "host" => "192.168.52.123",
    "facility" => 1,
    "facility_label" => "user-level",
    "program" => "python",
    "priority" => 13,
    "@version" => "1"
 }

Both your test messages work correctly for me with the configuration you have in the first post. I am not seeing what the issue is.

Are you still getting errors? If so, I would need to know the error, along with the raw source data and the configuration used, to help out.

OK, thanks. I need to leave now; I'll be back on Monday with more diagnostic data, I hope.
Have a nice weekend!


Hi,
I've been working on this issue for hours, and it finally works!
I updated and rebooted all my Elastic Stack servers, and then the only thing I ended up changing
was to add an "index" directive to the elasticsearch output:

input { syslog { port => 5514 } }

filter { }

output {
    elasticsearch {
            hosts => [ "elastic-host:9200" ]
            index => [ "logstash-syslog-%{+yyyy.MM.dd}" ]
    }
}

Now I just need to parse the syslog messages more efficiently, but that's another job :slight_smile:


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.