Logstash vs rsyslog java stack multiline issue


(Marcello A) #1

Hi All,
Has anyone configured multiline events starting from rsyslog messages? We tried to implement a catch-all Logstash instance for all the messages forwarded by Docker daemons via syslog, but we can't parse multiline events.

Our configuration is:

input {
  syslog {
    type => "docker-dev-support"
    host => "0.0.0.0"
    port => 6000
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => false
      what => "previous"
    }
  }
} 

Thanks,
Marcello


(Magnus Bäck) #2

You'd need to have negate => true.
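The effect of the two settings can be sketched outside Logstash. This is my own approximation of what the multiline codec does with `negate => true` and `what => "previous"` (a line that does NOT match the pattern is joined to the previous event), not Logstash's actual implementation:

```python
import re

# Approximation of %{TIMESTAMP_ISO8601} at start of line (hand-written,
# not the real grok expansion).
TIMESTAMP_ISO8601 = re.compile(r"^\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}")

def join_multiline(lines):
    """negate => true, what => "previous": non-matching lines join the previous event."""
    events = []
    for line in lines:
        if TIMESTAMP_ISO8601.match(line) or not events:
            events.append(line)           # matching line starts a new event
        else:
            events[-1] += "\n" + line     # non-matching line: fold into previous
    return events

lines = [
    "2017-08-18 13:32:13,998 DEBUG Error in test:",
    "java.lang.NullPointerException: null",
    "        at it.company.mycards.view.test.doPost(test.java:61)",
    "2017-08-18 13:32:14,003 INFO request served",
]
events = join_multiline(lines)
print(len(events))  # 2: the exception lines are folded into the first event
```

With `negate => false` the logic flips: only lines that *match* the pattern would be joined to the previous event, which is why every row came out as its own event.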


(Marcello A) #3

We had already set it to true, but then all the messages were discarded. If we keep it at false, each row shows up as a single event.

Marcello


(Magnus Bäck) #4

Then perhaps the match expression is wrong. negate => false will not work.


(Marcello A) #5

We configured the parameter and this is the full configuration:

input {
  syslog {
    type => "docker-dev-support"
    host => "0.0.0.0"
    port => 6000
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
    }
  }
}

filter {
  if [type] == "docker-dev-support" {
    if "_grokparsefailure_sysloginput" in [tags] {
      mutate {
        remove_tag => [ "_grokparsefailure_sysloginput" ]
      }
    }
    mutate {
      remove_field => [ "message" ]
    }
  }
}

output {
  if [type] == "docker-dev-support" {
   elasticsearch {
     action => "index"
     codec => "plain"
     hosts => [ "http://localhost:9200" ]
     index => "logstash-docker-test-dev-support-%{+YYYY.MM.dd}"
   }
  }
} 

With this configuration all the messages are discarded and we can't find them in the index, not even tagged as failed. We don't know how to debug it.

Marcello


(Magnus Bäck) #6

Without knowing what your messages look like we can't help you debug it.


(Marcello A) #7

The messages are all forwarded via syslog from a logspout instance; they can be Apache access log entries or exceptions like this:

2017-08-18 13:32:13,998 DEBUG [http-nio-8080-exec-6] testcharts [test.java:78] Error in test: 
java.lang.NullPointerException: null
        at it.company.mycards.view.test.doPost(test.java:61)
        at it.company.mycards.view.test.doGet(test.java:31)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:635)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
        at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
        at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:478)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)
        at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:624)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342)
        at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:799)
        at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
        at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:861)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1455)
        at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
        at java.lang.Thread.run(Thread.java:748)

(Magnus Bäck) #8

Yes, but that message is wrapped in a syslog payload, isn't it? What does an actual syslog payload look like? I'm pretty sure the multiline codec is applied to the payload itself rather than to the parsed message that the grok expression, called internally by the syslog input, extracts.


(Marcello A) #9

If I set negate back to false and keep the message field for all events, I see this:

message: <14>1 2017-08-18T14:02:06Z 0fe037477a82 mycard-dev 25523 - - at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:624)

syslog_message: at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:624)

It seems to be parsed correctly by the syslog input, but the multiline codec doesn't create any joined entries in the index; I would expect to see some grokparsefailure tag in the index if there were a problem.
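This can be checked quickly outside Logstash. With `negate => true`, the original pattern expects the line to *start* with an ISO8601 timestamp, but every payload actually starts with the syslog prefix `<pri>version ...`, so no line ever matches and the codec never sees an event boundary (the regex is my approximation of `%{TIMESTAMP_ISO8601}`, not the grok original):

```python
import re

# Hand-written approximation of ^%{TIMESTAMP_ISO8601} (not the real grok expansion).
TIMESTAMP_ISO8601 = re.compile(r"^\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}")

# The raw payload the codec actually sees (from the message field above):
payload = "<14>1 2017-08-18T14:02:06Z 0fe037477a82 mycard-dev 25523 - - 2017-08-18 13:32:13,998 DEBUG ..."
print(bool(TIMESTAMP_ISO8601.match(payload)))  # False: the syslog prefix hides the timestamp
```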

Marcello


(Magnus Bäck) #10

Right, that's what the real payload looks like. So your multiline configuration needs to take the syslog prefix into account and join with the previous line unless it begins with something like this:

<14>1 2017-08-18T14:02:06Z 0fe037477a82 mycard-dev 25523 2017-08-18 13:32:13,998

(Marcello A) #11

Could a configuration like this work?

input {
  syslog {
    type => "docker-dev-support"
    host => "0.0.0.0"
    port => 6000
    codec => multiline {
      pattern => "^<%{NUMBER}>%{NUMBER} %{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
    }
  }
}

filter {
  if [type] == "docker-dev-support" {
    #grok {
    #  patterns_dir => ["/etc/logstash/patterns/"]
    #  match => { "message" => "%{DOCKERSYSLOG}" }
    #}
    #if "_grokparsefailure_sysloginput" in [tags] {
    #  mutate {
    #   remove_tag => [ "_grokparsefailure_sysloginput" ]
    #  }
    #}
    #mutate {
    #  remove_field => [ "message" ]
    #}
  }
}

output {
  if [type] == "docker-dev-support" {
   elasticsearch {
     action => "index"
     codec => "plain"
     hosts => [ "http://localhost:9200" ]
     index => "logstash-docker-test-dev-support-%{+YYYY.MM.dd}"
   }
  }
}

(Marcello A) #12

I think that this is the final configuration:

input {
  syslog {
    type => "docker-dev-support"
    host => "0.0.0.0"
    port => 6000
    codec => multiline {
      pattern => "^<%{NUMBER}>%{NUMBER} %{TIMESTAMP_ISO8601} %{NOTSPACE} %{NOTSPACE} %{NUMBER} - - (%{TIMESTAMP_ISO8601}|%{IPORHOST}|\[%{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR}\])"
      negate => true
      what => "previous"
    }
  }
}

filter {
  if [type] == "docker-dev-support" {
    grok {
      patterns_dir => ["/etc/logstash/patterns/"]
      match => { "message" => "%{DOCKERSYSLOG}" }
    }
    if "_grokparsefailure_sysloginput" in [tags] {
      mutate {
        remove_tag => [ "_grokparsefailure_sysloginput" ]
      }
    }
    mutate {
      remove_field => [ "message" ]
    }
  }
}

output {
  if [type] == "docker-dev-support" {
    elasticsearch {
      action => "index"
      codec => "plain"
      hosts => [ "http://localhost:9200" ]
      index => "logstash-docker-test-dev-support-%{+YYYY.MM.dd}"
    }
  }
}

I still need to check whether Logstash waits for a new matching line before flushing the last rows buffered by the multiline codec.


(Marcello A) #13

We resolved it over the weekend with this final configuration:

input {
  syslog {
    type => "docker-dev-support"
    host => "0.0.0.0"
    port => 6000
    codec => multiline {
      pattern => "^<%{NUMBER}>%{NUMBER} %{TIMESTAMP_ISO8601} %{NOTSPACE} %{NOTSPACE} %{NUMBER} - - (%{TIMESTAMP_ISO8601}|%{IPORHOST}|\[%{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR}\])"
      negate => true
      what => "previous"
    }
  }
}

filter {
  if [type] == "docker-dev-support" {
    grok {
      patterns_dir => ["/etc/logstash/patterns/"]
      match => { "message" => "%{DOCKERSYSLOG}" }
    }
    if "_grokparsefailure_sysloginput" in [tags] {
      mutate {
        remove_tag => [ "_grokparsefailure_sysloginput" ]
      }
    }
    if "multiline" in [tags] {
      mutate {
        gsub => [ "syslog_message", "^<.*>.* \- \-", "" ]
      } 
    }
    mutate {
      remove_field => [ "message" ]
    }
  }
}

output {
  if [type] == "docker-dev-support" {
   elasticsearch {
     action => "index"
     codec => "plain"
     hosts => [ "http://localhost:9200" ]
     index => "logstash-docker-test-dev-support-%{+YYYY.MM.dd}"    
   }
  }
}
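A sketch of what the gsub step does to a joined event: it strips everything from the leading `<pri>` through the `- -` separator on each line. In Ruby (which Logstash's gsub uses) `^` anchors at every line start, so `re.MULTILINE` is used here to mimic that. One caveat worth noting: `.*` is greedy, so a log line that itself contained ` - -` would be truncated up to its last occurrence.

```python
import re

# re.MULTILINE mimics Ruby's default line-anchored ^ on a joined multiline event.
prefix = re.compile(r"^<.*>.* \- \-", re.MULTILINE)

joined = (
    "<14>1 2017-08-18T14:02:06Z 0fe037477a82 mycard-dev 25523 - - "
    "java.lang.NullPointerException: null\n"
    "<14>1 2017-08-18T14:02:06Z 0fe037477a82 mycard-dev 25523 - - "
    "        at it.company.mycards.view.test.doPost(test.java:61)"
)
cleaned = prefix.sub("", joined)  # syslog prefix removed from every line
print(cleaned)
```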

thanks for all the support,
Marcello


(system) #14

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.