Filebeat doesn't send proper multiline messages to Logstash

Hello,

I have a problem sending multiline log messages to Logstash using Filebeat. This is my filebeat.yml configuration:

filebeat.inputs:

- type: log
  paths:
    - "/home/mladen/Desktop/qmraz2.log"
  fields:
    qmraz2: true
  fields_under_root: true

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java stack traces or C-style line continuation.

  # The regexp pattern that has to be matched. This pattern matches all lines starting with AMQ8409.
  multiline.pattern: '^AMQ8409'

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: "after" is the equivalent of "previous" and "before" is the equivalent of "next" in Logstash.
  multiline.match: after

  multiline.flush_pattern: 5

This is a sample of my log:

AMQ8409: Display Queue details.
   QUEUE(SYSTEM.CLUSTER.REPOSITORY.QUEUE)
   TYPE(QLOCAL)                            CURDEPTH(2)
AMQ8409: Display Queue details.
   QUEUE(SYSTEM.DURABLE.SUBSCRIBER.QUEUE)
   TYPE(QLOCAL)                            CURDEPTH(1)
AMQ8409: Display Queue details.
   QUEUE(SYSTEM.HIERARCHY.STATE)           TYPE(QLOCAL)
   CURDEPTH(2)

I have tried the file output (messages are processed correctly there); this is a sample:

AMQ8409: Display Queue details.\n QUEUE(SYSTEM.CLUSTER.REPOSITORY.QUEUE)\n TYPE(QLOCAL) CURDEPTH(2)
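
For reference, the file output section I used for that test was roughly the following (the path and filename here are placeholders, not the exact values from my setup):

output.file:
  # Directory and file name for the test output; adjust to your environment.
  path: "/tmp/filebeat"
  filename: "filebeat-multiline-test"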

Also, I have created an ingest pipeline with a grok processor, and everything works as expected there.
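
For completeness, that pipeline was roughly along these lines (the pipeline name and the exact grok pattern are illustrative, not copied verbatim from my setup):

PUT _ingest/pipeline/qmraz2
{
  "description": "Parse AMQ8409 queue details (illustrative sketch)",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "AMQ8409: Display Queue details\\.\\s+QUEUE\\(%{NOTSPACE:queue_name}\\)\\s+TYPE\\(QLOCAL\\)\\s+CURDEPTH\\(%{NUMBER:curdepth:int}\\)"
        ]
      }
    }
  ]
}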

To see what is happening with the grok filter, I put just GREEDYDATA in the pattern; this is my pipeline:

input {
    beats {
        port => "5044"
    }
}

filter {
    if [qmraz2] {
        grok {
            match => { "message" => "%{GREEDYDATA}" }
        }
    }  
}

output {
    if [qmraz2] {
        if "_grokparsefailure" in [tags] {
            # write events that didn't match to a file
            file { "path" => "/grok/kaiibraz/grok_qmraz2_failures_kaiibraz.txt" }
        }

        else {
            elasticsearch {
                hosts => [ "127.0.0.1:9200" ]
                index => "iibrazqmraz2-%{+YYYY.MM}"
            }
        }

    }

}

In Elasticsearch I get the following messages:

AMQ8409: Display Queue details.
   QUEUE(KMQ.IRA.AGENT.QUEUE.5B45A835215E4116)
   TYPE(QLOCAL)                            CURDEPTH(1)

In production I have Filebeat 6.2.2 and Logstash 6.3.1. To be sure that no other filter collides with this one, I created an isolated environment and reproduced the same problem with Filebeat 6.3.1 and Logstash 6.3.1.

So, unless I missed something, for some reason I get unparsed multiline messages only when the output is Logstash.

Does anyone have a clue what I am doing wrong?

BR,
Mladen

Not sure I fully understand the problem. You are talking about some grok pattern, but there is only GREEDYDATA.

What data structure do you expect in Elasticsearch? Also, what exactly is wrong with the above message? I'm probably missing something here.

Hello @ruflin,

thank you for your prompt reply. I'll try to be less confusing :slight_smile:

I created this Logstash pipeline to receive messages from Filebeat, transform them, and send them to Elasticsearch:

input {
    beats {
        port => "5044"
    }
}

filter {
    if [qmraz2] {
        grok {
            match => { "message" => "^AMQ8409: Display Queue details.\\n%{SPACE}QUEUE\(%{NOTSPACE:queue_name}\)(\\n)?%{SPACE}TYPE\(QLOCAL\)(\\n)?%{SPACE}CURDEPTH\(%{NUMBER:curdepth:int}\)(\\n)?%{SPACE}$" }
        }
    }    
}

output {
    if [qmraz2] {
        if "_grokparsefailure" in [tags] {
            # write events that didn't match to a file
            file { "path" => "/grok/kaiibraz/grok_qmraz2_failures_kaiibraz.txt" }
        }

        else {
            elasticsearch {
                hosts => [ "127.0.0.1:9200" ]
                index => "iibrazqmraz2-%{+YYYY.MM}"
            }
        }
    }
}

Unfortunately, all messages go to grok_qmraz2_failures_kaiibraz.txt. At first I thought the problem was the grok filter, but after testing every message line from the grok failure file on a test pipeline with the same grok filter, I realized that the problem is with the lines that come from Filebeat.

Correct me if I'm wrong, but shouldn't the message in Elasticsearch be a single line because of the Filebeat multiline transformation? I didn't use the multiline feature in Filebeat before, so I have probably messed something up.

BR,
Mladen

Hello @ruflin,

I found out what the problem was: the literal \n (newline) in my grok filter, which does not exist in the message when the output is processed through Logstash. My assumption was that this symbol should be there, as that is the case when the output is saved to a file.
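
For reference, the corrected filter is roughly the following (the same pattern as before, just without the literal \n sequences, since %{SPACE} already matches the real newline characters in the joined message):

filter {
    if [qmraz2] {
        grok {
            # %{SPACE} covers the spaces and real newlines between the joined lines.
            match => { "message" => "^AMQ8409: Display Queue details.%{SPACE}QUEUE\(%{NOTSPACE:queue_name}\)%{SPACE}TYPE\(QLOCAL\)%{SPACE}CURDEPTH\(%{NUMBER:curdepth:int}\)%{SPACE}$" }
        }
    }
}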

BR,
Mladen

Glad to hear. Thanks for sharing the solution.
