Log files multiline filter (ASCII)

Hi, I would like to find out whether Logstash can process multiple consecutive lines as a single event. Here is an example of a log record:
{
1=352621443221
2=3525945678
3=20140225132715
4=345
7=0
8=
9=
10=
11=ABC-1
13=0
16=0
23=20140225132952
40=350953203616
41=356401660
134=
135=
142=267341000053
215=
216=
220=751236
223=0
225=
226=
227=85015
229=
236=
237=0
239=file.dat
243=
224=54862231580161
253=0
374=0
385=0
510=
1314=0
337=
993=265873610053
228=85469
1328=266894230053
1500=358549620
1329=268546932153
21=-1
1370=
1315=323456660
}

The log record starts with "{" and ends with "}".

Yes, you can probably use a multiline codec to join these lines. The configuration you're looking for is "unless the line begins with an opening brace, join the current line with the previous line".

Yes, each log has an opening "{" to mark its beginning and a "}" to mark its end. This holds for every record in the log file, so if a file contains 100 logs, each of the 100 starts with "{" and ends with "}". How would you define "{" and "}" as delimiters for such a file?

This might work:

input {
  file {
    path => ...
    codec => multiline {
      pattern => "^\{"
      what => "previous"
      negate => true
    }
  }
}

In other words, if the current line doesn't begin with an opening brace, join with the previous line.
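One thing to keep in mind with what => "previous": the last record in a file stays buffered until another line arrives to terminate it. If your version of the multiline codec supports the auto_flush_interval option, a sketch like this should flush that final event after a few seconds of silence:

input {
  file {
    path => ...
    codec => multiline {
      pattern => "^\{"
      what => "previous"
      negate => true
      auto_flush_interval => 5   # flush a pending event after 5 s with no new lines
    }
  }
}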

Thank you for the example. So with respect to the braces, the files look like this:
{
.....
.....
.....
}
{
.....
.....
.....
}

And so it continues for the entire file. Is it then only required to give the pattern with the opening brace?

Is it still required to also have a filter section for this file?

I have tried the following config, however it does not produce any output:
input {
  file {
    type => "DRIN"
    path => [ "/ar*DRIN" ]
    codec => multiline {
      pattern => "^{"
      what => "previous"
      negate => true
    }
  }
}

filter {
  grok {
    match => ["message", "%{GREEDYDATA:kvdata}"]
  }
  kv {
    field_split => " "
    value_split => "="
    source => "kvdata"
    remove_field => "kvdata"
  }
  date {
    locale => "en"
    match => ["3", "yyyyMMddHHmmss", "ISO8601"]
    timezone => "Africa/Windhoek"
    target => "@timestamp"
    add_field => { "debug" => "timestampMatched" }
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}

And so it continues for the entire file. Is it then only required to give the pattern with the opening brace?

Yes. Understanding how the multiline codec is supposed to work is essential.

Is it still required to also have a filter section for this file?

If you want to parse the data further, yes.
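For this key=value format a kv filter alone should be enough; the grok copy into a kvdata field is unnecessary because kv reads the message field by default. A minimal sketch, assuming the multiline codec joined your lines with newlines (hence field_split => "\n"):

filter {
  kv {
    field_split => "\n"   # one key=value pair per joined line
    value_split => "="
  }
}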

I have tried the following config, however it does not produce any output:

That's because Logstash is tailing the file. Read about sincedb in the file input documentation and study the start_position, sincedb_path, and ignore_older options.
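For example, a common pattern while testing (the values here are illustrative): start_position => "beginning" reads existing files from the start instead of tailing them, and sincedb_path => "/dev/null" stops Logstash from remembering read positions between runs:

input {
  file {
    type => "DRIN"
    path => [ "/ar*DRIN" ]
    start_position => "beginning"   # read from the start, don't just tail
    sincedb_path => "/dev/null"     # testing only: forget positions on restart
    codec => multiline {
      pattern => "^{"
      what => "previous"
      negate => true
    }
  }
}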

Thank you very much, I got it to work. The assistance is much appreciated.

I have one more question regarding the fields: the initial field descriptor is a number, e.g.

{
1=352621443221
2=3525945678
.......

The leading numbers each have a description. How do I add these as descriptors? E.g.
1 is Calling_Number
2 is Called_Number
......

I thought of using the translate plugin, however I would then need one entry for each field, and there are around 1,000 of them. Your assistance would be appreciated.

I don't see how you could get around listing all 1000 possible values in a table somewhere.

Ok, so a rename list would most probably do the trick, correct? E.g.:

mutate {
  rename => {
    "1" => "Calling_Number"
    "2" => "Called_Number"
  }
}

Yes, if the translate filter can't help out.
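If the rename list becomes unwieldy, one way to keep the whole table in a single place is a ruby filter with a lookup hash. A minimal sketch, assuming the Logstash 5+ event API (event.include?, event.remove, event.set) and a hypothetical excerpt of your descriptor table:

filter {
  ruby {
    init => "
      # hypothetical excerpt; extend with the remaining descriptors
      @field_names = {
        '1' => 'Calling_Number',
        '2' => 'Called_Number'
      }
    "
    code => "
      @field_names.each do |num, name|
        event.set(name, event.remove(num)) if event.include?(num)
      end
    "
  }
}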