Data get cut off in Logstash


(Sergey) #1

Hello,

I collect logs from 40 devices.
Logs are read by filebeat from a text file, filebeat also adds a field with the device name (dev_id).
The contents of the file look something like this:

23:53:15 SOME TEXT
******************
00424183 27-03-17 20:08:28
SOME TEXT
4..8181 (Some Text)
xxxxx/xxxxxxxxxxxxx 15.00 XXX
C1: 5; C2: 3; C3: 3; C4: 2
******************
*434*01:06:52 GO IN SERVICE COMMAND
22:15:33 SOME TEXT
22:14:32 SOME TEXT
21:04:03 SOME TEXT
21:00:11 -> SOME TEXT

From time to time, records are inserted into Elasticsearch without the offset and dev_id fields, and the _type field contains the literal value %{[@metadata][type]}.
This happens most often on lines like *923* 00:04:57 SOME TEXT, but sometimes on other lines too.
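One thing worth checking: the multiline pattern `^%{TIME} ` only matches lines that *start* with a timestamp, so the `*923*`-prefixed lines never start a new event and are handled purely by the codec's `negate`/`what` logic (and by `auto_flush_interval` when no terminating line arrives in time). A rough Python sketch of that anchor (the regex is my approximation of grok's `TIME` pattern, not the exact library definition):

```python
import re

# Approximation of grok's ^%{TIME} anchor: HOUR:MINUTE:SECOND at line start
TIME_ANCHOR = re.compile(r"^(?:2[0123]|[01]?\d):(?:[0-5]\d):(?:[0-5]\d)\b")

lines = [
    "23:53:15 SOME TEXT",          # matches: starts a new event
    "*923* 00:04:57 SOME TEXT",    # no match: treated as a continuation line
    "C1: 5; C2: 3; C3: 3; C4: 2",  # no match: continuation line
]

for line in lines:
    print(bool(TIME_ANCHOR.match(line)), line)
```

If those starred lines should start their own events, the pattern would need to allow the optional prefix, e.g. something like `^(\*\d+\*)?%{TIME}`.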

config file:

input {
  beats {
    port => 5044
    codec => multiline {
      auto_flush_interval => 15
      pattern => "^%{TIME} "
      negate => true
      what => "next"
    }
  }
}

filter {
  grok {
    match => ["message", "%{TIME:transaction_time}"]
  }
  grok {
    match => ["source", "%{POSINT:transaction_date}"]
  }
  mutate {
    add_field => { "timestamp" => "%{transaction_date} %{transaction_time}" }
  }
  date {
    match => ["timestamp", "YYYYMMdd HH:mm:ss"]
    target => "@timestamp"
  }
  mutate {
    remove_field => ["year", "tags", "beat", "source", "type", "input_type", "timestamp", "transaction_date", "transaction_time"]
  }
}
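For the date filter to work, the `timestamp` field built in the mutate block has to line up with the Joda pattern `YYYYMMdd HH:mm:ss`, i.e. the digits grok pulls out of `source` must be a `YYYYMMdd` date embedded in the file path. A quick Python check of that assumption (the sample values below are hypothetical, not taken from the actual logs):

```python
from datetime import datetime

# Hypothetical values: %{POSINT:transaction_date} from a path like
# .../log-20170327.txt, %{TIME:transaction_time} from the log line itself
transaction_date = "20170327"
transaction_time = "23:53:15"

# Joda "YYYYMMdd HH:mm:ss" corresponds to strptime "%Y%m%d %H:%M:%S"
timestamp = f"{transaction_date} {transaction_time}"
parsed = datetime.strptime(timestamp, "%Y%m%d %H:%M:%S")
print(parsed.isoformat())  # 2017-03-27T23:53:15
```

If the path contains other digit runs (ports, device numbers), `%{POSINT}` will grab the first one it finds, the string won't parse, and the event falls through with a `_dateparsefailure` tag.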


(Mark Walkom) #2

You should use the multiline functionality on the Beats side, not on the Logstash side.
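A rough sketch of what that could look like in filebeat.yml (the path and dev_id value are placeholders; `match: before` is Filebeat's rough equivalent of the codec's `what => next`):

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/devices/*.log   # hypothetical path
    fields:
      dev_id: device-01          # hypothetical device name
    multiline:
      pattern: '^[0-9]{2}:[0-9]{2}:[0-9]{2} '
      negate: true
      match: before
      timeout: 15s
```

Doing the joining in Filebeat means each event arrives at Logstash already assembled, so the beats input no longer needs the multiline codec at all.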


(system) #3

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.