We are researching whether we can remove Logstash and instead have Filebeat send directly to Elasticsearch's new ingest pipelines.
I am trying to convert the following Logstash filter into an ingest pipeline.
grok {
  match => ["message", "^:%{DATESTAMP:timestamp} %{DATA:thread} %{LOGLEVEL:log_level} +%{DATA:log_name} (%{DATA:agent_id} )?session_id:(%{DATA:sid})? time_id:(%{DATA:time_id})? referrer:(%{DATA:referrer})? %{GREEDYDATA:msg}$"]
  overwrite => ["timestamp"]
  add_tag => ["application_log", "%{type}"]
}
if [msg] =~ /.*Publishing following data to SNS topic.*/ {
  grok {
    match => ["msg", "(?<sns_queue>Publishing[^{]+) %{GREEDYDATA:snsMsg}"]
  }
  json {
    source => "snsMsg"
    target => "parsedMsg"
  }
}
if [msg] =~ /^.+(Exception|Error).+$/ {
  mutate {
    add_tag => ["error"]
  }
}
mutate {
  remove_field => ["snsMsg"]
}
date {
  match => ["timestamp", "MM-dd-yyyy HH:mm:ss.SSS"]
}
So far all I have is:
{
  "description": "Application Logs",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["^:%{DATESTAMP:timestamp} %{DATA:thread} %{LOGLEVEL:log_level} +%{DATA:log_name} (%{DATA:agent_id} )?session_id:(%{DATA:sid})? time_id:(%{DATA:time_id})? referrer:(%{DATA:referrer})? %{GREEDYDATA:msg}$"]
      }
    }
  ]
}
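For the conditional SNS block, the closest equivalent I have found is to drop the Logstash-style conditional entirely and run the second grok and a json processor with ignore_failure set, so that messages without the SNS text simply pass through untouched. This is just my reading of the common ignore_failure option, so treat it as a guess:
{
  "grok": {
    "field": "msg",
    "patterns": ["(?<sns_queue>Publishing[^{]+) %{GREEDYDATA:snsMsg}"],
    "ignore_failure": true
  }
},
{
  "json": {
    "field": "snsMsg",
    "target_field": "parsedMsg",
    "ignore_failure": true
  }
}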
I am not sure how to handle the overwrite and add_tag options, or what to use for the date and remove_field filters. Can the append processor be used for the add_tag?
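From what I can piece together from the processor reference, tags is just an ordinary array field, so my best guess for the add_tag, date, and remove_field pieces is the following ({{type}} is my attempt at the template syntax for pulling in the event's type field, which I have not verified):
{
  "append": {
    "field": "tags",
    "value": ["application_log", "{{type}}"]
  }
},
{
  "date": {
    "field": "timestamp",
    "formats": ["MM-dd-yyyy HH:mm:ss.SSS"]
  }
},
{
  "remove": {
    "field": "snsMsg",
    "ignore_failure": true
  }
}
As far as I can tell, the ingest grok processor simply overwrites an existing target field, so overwrite may not need an equivalent at all, but I would like confirmation. I am also unsure how to replicate the error tag conditional; I have seen mention of per-processor if conditions in newer releases, which might cover it.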
A point in the right direction would be very much appreciated.