Hi,
My logstash config is:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
    ssl_key => "/etc/pki/tls/private/logstash.key"
  }
}
filter {
  # "LogMessage" is a placeholder capture name; LogTime, LogSource and
  # SRNumber match the add_field defaults further down.
  grok {
    match => {
      "message" => [
        "\[(?<LogTime>(%{MONTHNUM}/%{MONTHDAY}/%{YEAR})\s+%{TIME}\s+%{WORD})\]\s+%{BASE16NUM:ThreadID}\s+(?<LogSource>([\w|\S]+))\s+%{WORD:LogLevel}\s+(?<LogMessage>[\w|\W]*(?<SRNumber>(SR[A-Za-z\d][\d]+))[\W]+[\w|\W]*)",
        "\[(?<LogTime>(%{MONTHNUM}/%{MONTHDAY}/%{YEAR})\s+%{TIME}\s+%{WORD})\]\s+%{BASE16NUM:ThreadID}\s+(?<LogSource>([\w|\S]+))\s+%{WORD:LogLevel}\s+(?<LogMessage>[\w|\W]*(\n)+(?<SRNumber>(SR[A-Za-z\d][\d]+))(\n)+[\w|\W]*)"
      ]
    }
    remove_field => ["message"]
  }
  if "_grokparsefailure" in [tags] {
    grok {
      match => ["message", "\[(?<LogTime>(%{MONTHNUM}/%{MONTHDAY}/%{YEAR})\s+%{TIME}\s+%{WORD})\]\s+%{BASE16NUM:ThreadID}\s+(?<LogSource>([\w|\S]+))\s+%{WORD:LogLevel}\s+(?<LogMessage>[\w|\W]*)"]
      remove_field => ["message"]
      remove_tag => ["_grokparsefailure"]
      add_field => {
        "SRNumber" => "-"
      }
    }
  }
  if "_grokparsefailure" in [tags] {
    grok {
      match => ["message", "\[(?<LogTime>(%{MONTHNUM}/%{MONTHDAY}/%{YEAR})\s+%{TIME}\s+%{WORD})\]\s+%{BASE16NUM:ThreadID}\s+%{WORD:LogLevel}\s+(?<LogMessage>[\w|\W]*)"]
      remove_field => ["message"]
      remove_tag => ["_grokparsefailure"]
      add_field => {
        "SRNumber" => "-"
        "LogSource" => "-"
      }
    }
  }
  if "_grokparsefailure" in [tags] {
    grok {
      match => ["message", "(?<LogMessage>[\w|\W]+)"]
      remove_field => ["message"]
      remove_tag => ["_grokparsefailure"]
      add_tag => ["ignore"]
      add_field => {
        "LogSource" => "-"
        "LogLevel" => "-"
        "SRNumber" => "-"
        "LogTime" => "-"
        "ThreadID" => "-"
      }
    }
  }
  if "SWIS" in [fields][ServerType] {
    date {
      match => ["LogTime", "M/d/yy HH:mm:ss:SSS z"]
      timezone => "GMT"
    }
  } else {
    date {
      match => ["LogTime", "M/d/yy HH:mm:ss:SSS z"]
      timezone => "UTC"
    }
  }
}
output {
  elasticsearch {
    hosts => "IP"
    index => "logstash-site-%{+YYYY.MM.dd}"
    flush_size => 50
  }
}
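To sanity-check the trickiest part of the grok patterns, the SR-number fragment can be exercised on its own outside Logstash; a small sketch in Python (the sample log line is made up):

```python
import re

# The SR-number fragment from the grok patterns above:
# "SR", one alphanumeric character, then one or more digits.
sr_pattern = re.compile(r"SR[A-Za-z\d][\d]+")

sample = "WSVR0605W ... thread hung while handling SRA12345"  # hypothetical text
match = sr_pattern.search(sample)
print(match.group(0) if match else "no match")  # SRA12345
```

If this fragment already fails on representative lines, the full pattern will fall through to the `_grokparsefailure` branches regardless of the rest of the expression.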
After increasing the number of pipeline workers from 4 to 8, Logstash has been running well for about 10 days. That is good progress: before the change it ran well for less than a day.
Is anything wrong with my configuration? If so, do you have any suggestions? This issue has bothered me for a long time, and I would appreciate your help.
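For reference, I pinned the worker count explicitly rather than relying on the default; a minimal sketch (the config path is just an example from my setup):

```shell
# Persistent setting in logstash.yml:
#   pipeline.workers: 8

# Or as a one-off on the command line:
bin/logstash -f /etc/logstash/conf.d/beats.conf -w 8
```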