I use Elasticsearch 5.6.2, Logstash 5.6.2, and Filebeat 5.6.2.
This is what my filebeat.yml looks like:
```
filebeat.prospectors:
- input_type: log
  paths:
    - C:\Users\madhur_yadav\Documents\My Received Files\project177\ss_log*.log
  document_type: ss_log
  multiline.pattern: '^\s*[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
- input_type: log
  paths:
    - C:\Users\madhur_yadav\Documents\My Received Files\project177\ss_cloud_out_log*.log
  document_type: ss_cloud_out_log
  multiline.pattern: '^\s*[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
- input_type: log
  paths:
    - C:\Users\madhur_yadav\Documents\My Received Files\project177\event.log
  document_type: acp_event
- input_type: log
  paths:
    - C:\Users\madhur_yadav\Documents\My Received Files\project177\alert.log
  document_type: acp_alerts
- input_type: log
  paths:
    - C:\Users\madhur_yadav\Documents\My Received Files\project177\vr_log*.log
  document_type: vr_log
- input_type: log
  paths:
    - C:\Users\madhur_yadav\Documents\My Received Files\project177\SM*.log
    - C:\Users\madhur_yadav\Documents\My Received Files\project77\SMGC*.log
  document_type: xs_sm_log
- input_type: log
  paths:
    - C:\Users\madhur_yadav\Documents\My Received Files\project177\Vpxa*.log
  document_type: vpxa_log
  multiline.pattern: '^\s*[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
- input_type: log
  paths:
    - C:\Users\madhur_yadav\Documents\My Received Files\project177\vmkernel*.log
  document_type: vmkernel_log
- input_type: log
  paths:
    - C:\Users\madhur_yadav\Documents\My Received Files\project177\vmkwarning*.log
  document_type: vmkwarning_log
- input_type: log
  paths:
    - C:\Users\madhur_yadav\Documents\My Received Files\project177\vmksummary*.log
  document_type: vmksummary_log
- input_type: log
  paths:
    - C:\Users\madhur_yadav\Documents\My Received Files\project177\inventory*.log
  document_type: inventory_data

#================================ General =====================================
fields_under_root: true
fields:
  logstash-forwarder-token: $some_forwarder_token_number_that_can't_be_disclosed$
  project_id: $some_project_id_that_can't_be_disclosed$
  project_timezone: America/Los_Angeles

#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["localhost:5044"]
  bulk_max_size: 4096
  timeout: 300
```
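For context on the multiline settings above: the pattern matches lines that begin with a `YYYY-MM-DD` timestamp, and with `negate: true` plus `match: after`, every line that does *not* match is appended to the preceding matching line. A quick standalone sanity check of the regex (plain Ruby, outside Filebeat; the sample log lines are made up):

```ruby
# The same pattern used in multiline.pattern above.
pattern = /^\s*[0-9]{4}-[0-9]{2}-[0-9]{2}/

lines = [
  "2018-01-19 15:13:07 ERROR something broke",  # starts a new event (matches)
  "  at com.example.Foo.bar(Foo.java:42)",      # continuation line (no match)
  "2018-01-19 15:13:08 INFO recovered",         # starts a new event (matches)
]

lines.each do |line|
  starts_event = !(line =~ pattern).nil?
  puts "#{starts_event ? 'EVENT' : 'CONT '} | #{line}"
end
```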
My Logstash config looks like this:
```
input {
  beats {
    port => "5044"
    client_inactivity_timeout => "120"
    ssl => true
    ssl_certificate => $path to ssl_certificate$
    ssl_key => $path to ssl_key$
  }
}

filter {
  ruby {
    code => "
      $LOAD_PATH.unshift(File.expand_path('C:\Users\madhur_yadav\Documents\Newfolder\logstash-5.6.2\vendor\jruby\lib\ruby\gems\shared\gems\jwt-1.5.6\lib', __FILE__))
      require 'jwt'
      is_token_valid = 'false'
      begin
        token = event.get('logstash-forwarder-token')
        rovius_id = event.get('rovius_id')
        decoded_token = JWT.decode token, 'Eg=A}k!3]VL`d{*`#dDZd4=*', 'HS256'
        decoded_rovius_id = decoded_token[0]['prou']
        if decoded_rovius_id == rovius_id
          is_token_valid = 'true'
        end
      rescue JWT::ExpiredSignature => exception
        puts '== ERROR 1000.0 OCCURRED =='
        puts exception.backtrace
      rescue JWT::DecodeError => exception
        puts '== ERROR 1000.1 OCCURRED =='
        puts exception.backtrace
      rescue => exception
        puts '== ERROR 1000.2 OCCURRED =='
        puts exception.backtrace
      end
      # drop the event unless the token checked out
      if is_token_valid == 'false'
        event.cancel
      end
    "
  }
  mutate { remove_field => [ "logstash-forwarder-token" ] }
  if [type] not in ["acp_alerts", "acp_event", "inventory_data"] {
    grok {
      match => [
        "source", "\.(?<source_entity>.+)\.%{YEAR}-%{MONTHNUM}-%{MONTHDAY}.%{ISO8601_TIMEZONE:tz_offset}?"
      ]
    }
  }
  if [message] == "null" or [message] == "" {
    drop { }
  }
  ......GROK PATTERNS.............
}

output {
  ...........................
}
```

(Note: I have fixed two small slips from my earlier paste here: a stray `=> exception` after the first `puts exception.backtrace`, and the `event.cancel` guard accidentally sitting inside the last `rescue` clause instead of after the `begin`/`end` block.)
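For context on what the `JWT.decode` call in the filter does: it splits the token, verifies the HMAC-SHA256 signature against the shared secret, and returns the decoded claims. Here is a stdlib-only sketch of roughly the same steps (no `jwt` gem needed; the secret and claim value below are placeholders, not my real ones):

```ruby
require 'openssl'
require 'base64'
require 'json'

# Base64url without padding, as JWTs use (RFC 7515).
def b64url(bytes)
  Base64.urlsafe_encode64(bytes).delete('=')
end

secret = 'placeholder-secret'   # stand-in for the real shared key

# Build a token the way the issuer would.
header    = b64url({ alg: 'HS256', typ: 'JWT' }.to_json)
payload   = b64url({ 'prou' => 'project-42' }.to_json)
signature = b64url(OpenSSL::HMAC.digest('SHA256', secret, "#{header}.#{payload}"))
token = "#{header}.#{payload}.#{signature}"

# Verify and decode -- roughly what JWT.decode(token, secret, 'HS256') does.
h64, p64, s64 = token.split('.')
expected = b64url(OpenSSL::HMAC.digest('SHA256', secret, "#{h64}.#{p64}"))
raise 'invalid signature' unless s64 == expected

padded = p64 + '=' * ((4 - p64.length % 4) % 4)   # restore base64 padding
claims = JSON.parse(Base64.urlsafe_decode64(padded))
puts claims['prou']   # the value the filter compares against the event's id field
```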
My problem:
I run 3 Filebeat instances simultaneously, all shipping to a single Logstash. It runs smoothly for 2-3 hours, but after that it starts throwing errors:
> [2018-01-19T15:13:07,391][ERROR][logstash.filters.ruby ] Ruby exception occurred: Detected invalid array contents due to unsynchronized modifications with concurrent users
> [2018-01-19T15:13:07,395][ERROR][logstash.filters.ruby ] Ruby exception occurred: Detected invalid array contents due to unsynchronized modifications with concurrent users
> [2018-01-19T15:13:07,398][ERROR][logstash.filters.ruby ] Ruby exception occurred: Detected invalid array contents due to unsynchronized modifications with concurrent users
I'm running Logstash with 4 pipeline workers. If I increase the worker count to 5 or more, it crashes immediately.
I've tried updating the ruby filter plugin, but it was of no use. I've been stuck on this for 2 weeks now and desperately need help. Any solution, suggestion, workaround, or other lead would be highly appreciated.
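One thing I suspect from reading the error: the `$LOAD_PATH.unshift` and `require` live inside the filter's `code` option, so they run once per event on every pipeline worker thread, meaning several workers mutate the global `$LOAD_PATH` array at the same time. If that is the cause, moving the one-time setup into the ruby filter's `init` option (which Logstash runs once at pipeline startup) might avoid the race. A sketch of the change I'm considering (untested):

```
filter {
  ruby {
    # init runs once at pipeline startup, not once per event per worker.
    init => "
      $LOAD_PATH.unshift(File.expand_path('C:\Users\madhur_yadav\Documents\Newfolder\logstash-5.6.2\vendor\jruby\lib\ruby\gems\shared\gems\jwt-1.5.6\lib', __FILE__))
      require 'jwt'
    "
    code => "
      # ... per-event token check exactly as before, minus the unshift/require ...
    "
  }
}
```

Does this look like the right direction, or is something else going on?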