Hi guys,
I am facing an issue where I am trying to perform an arithmetic operation using ruby, but in the field section I am getting the wrong value, not the subtracted value.
event.set('[d]', (event.get('[b]').to_f) - (event.get('[a]').to_f))
a = 5, b = 4
So, when you have a message with messageType equal to AReq, you will have the field Areq_Sec, but you will not have the field Ares_Sec; and when the messageType is ARes, you will have Ares_Sec, but not Areq_Sec.
Your ruby code will always have one of the terms equal to zero.
Can you provide more context about your message and what you want to achieve?
I am getting the values perfectly for Areq_Sec and Ares_Sec. The problem I am facing is with the arithmetic operation using the ruby filter; it's not giving me the desired output.
For example, it gives me output like:
Areq_Sec = 4, Ares_Sec = 5
c = b - a
desired output = 1
Your ruby filter seems to be trying to calculate the difference between Areq_Sec and Ares_Sec. I guess these are timings or timestamps for requests and responses, and you want to find how long a request took.
There are multiple issues here.
The ruby filter tries to subtract [Areq_Sec] from [Ares_Sec] to find the difference between the timing of the request and response. However, those fields are in different events, and unless you configure a filter that combines them, each event is independent. If a field does not exist then the event.get will return nil and .to_f will return zero. This is why your two events get 0-4 (-4) and 5-0 (5).
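To illustrate that behaviour outside of Logstash (a standalone sketch, not your actual pipeline), in plain Ruby nil.to_f is 0.0, which is exactly what turns a missing field into a zero term:

```ruby
# event.get on a field that does not exist returns nil.
# Calling .to_f on nil silently yields 0.0, so the subtraction
# still "works" but produces a misleading number.
areq = nil   # stands in for event.get('[Areq_Sec]') on an ARes event
ares = 5.0   # stands in for event.get('[Ares_Sec]')

puts ares.to_f - areq.to_f  # prints 5.0, not the real request/response delta
```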
In your third filter section, for each [messageType] you try to do mutate+convert as well as mutate+add_field. You cannot convert a field that you add in the same mutate. The order of mutate operations is documented, but nobody tells you that event decoration (add_field/tag, remove_field/tag), if done, is always done last.
Doing the [Areq_sec] to [Areq_Sec] copy and convert has no obvious purpose. You are doing a .to_f in the ruby filter, so it is not required in logstash (but I understand that the systems you are stashing data in may require it).
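To sketch the fix for the mutate ordering issue (using your field names, but simplified): put the add_field and the convert in two separate mutate blocks, so the field exists before you try to convert it.

```
# Within a single mutate, add_field runs after the other operations,
# so a convert in the same block would not see the new field.
mutate { add_field => { "Areq_Sec" => "%{Areq_sec}" } }
mutate { convert => { "Areq_Sec" => "float" } }
```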
If your messages all have a transaction id then it should be fairly simple to do an aggregate filter that merges these fields for log entries that refer to the same request.
I tried the aggregate filter but I am unable to get the desired output. There is a problem when using the aggregate filter: a single transaction_id contains multiple requests and responses. I want to calculate the time between two events, such as AReq and ARes, but aggregate calculates the seconds across the whole set of requests and responses for that transaction_id. The same happens with the elapsed filter.
If you use pipeline.workers set to 1 and pipeline.ordered it may be possible to do that, but without seeing the log file, or a very detailed specification of it from you, we cannot guess how.
You could try this. Note that when doing a numeric conversion in grok the suffix to the field name has to be ":int", not ":INT". You must set pipeline.workers to 1 and pipeline.ordered must evaluate to true in order to use aggregate.
grok {
    pattern_definitions => { "customDate" => "%{YEAR}-%{MONTHNUM}-%{INT}\s%{HOUR}:%{MINUTE}:%{SECOND}\s%{WORD}" }
    match => { "message" => "%{customDate:[@metadata][ts]}\s%{DATA:Transaction_Number}\s%{LOGLEVEL:LOGLEVEL}\s+%{JAVACLASS:error_file}\s%{DATA:hash_type}TransactionId\s-\s%{NUMBER:TransactionId:int}+,+\sTransactionReferenceId\s-\s%{NUMBER:TransactionReferenceId:int}+]+%{GREEDYDATA:[@metadata][msgbody]}" }
}
mutate { gsub => [ "[@metadata][ts]", "MUT", "Etc/GMT-4" ] }
date { match => [ "[@metadata][ts]", "YYYY-MM-dd HH:mm:ss ZZZ" ] }
json { source => "[@metadata][msgbody]" }
aggregate {
    task_id => "%{TransactionId}"
    code => '
        begin
            type = event.get("messageType")
            # type1 will be A/C/R/WS OPT
            type1 = type.gsub(/Re[qs].*/, "").strip
            # type2 will be an array containing Req or Res
            type2 = type.match(/Re[qs]/)
            if type2[0] == "Req"
                map[type1] = event.get("@timestamp").to_f
            else
                event.set("req_sec", event.get("@timestamp").to_f - map[type1])
            end
        rescue
        end
    '
}
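For completeness, the settings mentioned above go in logstash.yml (or the equivalent command-line flags); a minimal sketch:

```
# logstash.yml -- aggregate needs a single worker and ordered events
pipeline.workers: 1
pipeline.ordered: true
```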