Hi All
This question has been asked multiple times in the forum, as far as I can see, but with no definite answer.
I saw one of the Elastic members answer it in a nice way, but it is not working for me.
I have JSON data in this format coming from an input:
undefined method `keys' for nil:NilClass suggests that [@metadata][data] is nil, meaning the json filter did not parse anything into it. I suggest removing all the @s from @metadata and seeing what you get in stdout { codec => rubydebug }.
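For example (a sketch -- I am guessing your source field is [message], so adjust the source and target to match your config):

filter {
  json {
    source => "message"
    target => "[metadata][data]"
  }
}
output {
  stdout { codec => rubydebug }
}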
I find that the split fails. items-array looks like this:
and that gets you "exception"=>"undefined method `empty?' for 3:Fixnum" when it tries to split the first value in the array. That's fixed in the most recent (3.1.7) version of the filter.
Once that is fixed, items-array only has one entry for data, so the split filter has no effect and all the renames fail.
What I am looking for is a universal solution for all JSON data from the input. I do not want to use the split filter to go over multiple records.
The split filter is single-threaded; when used with high-volume data, let's say over 100K records, it creates clones and then splits.
I want to use Ruby to identify all the keys/values
in the above example, roughly like the sketch below.
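Something along these lines is what I have in mind (just a sketch; 'data' is a made-up name for the field holding the parsed JSON):

def register(params)
end

def filter(event)
  data = event.get('data')                 # the field holding the parsed JSON hash
  if data.is_a?(Hash)
    data.each { |k, v| event.set(k, v) }   # lift each key/value to the top level
    event.remove('data')
  end
  [event]                                  # return the now-flat event
end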
I upgraded the filter to 3.1.7
I removed all the @s from metadata, and I still get the same error:
Ruby exception occurred: undefined method `keys' for nil:NilClass
I also observed that I am not getting any target at all from the json filter.
Do you have any working example of a Ruby filter that iterates over the keys, so that I can get a flat JSON and then split in one go?
I want to try something similar to what @yaauie suggested.
Hi @Badger, is there any other way I can convert one event containing a message of JSON data into multiple events without using the split filter (especially with a ruby filter)?
Please let me know
You said elsewhere that you have 80,000 bytes of JSON. Unless the objects are tiny, I expect that is around 1,000 entries in the array. You also say it takes 4 hours, which is over 10,000 seconds. So over 10 seconds per event. Looking at the loop in the split filter, it is hard to imagine how that could be taking 10 seconds.
So, you can try running with "--log.level debug", which will cause it to log a line for each entry in the array.
Then review the timestamps on those events. In particular, how much of the time is spent in the input and json parsing, and are the events spit out by split regularly spaced, or are there long gaps in the output.
The one I mentioned in this thread is a small toy dataset I am experimenting with.
It is this one: https://reqres.in/api/users
Actually, I have a bigger dataset with almost 15 columns and 80,000 records. From that forum discussion thread it is clear that, because of the split filter, it goes into serial mode and takes 4 hours.
To avoid the split filter, I started using Ruby to see if there is anything I can do to achieve some result.
My ultimate aim is to get immediate results in the output, at least for those events that have finished processing in the filter; they should be available in the output immediately, rather than waiting in a queue for the split filter to complete before other operations run in serial mode. That is not acceptable; Logstash is a streaming tool, so it should work somehow.
OK, so if you have a piece of JSON that contains 80,000 records, each of which has 15 columns, then it might be 10 MB. And you are going to create 80,000 copies of that, which involves allocating 800 GB of memory (or possibly 1.6 TB if you have a copy of the message in addition to the parsed data). Oh, and this is Java, so I guess every char is two bytes, so perhaps 3.2 TB of memory to be allocated and GC'd. I'm not surprised it takes a long time.
Removing the source field before the split may avoid a second copy of the 10 MB on each event (if it is present -- I'm not sure what you get from an http_poller with a codec).
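Something like this (a sketch -- assuming the raw JSON landed in [message]):

filter {
  json   { source => "message" target => "data" }
  mutate { remove_field => [ "message" ] }   # drop the 10 MB string before splitting
  split  { field => "data" }
}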
Inside the split filter, it is cloning the event here, which creates a new 10 or 20 MB object, and the next line replaces that with something that only takes a couple of hundred bytes. That's a really expensive way of doing it. We need something more like the UNIX system call vfork, if you are familiar with that.
Instead of cloning the event, just set event_split to an empty new event and copy over the fields you need like timestamp, host, version, etc. (still doing event_split.set(@target, value) etc.). Then yield that.
However, the details of replacing the clone with an empty new event are beyond me. You need someone who understands a little more about events than I do.
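Very roughly, I imagine something like this inside the filter's loop (an untested sketch -- I do not know which fields beyond these would need to be carried over):

# instead of: event_split = event.clone
event_split = LogStash::Event.new
[ "@timestamp", "@version", "host" ].each do |f|
  event_split.set(f, event.get(f))   # carry over only the fields we need
end
event_split.set(@target, value)      # then set the split value as before
yield event_split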
The following may work. In a file called splitData.rb put
# splitData.rb -- a script for the ruby filter
def register(params)
  @field  = params['field']    # the array field to split on
  @target = params['target']   # where to put each element
end

def filter(event)
  data = event.get(@field)
  event.remove(@field)         # remove the big array BEFORE cloning
  a = []
  data.each { |x|
    e = event.clone            # now clones only the small remainder of the event
    e.set(@target, x)
    a << e
  }
  a                            # the returned events replace the original
end
The critical point is to remove the data field before calling event.clone.
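It can be invoked from the pipeline along these lines (the path and the param values are whatever you choose):

filter {
  ruby {
    path          => "/etc/logstash/splitData.rb"
    script_params => { "field" => "data" "target" => "each" }
  }
}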
It occurred to me that the split filter ought to be able to do this optimization (remove the source before cloning if it is going to be overwritten). Looking at the code, it appears that this line may be trying to do that. However, I don't know what target refers to (not @target, which is never nil), so I am not sure what it does.