It might be easier to do it using Ruby. Is there any way you can show us an example document? Either from stdout { codec => rubydebug } or the JSON from Discover in Kibana.
File is as posted above
Separate JSON objects
{"id":"123", "array1":["val1", "val2"], "array2":["val3", "val4"]}
{"id":"234", "array1":["val11", "val22"], "array2":["val3", "val4"]}
{"id":"345", "array1":["val71", "val22"], "array2":["val3", "val4"]}
and instead I want to have an array of objects
{"name1": [{"id":"123", "array1":["val1", "val2"], "array2":["val3", "val4"]},
{"id":"234", "array1":["val11", "val22"], "array2":["val3", "val4"]},
{"id":"345", "array1":["val71", "val22"], "array2":["val3", "val4"]}]}
What I have tried
include a new name-value pair with the same value
{"index":"sw", "id":"123", "array1":["val1", "val2"], "array2":["val3", "val4"]}
{"index":"sw","id":"234", "array1":["val11", "val22"], "array2":["val3", "val4"]}
{"index":"sw","id":"345", "array1":["val71", "val22"], "array2":["val3", "val4"]}
then try to aggregate on it
filter {
  ....
  mutate {
    aggregate => {
      task_id => %{index}
      map['id'] => event.get('id')
      map['array1'] => event.get('array1')
      map['array2'] => event.get('array2')
    }
  }
  ....
}
If I understand that right, you put an aggregate filter inside of a mutate filter and then did not wrap the aggregation code in code => "". It looks pretty chaotic, to be honest...
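For reference, a syntactically valid version would use a standalone aggregate filter (not nested inside mutate) with the Ruby code wrapped in code => "". This is an untested sketch: the timeout value is a placeholder, and it assumes the constant "index" field you added is the task_id:

```
filter {
  aggregate {
    task_id => "%{index}"
    code => "
      map['objects'] ||= []
      map['objects'] << {
        'id'     => event.get('id'),
        'array1' => event.get('array1'),
        'array2' => event.get('array2')
      }
      event.cancel
    "
    push_map_as_event_on_timeout => true
    timeout => 5
  }
}
```

With event.cancel the individual events are dropped, and on timeout a single event containing the accumulated array is pushed instead.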
On review, I think I am misusing/abusing Logstash. Logstash is for streams of data, which is why I'm getting a series of JSON objects.
I shouldn't be trying to do what I'm doing: collecting ALL of my Logstash events into a single JSON string goes against Logstash's streaming model.
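If it's a one-off transformation rather than a stream, plain Ruby (as suggested above) is much simpler. A minimal sketch, assuming one JSON object per line as in the examples (the inline lines array stands in for reading the actual file):

```ruby
require 'json'

# Stand-in for File.readlines("input.json") — one JSON object per line,
# matching the example documents above.
lines = [
  '{"id":"123", "array1":["val1", "val2"], "array2":["val3", "val4"]}',
  '{"id":"234", "array1":["val11", "val22"], "array2":["val3", "val4"]}',
  '{"id":"345", "array1":["val71", "val22"], "array2":["val3", "val4"]}'
]

# Parse each line and collect the objects into one array under a single key.
objects = lines.map { |line| JSON.parse(line) }
result = { "name1" => objects }

puts JSON.generate(result)
```

This produces the desired shape in one pass, with no aggregation state to manage.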
Thanks for the replies.
Mods - you can close this now if you wish.