I have a Logstash pipeline that processes JSON data from a flat file. The data is structured roughly like this:
{
  "name": "job1",
  "tasks": {
    "75fc": {
      "name": "restAction",
      "variables": {
        "incoming": {
          "params": {
            "path": "/restconf/data/thefile",
            "accept": "application/yang-data+json",
            "method": "PUT",
            "contentType": "application/yang-data+json"
          },
          "body": {
            "cust-eplan:cust-eplan": [
              {
                "customer-name": "Decostumer",
                "device": [
                  {
                    "device-name": "my-dvice",
                    "interface": {
                      "GigabitEthernet-iosxr": [
                        {
                          "interface-id": "0/0/0/32",
                          "encapsulation": "default",
                          "mep-id": 3
                        }
                      ]
                    }
                  },
                  {
                    "device-name": "my-device2",
                    "interface": {
                      "GigabitEthernet-iosxr": [
                        {
                          "interface-id": "0/0/0/32",
                          "encapsulation": "default",
                          "mep-id": 4
                        }
                      ]
                    }
                  }
                ]
              }
            ]
          }
        }
      }
    }
  },
  "error": [
    {
      "task": "75fc",
      "message": {
        "ietf-restconf:errors": {
          "error": [
            {
              "error-type": "application",
              "error-tag": "malformed-message",
              "error-path": "/pathto/problem",
              "error-message": "missing element: name in thepath"
            }
          ]
        }
      },
      "timestamp": 1.695060733555E+12
    },
    {
      "task": "job",
      "message": "Job has no available transitions. 6649, cb04, f5dd could have led to the workflow end task, but did not. These tasks performed in a way that the end of the workflow could not be reached.",
      "timestamp": 1.69506073357E+12
    }
  ]
}
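For context, the file is read and parsed roughly like this (the path is a placeholder, and I'm assuming one JSON document per line for simplicity):

input {
  file {
    path => "/path/to/jobs.json"      # placeholder path
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  # Parse the raw line onto the event root, so that [tasks] and [error]
  # become top-level fields on the event.
  json {
    source => "message"
  }
}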
I need to extract the array of [error][task] values, then use those task IDs to look up the matching [tasks][&lt;taskId&gt;][variables][incoming] values, and add the result to a new field called errored_task_incoming_variables, or something along those lines.
So for example, if the JSON data has [error][task] values of ["75fc", "75fd", "75fe"], the pipeline should then fetch these values:
[tasks]["75fc"][variables][incoming]
[tasks]["75fd"][variables][incoming]
[tasks]["75fe"][variables][incoming]
Is the ruby filter the only option for accomplishing this? Or can I use a combination of the aggregate and json filters? Or do I need all three?
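For reference, here is a rough sketch of what I imagine the ruby filter version would look like (just a sketch: it assumes the JSON has already been parsed onto the event root as above, and errored_task_incoming_variables is my own placeholder name):

filter {
  ruby {
    code => '
      errors = event.get("error")
      tasks  = event.get("tasks")
      if errors.is_a?(Array) && tasks.is_a?(Hash)
        # Collect the task IDs referenced by the error entries, then look up
        # each one under [tasks][<id>][variables][incoming]. Error entries
        # like "task": "job" that have no matching task are skipped.
        incoming = errors
          .map    { |e| e["task"] }
          .select { |id| tasks.key?(id) }
          .map    { |id| tasks.dig(id, "variables", "incoming") }
          .compact
        event.set("errored_task_incoming_variables", incoming) unless incoming.empty?
      end
    '
  }
}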