Error while updating nested fields using Logstash mutate

Hi All,

I am trying to load data from a SQL database into Elasticsearch using a Logstash pipeline. I am trying to add a new nested document using mutate, but it overwrites the existing one.

Here is my existing JSON document:

{
    "empid" : 12345,
    "joined_date" : "2018-12-19",
    "created_dts" : "2001-03-29T07:01:01.000Z",
    "projectInfo" : [
        {
            "projId" : 123,
            "projName" : "ABC",
            "projStart" : "2018-12-19"
        }
    ]
}

Here is the mutate block I have written:

mutate {
    add_field => { "[projectInfo][projId]" => "%{projId}" }
    add_field => { "[projectInfo][projName]" => "%{projName}" }
    add_field => { "[projectInfo][projStart]" => "%{projStart}" }
}

This mutate block should add a new project to the employee. Instead, it is replacing the existing one. Please suggest a fix.

If you are saying you have an event that has [projId], [projName], and [projStart] fields and you want to have a [projectInfo] array like

"projectInfo" : [
    {
        "projId" : 123,
        "projName" : "ABC",
        "projStart" : "2018-12-19"
    }
]

then it should be

mutate {
    add_field => {
        "[projectInfo][0][projId]" => "%{projId}"
        "[projectInfo][0][projName]" => "%{projName}"
        "[projectInfo][0][projStart]" => "%{projStart}"
    }
} 

If you want to append an entry to the array in logstash then I think you need to use a ruby filter. It is not clear what you are trying to do.
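As a rough sketch only (untested, and assuming the event carries [projId], [projName], and [projStart] fields alongside an optional existing [projectInfo] array), such a ruby filter might look like this:

```
filter {
  ruby {
    code => '
      # Start from the existing array, or an empty one if the field is absent
      projects = event.get("projectInfo") || []
      # Append the new project entry instead of overwriting the array
      projects << {
        "projId"    => event.get("projId"),
        "projName"  => event.get("projName"),
        "projStart" => event.get("projStart")
      }
      event.set("projectInfo", projects)
    '
  }
}
```

Note this only appends within a single Logstash event; it does not merge with a document already stored in Elasticsearch.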


Thanks @Badger for the response!

All I want to do is append to the existing nested documents rather than update them, but the existing data is getting overwritten.

The employee was allocated to project 123, and at a later point I want to allocate him/her to project 456 too, without disturbing the allocation to 123.

If you want to append an entry to an array in elasticsearch when a different set of projectInfo entries is processed by logstash, that is really an elasticsearch question. It may be possible to do it using a scripted upsert (I do not know). Otherwise you would have to fetch the document from elasticsearch (possibly using an elasticsearch filter), merge in the additional information (probably requiring a ruby filter), and then send it back to elasticsearch.
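For what it's worth, the elasticsearch output does support update actions with an inline script, so a scripted upsert might look roughly like the sketch below. The hosts, index name, and the painless script body are all assumptions here, not something tested against a cluster:

```
output {
  elasticsearch {
    hosts           => ["localhost:9200"]
    index           => "employees"          # assumed index name
    document_id     => "%{empid}"
    action          => "update"
    scripted_upsert => true
    script_lang     => "painless"
    script_type     => "inline"
    # params.event is the Logstash event (the default script_var_name);
    # the script appends the incoming project instead of replacing the array
    script => "
      if (ctx._source.projectInfo == null) { ctx._source.projectInfo = []; }
      ctx._source.projectInfo.add(params.event.projectInfo);
    "
  }
}
```

The idea is that the script runs inside Elasticsearch against the stored document, so the existing projectInfo entries survive and the new one is added to them.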