Logstash JDBC string column value to array data type

I would like to provide a SQL column value that contains concatenated strings (e.g. "value1,value2,value3") and have this value split into an array data type within Elasticsearch.

I've looked at the mutate filter's 'join' option and the 'split' filter, but I can't seem to get it right.

In another configuration I'm using a mutate filter to convert lat/lon to geo_point, and that syntax makes sense to me as a whole; I haven't been as successful with the above use case, however.
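
For reference, the geo_point conversion I mentioned looks roughly like this (a sketch; the field names are illustrative, not my actual schema):

filter {
  # Copy the lat/lon columns into a nested field that the index
  # template maps as geo_point, then cast the values to floats.
  # "latitude", "longitude", and "location" are hypothetical names.
  mutate {
    add_field => {
      "[location][lat]" => "%{latitude}"
      "[location][lon]" => "%{longitude}"
    }
  }
  mutate {
    convert => {
      "[location][lat]" => "float"
      "[location][lon]" => "float"
    }
  }
}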

It seems I would need to split and then join. Am I adding new fields as I go and then removing the temporary fields later?

Does anyone have an example of how I might go about this?

Thank you.

I'm not quite following. You want an array in Elasticsearch? Then the mutate filter's split option is the way to go. I don't get why you'd want to join anything. Showing a concrete example (copy/paste, no screenshots please) of exactly what you have and what you'd like the resulting JSON document to look like would be helpful.

Thanks for the response. For brevity, I'll provide some incomplete code for the JDBC aspect of my Logstash config and focus on the filter/split portion. Thanks for your assistance.

In my template's mapping, let's say I have a property called "files" which is defined as an array.
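
For context, that part of the template looks roughly like this (a simplified sketch; on newer Elasticsearch versions the type would be "keyword" or "text". Note that Elasticsearch has no dedicated array type; any field can hold one or more values):

"mappings": {
  "file": {
    "properties": {
      "files": { "type": "string" }
    }
  }
}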

input {
  jdbc {
    statement => "select file1 || ',' || file2 as files_str from some_table"
  }
}

filter {
  split {
    # I would like to split the "files_str" column value into an array for storage as "files".
    # Do I add a field called "files" first?
  }
}

output {
  elasticsearch {
    index => "file-data-%{+YYYY.MM.dd}"
    document_type => "file"
    hosts => ["myhost"]
  }
}

No, don't use the split filter. Use the mutate filter's split option. It'll split the string in place so you don't need a temporary field.

...
    statement => "select file1 || ',' || file2 as files from some_table"
...

filter {
  mutate {
    split => { "files" => "," }
  }
}
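
With that in place, an event whose files field arrives as a single comma-separated string, say "a.txt,b.txt" (made-up values), is indexed as a real array:

{
  "files": ["a.txt", "b.txt"]
}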

Much appreciated; I didn't realize that 'split' within the mutate context was what I needed. Thanks again.