I have data that looks like this:
{
  "somerandomkey" : {
    "field1" : "hello",
    "field2" : 123
  },
  "anotherrandomkey" : {
    "field1" : "world",
    "field2" : 321
  }
}
The keys before each object are not relevant to me. Is there a way to define a mapping so that Elasticsearch either:
- ignores the keys and treats the values as a stream of documents?
- or, alternatively, creates a field from each key and adds it to the corresponding object?
The latter variant would transform this:
"somerandomkey" : {
"field1" : "hello",
"field2" : 123
}
into this:
{
  "field1" : "hello",
  "field2" : 123,
  "key" : "somerandomkey"
}
Using `jq`, I can do the former by applying `jq -ncr --stream 'fromstream(1|truncate_stream(inputs))' input.json`, yielding:
{
  "field1" : "hello",
  "field2" : 123
}
{
  "field1" : "world",
  "field2" : 321
}
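For context on why this works: `--stream` decomposes the input into `[path, value]` event pairs, `1|truncate_stream(...)` drops the first path element (the random top-level key), and `fromstream(...)` reassembles the remaining events into plain objects. A minimal sketch (the file name `input.json` just holds the sample data from above):

```shell
# Write the sample input from the question.
cat > input.json <<'EOF'
{
  "somerandomkey" : { "field1" : "hello", "field2" : 123 },
  "anotherrandomkey" : { "field1" : "world", "field2" : 321 }
}
EOF

# Raw stream events: each is a [path, value] pair (or a closing [path]).
jq -nc --stream 'inputs' input.json

# Truncating one path level removes the random top-level key, and
# fromstream() reassembles the remaining events into plain objects.
jq -nc --stream 'fromstream(1|truncate_stream(inputs))' input.json
```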
An option to get the data into the latter form is `jq 'with_entries(.value += {key}) | .[]'`, yielding:
{
  "field1" : "hello",
  "field2" : 123,
  "key" : "somerandomkey"
}
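For completeness, the `jq` route can feed Elasticsearch's bulk API directly by interleaving an action line before each transformed document. A minimal sketch of the latter variant; the index name `myindex` and the local endpoint are assumptions, not part of the question:

```shell
# Sample input with the same shape as above.
cat > input.json <<'EOF'
{
  "somerandomkey" : { "field1" : "hello", "field2" : 123 },
  "anotherrandomkey" : { "field1" : "world", "field2" : 321 }
}
EOF

# Emit an index action line before each document; the random top-level
# key becomes a "key" field on the document itself.
jq -c 'to_entries[] | {index: {_index: "myindex"}}, (.value + {key})' \
  input.json > bulk.ndjson

cat bulk.ndjson
# Then POST the file, e.g.:
#   curl -s -H 'Content-Type: application/x-ndjson' \
#        -XPOST 'localhost:9200/_bulk' --data-binary @bulk.ndjson
```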
The question is: do I need `jq` for that, or can Elasticsearch do it for me?