Hello everybody
I have a transform job that writes the following fields from one index into a new index like so:
Group By:
ap.mac_address_normalized
Aggregated fields:
ap.hostname -> raw1.ap.hostname
ap.mac_address -> raw2.ap.mac_address
ap.radio_mac_address -> raw3.ap.radio_mac_address
This is the JSON of my transform job:
{
  "id": "ap_lookup",
  "authorization": {
    "roles": [
      "superuser"
    ]
  },
  "version": "10.0.0",
  "create_time": 1753697007593,
  "source": {
    "index": [
      "logs-network-devices.log-default"
    ],
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "ap_lookup",
    "pipeline": "ap-lookup-flatten-fields"
  },
  "frequency": "60s",
  "sync": {
    "time": {
      "field": "@timestamp",
      "delay": "60s"
    }
  },
  "pivot": {
    "group_by": {
      "ap.mac_address_normalized": {
        "terms": {
          "field": "ap.mac_address_normalized"
        }
      }
    },
    "aggregations": {
      "raw1": {
        "top_metrics": {
          "metrics": [
            {
              "field": "ap.hostname"
            }
          ],
          "sort": {
            "@timestamp": "desc"
          }
        }
      },
      "raw2": {
        "top_metrics": {
          "metrics": [
            {
              "field": "ap.mac_address"
            }
          ],
          "sort": {
            "@timestamp": "desc"
          }
        }
      },
      "raw3": {
        "top_metrics": {
          "metrics": [
            {
              "field": "ap.radio_mac_address"
            }
          ],
          "sort": {
            "@timestamp": "desc"
          }
        }
      }
    }
  },
  "settings": {}
}
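As far as I know, the documents an existing transform generates can be inspected without writing anything to the destination index, via the preview API in Dev Tools:

GET _transform/ap_lookup/_preview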
Afterwards I want to flatten the fields so that each one is just a keyword at the top level, not nested inside an object:
raw1.ap.hostname -> hostname
raw2.ap.mac_address -> mac_address
raw3.ap.radio_mac_address -> radio_mac_address
The problem is that my ingest pipeline does nothing when I configure a rename processor like the following (this is just for mac_address, but it would be the same for the others):
[
  {
    "rename": {
      "field": "raw2.ap.mac_address",
      "target_field": "mac_address",
      "ignore_failure": true
    }
  }
]
I have also tried a set processor, with the same result:
[
  {
    "set": {
      "field": "mac_address",
      "copy_from": "raw2.ap.mac_address",
      "ignore_failure": true
    }
  }
]
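To reproduce this in isolation, the pipeline can be tested with the simulate API, using a _source copied from the sample document shown further below:

POST _ingest/pipeline/ap-lookup-flatten-fields/_simulate
{
  "docs": [
    {
      "_source": {
        "raw2": {
          "ap.mac_address": "1ED1.2DC3.34ED"
        }
      }
    }
  ]
}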
The mapping of the new index, retrieved in Dev Tools with GET ap_lookup/_mapping, looks like this:
{
  "ap_lookup": {
    "mappings": {
      "_meta": {
        "created_by": "transform",
        "_transform": {
          "transform": "ap_lookup",
          "version": {
            "created": "10.0.0"
          },
          "creation_date_in_millis": 1753703871655
        }
      },
      "properties": {
        "ap": {
          "properties": {
            "mac_address_normalized": {
              "type": "keyword"
            }
          }
        },
        "pipeline_test": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "raw1": {
          "properties": {
            "ap": {
              "properties": {
                "hostname": {
                  "type": "keyword"
                }
              }
            }
          }
        },
        "raw2": {
          "properties": {
            "ap": {
              "properties": {
                "mac_address": {
                  "type": "keyword"
                }
              }
            }
          }
        },
        "raw3": {
          "properties": {
            "ap": {
              "properties": {
                "radio_mac_address": {
                  "type": "keyword"
                }
              }
            }
          }
        }
      }
    }
  }
}
And this is the JSON of one document (I have changed the values for compliance reasons). To me the structure here looks different from the mapping: for example, ap.mac_address sits as a single dotted key on one level inside raw2 and does not look like a nested object:
{
  "_index": "ap_lookup",
  "_id": "IFGFIQFE121083jjfoef",
  "_version": 1,
  "_score": 0,
  "_source": {
    "raw1": {
      "ap.hostname": "AP-01"
    },
    "raw3": {
      "ap.radio_mac_address": "AB12.CD34.EF56"
    },
    "ap": {
      "mac_address_normalized": "AB12CD34EF56"
    },
    "raw2": {
      "ap.mac_address": "1ED1.2DC3.34ED"
    }
  },
  "fields": {
    "raw2.ap.mac_address": [
      "1ED1.2DC3.34ED"
    ],
    "raw1.ap.hostname": [
      "AP-01"
    ],
    "ap.mac_address_normalized": [
      "AB12CD34EF56"
    ],
    "raw3.ap.radio_mac_address": [
      "AB12.CD34.EF56"
    ]
  }
}
I made sure that the pipeline itself is running by adding a set processor that creates a field pipeline_test with the value success. That worked, so I know the pipeline is being applied.
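That test processor was roughly this:

[
  {
    "set": {
      "field": "pipeline_test",
      "value": "success"
    }
  }
]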
I also had the feeling that the issue might be that the new field has the same name as the leaf field of the object (raw2.ap.mac_address vs. mac_address), but I also tried different names for the target field, such as just address or mac, without success.
It feels like the processor cannot find the field raw2.ap.mac_address, although the mapping defines it. I also tried just ap.mac_address and mac_address as the source field.
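One thing I noticed in the _source above: raw2 contains a single dotted key "ap.mac_address" rather than nested objects, so maybe the rename processor only sees a flat key it cannot traverse as a path. I wondered whether a dot_expander processor in front of the rename would turn the dotted key into a real object path first, roughly like this (I am not sure if this is the right approach):

[
  {
    "dot_expander": {
      "field": "ap.mac_address",
      "path": "raw2"
    }
  },
  {
    "rename": {
      "field": "raw2.ap.mac_address",
      "target_field": "mac_address",
      "ignore_failure": true
    }
  }
]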
I am fairly new to Elastic, so please have mercy. I hope I explained it well enough; please let me know if you need more details. Thank you very much in advance!