I have the following field names in an index (verified in Kibana):
python .\bin\elkq2.py --getmapping --index='bps-trace-ttl_7d-8.16.1-rancher2-2025.01.08'
fields: @timestamp,@version,agent.ephemeral_id,agent.id,agent.name,agent.type,
agent.version,beat.name,beat.version,bpplatform,bps.application,container.id,
container.labels.docklogbeat_document_ttl,ecs.version,event,
fields.buypass.com/compliance,fields.buypass.com/component,
fields.buypass.com/environment,fields.buypass.com/managed-by,
fields.buypass.com/name,fields.buypass.com/part-of,fields.buypass.com/regulation,
fields.buypass.com/release,fields.buypass.com/site,fields.buypass.com/sla,
fields.buypass.com/team,fields.buypass.com/tier,fields.buypass.com/zoneClass,
fields.cid,fields.docklogbeat_document_ttl,fields.docklogbeat_document_type,
fields.log_type,fields.namespace,fields.path,fields.pod,host.name,input.type,
log.file.device_id,log.file.inode,log.flags,log.offset,message,source,tags
But if I switch to Try ES|QL
in Kibana, the fields containing '/' are removed from the available fields, and if I run the following I get an error (the same error appears in Kibana):
python .\bin\elkq2.py --fields="@timestamp,@version,fields.buypass.com/team"
--index='bps-trace-ttl_7d-8.16.1-rancher2-2025.01.08'
query: FROM bps-trace-ttl_7d-8.16.1-rancher2-2025.01.08
| LIMIT 10
| KEEP @timestamp,@version,fields.buypass.com/team
| SORT @timestamp desc
elasticsearch.BadRequestError: BadRequestError(400, 'parsing_exception', "line 1:106: token recognition error at: '/t'")
If I remove fields.buypass.com/team
everything works fine.
How are these related? We have tons of fields that contain a slash ('/').
Is there some tokenizer I can use, or a rule to escape or ignore such fields?
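For context, the only workaround I can think of (untested assumption: ES|QL accepts identifiers with special characters when they are wrapped in backticks, e.g. KEEP `fields.buypass.com/team`) would be to quote such field names before building the query. The helper names below (escape_field, build_keep) are hypothetical, just to illustrate the idea:

```python
import re

def escape_field(name: str) -> str:
    """Wrap a field name in backticks if it contains characters that
    (by assumption) the ES|QL lexer rejects in a bare identifier.
    Letters, digits, '_', '.' and '@' are left unquoted."""
    if re.search(r"[^A-Za-z0-9_.@]", name):
        return f"`{name}`"
    return name

def build_keep(fields: list[str]) -> str:
    """Build a KEEP clause with problematic names backtick-quoted."""
    return "| KEEP " + ",".join(escape_field(f) for f in fields)

print(build_keep(["@timestamp", "@version", "fields.buypass.com/team"]))
```

Is backtick quoting like this the intended way to reference such fields, or is there a better option?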