Parsing DB data indexes the fields as analyzed. Is there a way to change this?

Hi,

I am connecting to a database from Logstash, querying a table, and inserting the data into ES, but the string fields are all being analyzed. Is there a way for me to index the strings as not_analyzed? Although I see the full values in Discover mode in Kibana, I do not see the entire field values in legends or tooltips.

Best regards,

Update the index template that you're using. See options for this in the elasticsearch output plugin documentation.
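For example, something along these lines in the output section of your Logstash config (the hosts value and template path are placeholders, and the exact option names depend on your plugin version):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logdb_test"
    manage_template => true
    template => "/path/to/logdb_test_template.json"
    template_name => "logdb_test"
    template_overwrite => true
  }
}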

Hi,

Thank you! This is what I have in the template -

{
  "template": "logdb_test",
  "settings": {
    "analysis": {
      "analyzer": {
        "analyzer_path_tokens": {
          "type": "custom",
          "tokenizer": "path_hierarchy"
        }
      }
    }
  },
  "mappings": {
    "logdb_test": {
      "dynamic_templates": [
        {
          "my_dtemplate": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "string",
              "analyzer": "not_analyzed",
              "index": "not_analyzed",
              "doc_values": true
            }
          }
        }
      ]
    }
  }
}

Am I missing something?

And what is the actual mapping of the index? Did you update the template prior to the creation of the index?
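You can check with something like:

curl -XGET 'http://localhost:9200/logdb_test/_mapping?pretty'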

Yes, I set up the template before creating the index. It does pick up the template name, but the fields coming from the database show up under a different mapping type, Database, like this:
"Database": {
"properties": {
"pathlevel2": {
"type": "string"
},
"pathlevel1": {
"type": "string"
},
"pathlevel4": {
"type": "string"
},

I am not sure why that could be happening.

Looks like your template applies to documents of type logdb_test but the actual documents have the type Database.
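Either change that key in the template's mappings to Database, or use the _default_ mapping so the dynamic template applies to whatever type the documents end up with. A sketch of the latter (I have dropped the analyzer setting, since a not_analyzed string does not use an analyzer):

"mappings": {
  "_default_": {
    "dynamic_templates": [
      {
        "my_dtemplate": {
          "match_mapping_type": "string",
          "mapping": {
            "type": "string",
            "index": "not_analyzed",
            "doc_values": true
          }
        }
      }
    ]
  }
}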

Thank you! Yes, I missed that. I have made a change to the config file and am trying it again.

Best regards,

Hi Magnus,

Yes, it was the "type" specified in my jdbc input. It is all sorted now. I am sorry for the hassle.
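For reference, the relevant part of the config now looks roughly like this, with the connection details replaced by placeholders; the point is that the type matches the mapping type used in the template:

input {
  jdbc {
    # driver and connection settings omitted
    jdbc_connection_string => "jdbc:..."
    jdbc_user => "user"
    statement => "SELECT * FROM my_table"
    type => "logdb_test"
  }
}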

Thank you for your patience and support.

Best regards,