Index and search "à" char

I would like to index a document with a field like this:
{ "word" : "beltà"}
and then be able to query it for exact matching.
I'm reading the word from a file with Filebeat, performing some aggregation with Logstash, and then indexing it in Elasticsearch.
In Kibana I see a '?' on a black background instead of the 'à' char, and a query which should match "beltà" gives me no results.
Should I set a specific analyzer/tokenizer to cope with this?
Thank you for your attention

You first need to make sure that you are using UTF-8 everywhere in the pipeline.
Then you can use an asciifolding token filter if you want a query for either "a" or "à" to match.
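For reference, the index settings could look something like this. This is only a sketch: the index name, analyzer name, and field mapping here are made up for illustration, and the exact mapping syntax depends on your Elasticsearch version:

```json
PUT /my-index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "folding": {
          "tokenizer": "standard",
          "filter": ["lowercase", "asciifolding"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "word": { "type": "text", "analyzer": "folding" }
    }
  }
}
```

With this analyzer, "beltà" is indexed as "belta", so searches for either form match. If you need exact matching on the accented form instead, a `keyword` field without folding is the usual choice.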


Thank you.
I'm trying with a UTF-8 text file created with Notepad++.
I've tried both the plain and utf-8 encoding settings in Filebeat, but the black '?' is still there in Kibana.
What should I do to be sure that I'm using UTF-8?

Edit: What should I do if I have a file which is not UTF-8 encoded?
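One quick way to check the input before blaming the pipeline is to see whether the file decodes cleanly as UTF-8. A minimal sketch (not specific to Filebeat; the function name is mine):

```python
def is_utf8(path):
    # Read the raw bytes and try to decode them as UTF-8.
    # A file saved as Latin-1 with accented chars will fail here.
    with open(path, "rb") as f:
        data = f.read()
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False
```

If this returns False, the file needs converting (for example with iconv) before Filebeat reads it.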

I found this on the internet:

iconv -t UTF-8 YourFile.txt
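Note that as written this only prints the converted text to stdout, and it guesses the source encoding from your locale. A sketch of a full conversion, assuming the file is Latin-1 (adjust -f to the actual source encoding):

```shell
# Convert from ISO-8859-1 (Latin-1) to UTF-8 and write a new file
# instead of printing to stdout.
iconv -f ISO-8859-1 -t UTF-8 YourFile.txt > YourFile.utf8.txt
```

You can then point Filebeat at the converted file.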


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.