Hi,
I'm facing an issue with one of our fields in Elasticsearch. I'm using FSCrawler to extract content and ingest it into Elasticsearch, but we have a few large files where the extracted content exceeds 100,000 characters, so Elasticsearch is truncating at that limit. I see the attachment plugin has an `indexed_chars` setting, but I'm NOT using the attachment plugin; I'm using pure FSCrawler going directly to Elasticsearch. Is there a setting, or how can I configure the index/field to accept more than 100,000 characters?
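For context, this is the attachment processor option I'm referring to; a minimal sketch, assuming an ingest pipeline (the pipeline name and the `data` field are just examples, and `-1` is the documented value for "no limit"):

```json
PUT _ingest/pipeline/attachment
{
  "description": "Extract attachment content without truncation",
  "processors": [
    {
      "attachment": {
        "field": "data",
        "indexed_chars": -1
      }
    }
  ]
}
```

I'm looking for the equivalent of this when the documents come in through FSCrawler instead of through an ingest pipeline.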
The PDF I'm trying to extract is searchable from page 1 to the end (700+ pages), which rules out the possibility that the PDF is corrupted or not searchable.
Any direction or suggestion is highly appreciated.
Regards,
Christian