Hi all,
I am developing a file-search solution that uses fscrawler to push data into ES 5.5.0.
The issue I am facing is a field explosion (it crosses 1000 fields in no time), because fscrawler creates new fields dynamically, 95% of which are "meta.*" fields.
Details of the issue are here, if anyone is interested: Filesearch solution using ES 5.5.0
The solution I can see is to use the remove processor in an ingest pipeline to get rid of the meta.* fields.
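For reference, this is roughly what I have in mind (untested sketch; the pipeline name `drop-meta` is just an example). Since fscrawler stores the metadata under a single top-level `meta` object in the `_source`, removing that whole object might sidestep the wildcard problem:

```json
PUT _ingest/pipeline/drop-meta
{
  "description": "strip fscrawler metadata before indexing",
  "processors": [
    { "remove": { "field": "meta" } }
  ]
}
```

As far as I can tell, the remove processor in 5.5 takes a single concrete field name, with no wildcard support, which may be why `meta.*` throws.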
I have tried using remove directly on the meta.* fields, but that throws a java.lang exception.
The only way around seems to be using a script processor to find the meta.* fields and then using remove to get rid of them.
Thing is, I have no experience with this kind of thing. How do I access the document's fields in an ingest pipeline in the first place?
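To frame the question, this is my rough (untested) understanding: in a script processor the document is exposed to Painless as the `ctx` map, so something like this might be the script-based equivalent of the remove approach:

```json
PUT _ingest/pipeline/drop-meta
{
  "description": "strip fscrawler meta fields via a Painless script",
  "processors": [
    {
      "script": {
        "lang": "painless",
        "inline": "ctx.remove('meta')"
      }
    }
  ]
}
```

Is `ctx` the right way to reach the fields here, and would `ctx.remove('meta')` drop all the meta.* subfields in one go?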
Any pointers would be much appreciated.