I'm looking to set up the tokenizers/analyzers (so that I get the
default behavior that comes with Lucene, i.e. standard tokenization,
lowercasing, stopword removal) using the Java API.
However, as with the rest of the Java API, I'm finding the
documentation very lackluster, and it's confusing how I should be
building the low-level JSON to do this.
Why is setting this up in the configuration not a good idea?
Can anyone provide any kind of example of how to do this using the
Java API?
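For illustration, this is roughly the behavior I'm after, sketched directly against Lucene itself (a sketch assuming Lucene's `CustomAnalyzer` from the analysis-common module is on the classpath; the class name and filter names are Lucene's, not anything from the higher-level Java API I'm asking about):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.custom.CustomAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class AnalyzerSketch {

    // Run a piece of text through an analyzer and collect the emitted tokens.
    static List<String> analyze(Analyzer analyzer, String text) throws IOException {
        List<String> tokens = new ArrayList<>();
        try (TokenStream ts = analyzer.tokenStream("field", text)) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                tokens.add(term.toString());
            }
            ts.end();
        }
        return tokens;
    }

    public static void main(String[] args) throws IOException {
        // Standard tokenizer + lowercase filter + stopword filter:
        // the combination I'd like to configure programmatically.
        Analyzer analyzer = CustomAnalyzer.builder()
                .withTokenizer("standard")
                .addTokenFilter("lowercase")
                .addTokenFilter("stop")
                .build();

        System.out.println(analyze(analyzer, "The Quick Brown FOX"));
    }
}
```

What I want is the equivalent of this analysis chain, but declared through the Java API instead of hand-built JSON settings.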