Settings Configuration for Tokenizers/Analyzers

Hello,

I'm looking to set up tokenizers/analyzers (so that I get the default
behavior that comes with Lucene, i.e. standard tokenization, lowercasing,
and stopword removal) using the Java API.

However, I'm noticing that, as with the rest of the Java API, the
documentation is very lackluster. It's also unclear how I should be
building the low-level JSON to do this.

Why is setting this up in the configuration not a good idea?

Can anyone provide any examples of how to do this using the Java API?

Thanks!

Ronak Patel

Maybe this would help you.
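Something along these lines has worked for me when creating an index with
analysis settings from Java. It's a rough, untested sketch: the index name
"my_index" and analyzer name "my_analyzer" are just placeholders, and I'm
going from memory on the settings-builder methods, so double-check them
against your client version.

    import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;

    import org.elasticsearch.client.Client;

    public class AnalysisSetup {

        public static void createIndexWithAnalyzer(Client client) {
            // Build the index "analysis" settings programmatically; the dotted
            // keys map one-to-one to the low-level JSON you would otherwise send.
            client.admin().indices().prepareCreate("my_index")
                    .setSettings(settingsBuilder()
                            .put("analysis.analyzer.my_analyzer.type", "custom")
                            .put("analysis.analyzer.my_analyzer.tokenizer", "standard")
                            // standard tokenizer + lowercase + stop filters,
                            // i.e. roughly Lucene's StandardAnalyzer behavior
                            .putArray("analysis.analyzer.my_analyzer.filter",
                                    "lowercase", "stop"))
                    .execute().actionGet();
        }
    }

If you'd rather work with the JSON directly, setSettings(...) also accepts
the raw source as a string (if I remember correctly), so you can build it
with XContentFactory.jsonBuilder() or load it from a file instead of using
the dotted keys.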
