ES Plugin to extend Lucene's Standard Tokenizer

Hi all,

Recently, I've been working on an extension to Lucene's Standard Tokenizer
that allows the user to customize or override the default word-boundary
break rules for Unicode characters. The Standard Tokenizer implements the
word-break rules from the Unicode text segmentation algorithm (UAX #29,
http://www.unicode.org/reports/tr29/), in which most punctuation symbols
(except for underscore '_') are treated as hard word breaks (e.g. "@foo"
and "#foo" are both tokenized to "foo"). While the Standard Tokenizer works
great in most cases, I found that being unable to override the default word
break rules was quite limiting, especially since many of these punctuation
symbols now carry important meaning on the web (@ for mentions, # for
hashtags, etc.).
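To make the default behavior concrete, here's a minimal Lucene sketch (it
uses the no-argument constructor from recent Lucene versions; older
releases pass a Version and a Reader to the constructor instead):

    import java.io.StringReader;
    import org.apache.lucene.analysis.standard.StandardTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class StandardTokenizerDemo {
        public static void main(String[] args) throws Exception {
            // StandardTokenizer follows the UAX #29 word-break rules, so
            // '@' and '#' act as hard breaks and are dropped, while '_'
            // (ExtendNumLet) keeps "hello_world" together.
            StandardTokenizer tokenizer = new StandardTokenizer();
            tokenizer.setReader(new StringReader("@foo #bar hello_world"));
            CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
            tokenizer.reset();
            while (tokenizer.incrementToken()) {
                System.out.println(term); // prints: foo, bar, hello_world
            }
            tokenizer.end();
            tokenizer.close();
        }
    }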

I've wrapped this extension to the Standard Tokenizer in an Elasticsearch
plugin, which can be found at
https://github.com/bbguitar77/elasticsearch-analysis-standardext. I'm
definitely looking for feedback, as this is my first go at an Elasticsearch
plugin!

I'm hoping other Elasticsearch / Lucene users find this helpful.

Cheers!
Bryan


Hello Bryan,

Congrats on your first plugin.
I have a question here - can you implement the whole thing by using the
pattern tokenizer?

Does your plugin provide any advantage over that approach?

Thanks
Vineeth


Also, congrats on writing a plugin!

For alternative punctuation tokenization, you can also look at the "classic"
tokenizer, which preserves the behavior of Lucene's standard tokenization
before 3.1, when it switched to Unicode text segmentation.
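For example, the classic grammar recognizes e-mail addresses as single
tokens where UAX #29 splits them. A minimal sketch (same constructor
caveat as the Lucene snippet above):

    import java.io.StringReader;
    import org.apache.lucene.analysis.standard.ClassicTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.analysis.tokenattributes.TypeAttribute;

    public class ClassicTokenizerDemo {
        public static void main(String[] args) throws Exception {
            ClassicTokenizer tokenizer = new ClassicTokenizer();
            tokenizer.setReader(new StringReader("mail john@example.com today"));
            CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
            TypeAttribute type = tokenizer.addAttribute(TypeAttribute.class);
            tokenizer.reset();
            while (tokenizer.incrementToken()) {
                // "john@example.com" survives as one token of type <EMAIL>;
                // the standard tokenizer would split it into "john" and
                // "example.com".
                System.out.println(term + " " + type.type());
            }
            tokenizer.end();
            tokenizer.close();
        }
    }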

Best,

Jörg


Hi Vineeth,
I haven't looked at the plugin Bryan has created. However, a plugin built
for special characters should give better performance than the pattern
tokenizer or custom filters.
Regards,
Raj


You can definitely use the Pattern Tokenizer to define your own token
separators (i.e. the word-boundary breaks), but you will add complexity and
lose some of the benefits of the StandardTokenizer.

First, regarding complexity: if you want certain characters not to become
token separators, your regex will become increasingly complex and lengthy.
I know this from experience, having used the Pattern Tokenizer previously.
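A rough sketch of what I mean, with plain java.util.regex standing in for
the pattern tokenizer (whose default separator pattern is \W+):

    import java.util.Arrays;

    public class PatternSplitDemo {
        public static void main(String[] args) {
            // The default separator \W+ splits on runs of non-word
            // characters. Keeping '@' and '#' inside tokens means
            // excluding them from the separator class by hand:
            String customSeparators = "[^\\w@#]+";
            String text = "email me @foo about #bar";
            System.out.println(Arrays.toString(text.split(customSeparators)));
            // -> [email, me, @foo, about, #bar]
            // Every additional symbol you care about grows this class,
            // and none of the Unicode cases below are handled yet.
        }
    }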

Second, you will lose the benefits of how the StandardTokenizer handles
international characters, symbols, and special characters outside the ASCII
range of Unicode (e.g. left-to-right markers). It's probably impossible to
capture everything the StandardTokenizer does in a single regex.
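For a quick taste of what such a regex would have to reproduce (a small
sketch, same constructor caveat as before):

    import java.io.StringReader;
    import org.apache.lucene.analysis.standard.StandardTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.analysis.tokenattributes.TypeAttribute;

    public class UnicodeDemo {
        public static void main(String[] args) throws Exception {
            StandardTokenizer tokenizer = new StandardTokenizer();
            tokenizer.setReader(new StringReader("Müller visited 東京"));
            CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
            TypeAttribute type = tokenizer.addAttribute(TypeAttribute.class);
            tokenizer.reset();
            while (tokenizer.incrementToken()) {
                // Müller and visited come out as <ALPHANUM>, then 東 and 京
                // as separate <IDEOGRAPHIC> tokens. Java's ASCII-only \W+
                // would split "Müller" at the ü and drop 東京 entirely.
                System.out.println(term + " " + type.type());
            }
            tokenizer.end();
            tokenizer.close();
        }
    }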

If you are dealing with very clean data and perhaps English-only text, the
Pattern Tokenizer could be a viable solution. But when dealing with web or
user-generated data in many languages, the StandardTokenizer is your friend.

- Bryan
