If you have an idea for improving the documentation for the tokenizer, could you open an issue? I'd certainly be happy to review it.
Both UNICODE_CHAR_CLASS and UNICODE_CHARACTER_CLASS should work. What error are you seeing?
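For illustration, assuming the question is about the `flags` parameter of Elasticsearch's `pattern` tokenizer (which accepts Java regex flag names), a settings snippet might look like the following; the index and tokenizer names are hypothetical:

```json
PUT /my-index
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_unicode_tokenizer": {
          "type": "pattern",
          "pattern": "\\W+",
          "flags": "UNICODE_CHARACTER_CLASS"
        }
      }
    }
  }
}
```

With `UNICODE_CHARACTER_CLASS` set, predefined classes such as `\w` and `\W` match Unicode word characters rather than ASCII only, so the tokenizer splits correctly on non-Latin text.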
© 2020. All Rights Reserved - Elasticsearch