Also, try switching the length and word_delimiter filters around for different results. However, the above would only work if w.1000 were the only string in your field; is that the case?
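Filter order matters because the filters in an analyzer run in the order they are listed. As a sketch (index name, analyzer name, and the length filter's min value are placeholders, not from your setup), here the word_delimiter filter splits tokens first and the length filter then drops the short fragments; reversing the two entries in the filter array would apply the length check before the split:

```json
PUT my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_length": { "type": "length", "min": 2 }
      },
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "whitespace",
          "filter": [ "word_delimiter", "my_length" ]
        }
      }
    }
  }
}
```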
The standard tokenizer would already have tokenized, and potentially removed, w.1000 before the word_delimiter filter gets a chance to kick in. You could try a different tokenizer like whitespace or classic and see if that helps as well.
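A quick way to compare tokenizers without reindexing is the _analyze API. This sketch assumes a field value like w.1000; swap "whitespace" for "standard" or "classic" to see how each tokenizer changes what the word_delimiter filter receives:

```json
POST _analyze
{
  "tokenizer": "whitespace",
  "filter": [ "word_delimiter" ],
  "text": "w.1000"
}
```

With the whitespace tokenizer, w.1000 reaches word_delimiter as a single token, which the filter can then split on the dot and the letter/number boundary.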