The field value seems rather long. I think it might exceed the default ignore_above length (usually 256 characters) and thus not be indexed for search. If you index a shorter document (below 256 chars), is it found when you search for that string?
If so, you should most likely adjust the ignore_above value for that field via the index's mapping, since you know it will contain long values.
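As a sketch, the mapping update could look like this (the index name `my-index` and the `message` field with a `keyword` sub-field are illustrative; adjust to your actual mapping). Note that ignore_above is an updatable mapping parameter, but the new limit only applies to documents indexed after the change, so existing long values still need a reindex:

```json
PUT my-index/_mapping
{
  "properties": {
    "message": {
      "type": "text",
      "fields": {
        "keyword": {
          "type": "keyword",
          "ignore_above": 2048
        }
      }
    }
  }
}
```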
I reindexed .kibana to tmp, deleted the .kibana index, used the mappings API to change every ignore_above from 256 to 2048, checked the tmp index to verify the change, then reindexed tmp back to .kibana. When I checked my new .kibana index, the values were back to 256!
Reindex forces ignore_above back to 256. That can't possibly be by design, can it?
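This is expected in one sense: _reindex copies documents, not mappings. If the destination index does not exist, it is created with default/dynamic mappings (and for .kibana specifically, Kibana or an index template may reapply its stock mapping). A sketch of the workaround, assuming no template overrides the mapping, is to create the destination index with the desired mapping first and only then reindex into it:

```json
PUT .kibana
{
  "mappings": {
    "properties": {
      "title": {
        "type": "keyword",
        "ignore_above": 2048
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "tmp" },
  "dest":   { "index": ".kibana" }
}
```

The field shown here is illustrative; you would need to recreate the full .kibana mapping with your adjusted ignore_above values before running the reindex.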
Yeah, if you applied the standard analyzer (which includes the lowercase token filter), the indexed terms will be lowercased.
You can check whether the index has a message.keyword or message.raw sub-field; that would contain the unanalyzed (and thus not lowercased) value for you to regex on.
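For example, a case-sensitive regexp query against the keyword sub-field might look like this (index and field names are assumptions; substitute your own, and note that a regexp on a keyword field must match the entire value):

```json
GET my-index/_search
{
  "query": {
    "regexp": {
      "message.keyword": "Error.*"
    }
  }
}
```

You can confirm which sub-fields exist with `GET my-index/_mapping` before running the query.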