Searching indexed fields without analysing


(Chris H-2) #1

Hi. I've deployed elasticsearch with logstash and kibana to take in
Windows logs from my OSSEC log server, following this guide:
http://vichargrave.com/ossec-log-management-with-elasticsearch/
I've tweaked the logstash config to extract some specific fields from the
logs, such as User_Name. I'm having some issues searching on these fields
though.

These searches work as expected:

  • User_Name: *
  • User_Name: john.smith
  • User_Name: john.*
  • NOT User_Name: john.*

But I'm having problems with Computer accounts, which take the format
"w-dc-01$": the name is being split on the "-" characters, and the "$" is
ignored. So a search for "w-dc-01" returns all the servers whose names start
with "w-". Also, I can't do "NOT User_Name: *$" to exclude computer accounts.
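
I believe the splitting can be checked with the _analyze API (something like
this against a local node; I'm assuming the default standard analyzer, which
is what applies to the field here):

```shell
# Ask elasticsearch how the standard analyzer tokenizes a computer account
# name. The hyphens act as token boundaries and the "$" is dropped, so the
# indexed terms should come back as "w", "dc" and "01".
curl 'localhost:9200/_analyze?analyzer=standard&pretty' -d 'w-dc-01$'
```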

The mappings are created automatically by logstash, and GET
/logstash-2014.01.08/_mapping shows:

"User_Name": {

"type": "multi_field",
"fields": {
"User_Name": {
"type": "string",
"omit_norms": true
},
"raw": {
"type": "string",
"index": "not_analyzed",
"omit_norms": true,
"index_options": "docs",
"include_in_all": false,
"ignore_above": 256
}
}
},

My (limited) understanding is that "not_analyzed" should stop the field from
being split, so that my search matches the full name, but it doesn't. I'm
using both Kibana and curl to run the queries.

Hope this makes sense. I really like the look of elasticsearch, but being
able to search on extracted fields like this is pretty key to me using it.

Thanks.

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/62e3ebfc-aaa3-4af0-b93e-d4454146607b%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


(Chris H-2) #2

Hi, a bit more information.

I tried adding a custom analyzer based on a recommendation I saw online
somewhere. This partly works, in that the field is no longer being tokenised.
But I can't do wildcard searches on the fields in Kibana, and they're now case-sensitive :frowning:

curl localhost:9200/_template/logstash-username -XPUT -d '{
  "template": "logstash-*",
  "settings": {
    "analysis": {
      "analyzer": {
        "lc_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filters": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "default": {
      "properties": {
        "User_Name": { "type": "string", "analyzer": "lc_analyzer" }
      }
    }
  }
}'
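
Looking at it again, I wonder if the list of token filters should be keyed
"filter" rather than "filters", which might explain why the lowercase filter
isn't applying. Something like this, though I haven't confirmed it fixes the
case sensitivity:

```shell
# Same template as above, but with the token filter list under "filter",
# which is the key elasticsearch expects in a custom analyzer definition.
curl localhost:9200/_template/logstash-username -XPUT -d '{
  "template": "logstash-*",
  "settings": {
    "analysis": {
      "analyzer": {
        "lc_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "default": {
      "properties": {
        "User_Name": { "type": "string", "analyzer": "lc_analyzer" }
      }
    }
  }
}'
```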

Thanks




(Jun Ohtani) #3

Hi Chris,

Could you try escaping the "-" in your query against the "not_analyzed" field?

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html#_reserved_characters

I hope this helps.
Regards,


Jun Ohtani
johtani@gmail.com
blog : http://blog.johtani.info
twitter : http://twitter.com/johtani




(Chris H-2) #4

Hi, Jun.

That doesn't seem to work. For a user with the username bob.smith-jones:

  • bob.smith-jones -> matches
  • bob.smith-aaaa -> matches
  • bob.smi* -> matches
  • bob.smith-j* -> no results
  • bob.smith\-j* -> no results

Also, "$" isn't one of the listed special characters.

Thanks.




(Brian Yoder) #5

Chris,

I updated one of my tests to reproduce your issue. My text field is a
multi-field where text.na is the text field without any analysis at all.

This Lucene query does not find anything at all:

{
  "bool" : {
    "must" : {
      "query_string" : {
        "query" : "text.na:Immortal-Li*"
      }
    }
  }
}

But this one works fine:

{
  "bool" : {
    "must" : {
      "prefix" : {
        "text.na" : {
          "prefix" : "Immortal-Li"
        }
      }
    }
  }
}

And returns the two documents that I expected:

{ "_index" : "mortal" , "_type" : "elf" , "_id" : "1" , "_version" : 1 ,
"_score" : 1.0 , "_source" :
{ "cn" : "Celeborn" , "text" : "Immortal-Lives forever" } }

{ "_index" : "mortal" , "_type" : "elf" , "_id" : "2" , "_version" : 1 ,
"_score" : 1.0 , "_source" :
{ "cn" : "Galadriel" , "text" : "Immortal-Lives forever" } }

Note that in both cases, the query's case must match since the field value
is not analyzed at all.

I'm not sure if this is a true bug. In general, I find Lucene syntax
somewhat useful for ad-hoc queries, and I find their so-called Simple Query
Parser syntax to be completely unable to find anything when there is no
_all field, whether or not I specify a default field. (But that's another
issue I'm going to ask about in the near future.)

Brian




(Jun Ohtani) #6

Hi Chris,

I reproduced your issue in the following gist:

https://gist.github.com/johtani/8346404

And I tried changing the queries as follows:

User_Name.raw:bob.smith-jones -> matches
User_Name.raw:bob.smi* -> matches
User_Name.raw:bob.smith-j* -> matches
User_Name.raw:bob.smith\-j* -> matches

I used the User_Name.raw field instead of User_Name.

Sorry, escaping is not necessary after all…

I don't know why Brian's example query_string query does not work, though…

Does that make sense?
Or is my understanding mistaken?
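
For example, the full request might be something like this (just a sketch,
using the index name from the original post):

```shell
# Query the not_analyzed "raw" sub-field. The whole value is indexed as a
# single term, so the hyphens and "$" are preserved and the wildcard works
# against the complete name.
curl 'localhost:9200/logstash-2014.01.08/_search?pretty' -d '{
  "query": {
    "query_string": {
      "query": "User_Name.raw:bob.smith-j*"
    }
  }
}'
```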


Jun Ohtani
johtani@gmail.com
blog : http://blog.johtani.info
twitter : http://twitter.com/johtani




(Brian Yoder) #7

If it helps, here are my index settings and mappings. Note that I chose the
name text.na as the non-analyzed form, not text.raw. Perhaps I should
follow convention? But for now, a rose by any other name is still not
analyzed:

{
  "settings" : {
    "index" : {
      "number_of_shards" : 1,
      "refresh_interval" : "1s",
      "analysis" : {
        "char_filter" : { },
        "filter" : {
          "english_snowball_filter" : {
            "type" : "snowball",
            "language" : "English"
          }
        },
        "analyzer" : {
          "english_stemming_analyzer" : {
            "type" : "custom",
            "tokenizer" : "standard",
            "filter" : [ "standard", "lowercase", "asciifolding", "english_snowball_filter" ]
          },
          "english_standard_analyzer" : {
            "type" : "custom",
            "tokenizer" : "standard",
            "filter" : [ "standard", "lowercase", "asciifolding" ]
          }
        }
      }
    }
  },
  "mappings" : {
    "default" : {
      "dynamic" : "strict"
    },
    "ghost" : {
      "_all" : {
        "enabled" : false
      },
      "_ttl" : {
        "enabled" : true,
        "default" : "1.9m"
      },
      "properties" : {
        "cn" : {
          "type" : "string",
          "analyzer" : "english_stemming_analyzer"
        },
        "text" : {
          "type" : "multi_field",
          "fields" : {
            "text" : {
              "type" : "string",
              "analyzer" : "english_stemming_analyzer",
              "position_offset_gap" : 4
            },
            "std" : {
              "type" : "string",
              "analyzer" : "english_standard_analyzer",
              "position_offset_gap" : 4
            },
            "na" : {
              "type" : "string",
              "index" : "not_analyzed"
            }
          }
        }
      }
    },
    "elf" : {
      "_all" : {
        "enabled" : false
      },
      "_ttl" : {
        "enabled" : true
      },
      "properties" : {
        "cn" : {
          "type" : "string",
          "analyzer" : "english_stemming_analyzer"
        },
        "text" : {
          "type" : "multi_field",
          "fields" : {
            "text" : {
              "type" : "string",
              "analyzer" : "english_stemming_analyzer",
              "position_offset_gap" : 4
            },
            "std" : {
              "type" : "string",
              "analyzer" : "english_standard_analyzer",
              "position_offset_gap" : 4
            },
            "na" : {
              "type" : "string",
              "index" : "not_analyzed"
            }
          }
        }
      }
    }
  }
}

Brian




(Jun Ohtani) #8

Hi Brian,

Thanks!

I see why your query does not match anything at all.

The "query_string" query automatically lower-cases query terms in some cases,
i.e. wildcard, prefix and fuzzy queries.
See: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html

I changed your query as follows:

{
  "query": {
    "bool" : {
      "must" : {
        "query_string" : {
          "query" : "text.na:Immortal-Li*",
          "lowercase_expanded_terms" : false
        }
      }
    }
  }
}

Then it returns the two documents.
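
As a complete request, something like this (I'm guessing the index and type
names from your result snippets, so treat it as a sketch):

```shell
# Disable the parser's lowercasing of expanded (wildcard) terms so the query
# term keeps its capitals and can match the unanalyzed value
# "Immortal-Lives forever".
curl 'localhost:9200/mortal/elf/_search?pretty' -d '{
  "query": {
    "bool" : {
      "must" : {
        "query_string" : {
          "query" : "text.na:Immortal-Li*",
          "lowercase_expanded_terms" : false
        }
      }
    }
  }
}'
```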

I've learned a great deal!

Regards,


Jun Ohtani
johtani@gmail.com
blog : http://blog.johtani.info
twitter : http://twitter.com/johtani




(Chris H-2) #9

Thanks, everybody. It does look like the issue is with the
"lowercase_expanded_terms". I've also discovered that logstash by default
creates both an analyzed and non-analyzed field, which helps a bit.

However, I've worked around my specific issue (differentiating Windows User
and Computer accounts) in logstash by extracting them into separate fields.
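
In case it helps anyone, the logstash side looks roughly like this (a sketch
only; the field and condition are simplified from my real config):

```
filter {
  # Windows computer accounts end in "$"; move them into their own field
  # so user and computer accounts can be searched separately.
  if [User_Name] =~ /\$$/ {
    mutate {
      rename => [ "User_Name", "Computer_Name" ]
    }
  }
}
```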

Thanks




(system) #10