FacetPhaseExecutionException with new Marvel installation

I've got a brand-new Marvel installation, and am having some frustrating
issues with it: on the overview screen, I am constantly getting errors like:
Oops! FacetPhaseExecutionException[Facet [timestamp]: failed to find
mapping for node.ip_port.raw]

Production cluster:

  • ElasticSearch 1.1.1
  • Marvel 1.2.1
  • Running in vSphere

Monitoring cluster:

  • ElasticSearch 1.3.4
  • Marvel 1.2.1
  • Running in AWS

After installing the plugin and bouncing all nodes in both clusters, Marvel
seems to be working -- an index has been created in the monitoring cluster (
.marvel-2014.10.26), and I see thousands of documents in there. There are
documents with the following types: cluster_state, cluster_stats,
index_stats, indices_stats, node_stats. So, it does seem that data is
being shipped from the prod cluster to the monitoring cluster.

I've seen in the user group that other people have had similar issues.
Some of those mention problems with the Marvel index template. I don't
seem to have any templates at all in my monitoring cluster:

$ curl -XGET localhost:9200/_template/
{}

I tried manually adding the default template (as described in
http://www.elasticsearch.org/guide/en/marvel/current/#config-marvel-indices),
but that didn't seem to have any effect.
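
For context, a missing template would explain both facet errors: the Marvel
dashboards facet on not_analyzed .raw sub-fields (index.raw,
node.ip_port.raw), and those sub-fields only exist if the index was created
while the template was registered. The template's effect is roughly like the
following dynamic-template fragment (an illustrative sketch, not the actual
Marvel template):

```json
{
  "template": ".marvel*",
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "strings": {
            "match_mapping_type": "string",
            "match": "*",
            "mapping": {
              "type": "string",
              "fields": {
                "raw": { "type": "string", "index": "not_analyzed" }
              }
            }
          }
        }
      ]
    }
  }
}
```

Documents indexed before such a template exists never get the .raw
sub-fields, which would explain why adding the template after the fact has
no visible effect on existing indices.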

So far, I've seen just two specific errors in Marvel:

  • FacetPhaseExecutionException[Facet [timestamp]: failed to find mapping
    for node.ip_port.raw]
  • FacetPhaseExecutionException[Facet [timestamp]: failed to find mapping
    for index.raw]

I've also looked through the logs on both the production and monitoring
clusters, and the only errors are in the monitoring cluster resulting from
queries from the Marvel UI, like this:

[2014-10-27 11:08:13,427][DEBUG][action.search.type ] [ip-10-4-1-187]
[.marvel-2014.10.27][1], node[SR_hriFmTCav-8ofbKU-8g], [R], s[STARTED]:
Failed to execute [org.elasticsearch.action.search.SearchRequest@661dc47e]
org.elasticsearch.search.SearchParseException: [.marvel-2014.10.27][1]:
query[ConstantScore(BooleanFilter(+: +cache(_type:index_stats) +cache(
@timestamp:[1414367880000 TO 1414368540000])))],from[-1],size[10]: Parse
Failure [Failed to parse source [{"size":10,"query":{"filtered":{"query":{
"match_all":{}},"filter":{"bool":{"must":[{"match_all":{}},{"term":{"_type":
"index_stats"}},{"range":{"@timestamp":{"from":"now-10m/m","to":"now/m"
}}}]}}}},"facets":{"timestamp":{"terms_stats":{"key_field":"index.raw",
"value_field":"@timestamp","order":"term","size":2000}},
"primaries.docs.count":{"terms_stats":{"key_field":"index.raw","value_field"
:"primaries.docs.count","order":"term","size":2000}},
"primaries.indexing.index_total":{"terms_stats":{"key_field":"index.raw",
"value_field":"primaries.indexing.index_total","order":"term","size":2000}},
"total.search.query_total":{"terms_stats":{"key_field":"index.raw",
"value_field":"total.search.query_total","order":"term","size":2000}},
"total.merges.total_size_in_bytes":{"terms_stats":{"key_field":"index.raw",
"value_field":"total.merges.total_size_in_bytes","order":"term","size":2000
}},"total.fielddata.memory_size_in_bytes":{"terms_stats":{"key_field":
"index.raw","value_field":"total.fielddata.memory_size_in_bytes","order":
"term","size":2000}}}}]]
at org.elasticsearch.search.SearchService.parseSource(SearchService.
java:660)
at org.elasticsearch.search.SearchService.createContext(
SearchService.java:516)
at org.elasticsearch.search.SearchService.createAndPutContext(
SearchService.java:488)
at org.elasticsearch.search.SearchService.executeQueryPhase(
SearchService.java:257)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.
call(SearchServiceTransportAction.java:206)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.
call(SearchServiceTransportAction.java:203)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.
run(SearchServiceTransportAction.java:517)
at java.util.concurrent.ThreadPoolExecutor.runWorker(
ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(
ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.search.facet.FacetPhaseExecutionException:
Facet [timestamp]: failed to find mapping for index.raw
at org.elasticsearch.search.facet.termsstats.TermsStatsFacetParser.
parse(TermsStatsFacetParser.java:126)
at org.elasticsearch.search.facet.FacetParseElement.parse(
FacetParseElement.java:93)
at org.elasticsearch.search.SearchService.parseSource(SearchService.
java:644)
... 9 more
[2014-10-27 11:08:13,427][DEBUG][action.search.type ] [ip-10-4-1-187]
All shards failed for phase: [query]

Both clusters use the same timezone and their clocks are synchronized via
NTP.

Does anyone have any suggestions on what to do next? I've reinstalled the
plugin across both clusters without any changes.

Thanks much,
Ross

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/d1a5065e-8ede-4665-871b-979b6c683917%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

To troubleshoot a little more, I rebuilt the monitoring cluster to use
ElasticSearch 1.1.1, which matches the ES version used in the production
cluster. No luck.

On the Overview dashboard, I can see some data (summary, doc count, search
and indexing rates are all populated [screenshot attached]), but both the
nodes and indices sections are empty apart from the errors mentioned in the
previous post. Cluster pulse doesn't show any events at all; node stats and
index stats both show data.

Any further suggestions would be greatly appreciated :)

Cheers,
Ross

On Monday, 27 October 2014 11:15:42 UTC+11, Ross Simpson wrote:


It looks like something is indeed wrong with your Marvel index template,
which should be in place before data is indexed. How did you install
Marvel? Did you perhaps delete the data folder of the monitoring cluster
after production was already shipping data?

Cheers,
Boaz

On Monday, October 27, 2014 7:45:34 AM UTC+1, Ross Simpson wrote:


Hi Boaz,

To install, I ran

bin/plugin --install elasticsearch/marvel/latest

on each node in both clusters, then restarted both clusters.

Since then, I have tried several things, including deleting the indexes
from the monitoring cluster and reinstalling the plugin on the monitoring
cluster. I'll try now to delete all the marvel indexes, uninstall, then
reinstall marvel into both clusters.
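
In case it helps anyone else, the cycle I'm planning looks like this
(standard 1.x plugin commands, run from the Elasticsearch home directory;
the delete URL assumes the monitoring cluster is local):

```shell
# On every node in both clusters:
bin/plugin --remove marvel
bin/plugin --install elasticsearch/marvel/latest

# On the monitoring cluster, delete the existing Marvel indices so they can
# be recreated once the template is registered:
curl -XDELETE 'http://localhost:9200/.marvel-*'
```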

I'm a bit stumped otherwise, so I'm all ears for any other suggestions.

Cheers,
Ross

On Tuesday, 28 October 2014 08:30:54 UTC+11, Boaz Leskes wrote:


Hey,

You probably did, but just double-checking: did you change the settings in the yaml files before restarting the nodes?

There is an easier way to fix this than a full restart: first, restart a single node on production. That will cause the agent to check again for the template. Verify that the template was added. Then delete all .marvel-2014* indices on the monitoring cluster and let them be recreated based on the template.
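
Concretely, the check-and-reset sequence might look like this (host name is
a placeholder for a monitoring-cluster node; the template name assumes the
default registered by the Marvel agent):

```shell
# 1. Restart a single production node (via your usual service manager) so
#    the Marvel agent re-checks for the template.

# 2. Verify the template is now registered on the monitoring cluster:
curl -XGET 'http://monitoring-host:9200/_template/marvel?pretty'

# 3. Delete the Marvel indices so they are recreated from the template:
curl -XDELETE 'http://monitoring-host:9200/.marvel-2014*'
```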

Boaz


Sent from Mailbox

On Mon, Oct 27, 2014 at 11:25 PM, Ross Simpson simpsora@gmail.com wrote:


Hi again,

Yep, I had added the required settings to the yaml files first.
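
For completeness, the settings in question (per the Marvel 1.x
configuration docs; host name is a placeholder) are along these lines:

```yaml
# elasticsearch.yml on production nodes: ship stats to the monitoring cluster
marvel.agent.exporter.es.hosts: ["monitoring-host:9200"]

# elasticsearch.yml on monitoring nodes: don't monitor themselves
marvel.agent.enabled: false
```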

I tried the steps you described above, but they had no effect -- still no
template present, and still the same errors. Since it wasn't too much
trouble, I started over from scratch -- rebuilt the monitoring cluster, but
also uninstalled and reinstalled the plugin in the production cluster, and
restarted. After this, I saw a bunch of update_mapping calls, the template
was present, and the errors went away. It seems that some state regarding
Marvel is kept in the production cluster, and whatever it was got cleared
when I reinstalled the plugin there. That may be worth mentioning in the
installation docs.

In any case, thanks for your help -- it's all working now!

Cheers,
Ross

On Tuesday, 28 October 2014 09:48:12 UTC+11, Boaz Leskes wrote:

Hey,

You probably did but just double checking- did you change the settings in
the yaml files before restarting the nodes?

There is an easier way to fix this than a full restart: first, restart a
single node in production. That will cause the agent to check again for the
template. Verify that the template was added, then delete all .marvel-2014*
indices on the monitoring cluster and let them be recreated based on the
template.
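Concretely, the verify-and-delete steps might look like this, run against the monitoring cluster (host and port are placeholders):

```shell
# Check that the Marvel index template has been registered
curl -XGET 'localhost:9200/_template/?pretty'

# If it is present, delete the old Marvel indices so they are
# recreated with mappings from the template
curl -XDELETE 'localhost:9200/.marvel-2014*'
```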

Boaz


On Mon, Oct 27, 2014 at 11:25 PM, Ross Simpson <simp...@gmail.com> wrote:

Hi Boaz,

To install, I ran

bin/plugin --install elasticsearch/marvel/latest

on each node in both clusters, then restarted both clusters.

Since then, I have tried several things, including deleting the indexes
from the monitoring cluster and reinstalling the plugin on the monitoring
cluster. I'll try now to delete all the marvel indexes, uninstall, then
reinstall marvel into both clusters.
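For anyone following along, the uninstall/reinstall cycle on each node would be something like this (paths relative to the Elasticsearch home directory):

```shell
bin/plugin --remove marvel
bin/plugin --install elasticsearch/marvel/latest
# then restart the node
```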

I'm a bit stumped otherwise, so I'm all ears for any other suggestions.

Cheers,
Ross

On Tuesday, 28 October 2014 08:30:54 UTC+11, Boaz Leskes wrote:

It looks like something is indeed wrong with your Marvel index
template, which should be there before data is indexed. How did you install
Marvel? Did you perhaps delete the data folder of the monitoring cluster
after production was already shipping data?

Cheers,
Boaz

On Monday, October 27, 2014 7:45:34 AM UTC+1, Ross Simpson wrote:

To troubleshoot a little more, I rebuilt the monitoring cluster to use
ElasticSearch 1.1.1, which matches the ES version used in the production
cluster. No luck.

On the Overview dashboard, I can see some data (summary, doc count,
search and indexing rates are all populated [screenshot attached]), but
both the nodes and indices sections are empty apart from the errors
mentioned in the previous post. Cluster pulse doesn't show any events at
all; node stats and index stats both show data.

Any further suggestions would be greatly appreciated :)

Cheers,
Ross

On Monday, 27 October 2014 11:15:42 UTC+11, Ross Simpson wrote:

I've got a brand-new Marvel installation, and am having some
frustrating issues with it: on the overview screen, I am constantly getting
errors like:
Oops! FacetPhaseExecutionException[Facet [timestamp]: failed to
find mapping for node.ip_port.raw]

Production cluster:

  • ElasticSearch 1.1.1
  • Marvel 1.2.1
  • Running in vSphere

Monitoring cluster:

  • ElasticSearch 1.3.4
  • Marvel 1.2.1
  • Running in AWS

After installing the plugin and bouncing all nodes in both clusters,
Marvel seems to be working -- an index has been created in the monitoring
cluster (.marvel-2014.10.26), and I see thousands of documents in
there. There are documents with the following types: cluster_state,
cluster_stats, index_stats, indices_stats, node_stats. So, it does
seem that data is being shipped from the prod cluster to the monitoring
cluster.

I've seen in the user group that other people have had similar issues.
Some of those mention problems with the Marvel index template. I don't
seem to have any templates at all in my monitoring cluster:

$ curl -XGET localhost:9200/_template/
{}

I tried manually adding the default template (as described in
http://www.elasticsearch.org/guide/en/marvel/current/#config-marvel-indices),
but that didn't seem to have any effect.
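For the record, manually registering a template looks something like the following; the template name and settings shown are illustrative only (the real default template ships with the Marvel plugin and includes the full field mappings):

```shell
curl -XPUT 'localhost:9200/_template/marvel_custom' -d '{
  "template": ".marvel-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}'
```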

So far, I've seen just two specific errors in Marvel:

  • FacetPhaseExecutionException[Facet [timestamp]: failed to find
    mapping for node.ip_port.raw]
  • FacetPhaseExecutionException[Facet [timestamp]: failed to find
    mapping for index.raw]

I've also looked through the logs on both the production and
monitoring clusters, and the only errors are in the monitoring cluster
resulting from queries from the Marvel UI, like this:

[2014-10-27 11:08:13,427][DEBUG][action.search.type ] [ip-10-4-1-187]
[.marvel-2014.10.27][1], node[SR_hriFmTCav-8ofbKU-8g], [R], s[STARTED]:
Failed to execute [org.elasticsearch.action.search.SearchRequest@661dc47e]
org.elasticsearch.search.SearchParseException: [.marvel-2014.10.27][1]: query[ConstantScore(BooleanFilter(+: +cache(_type:index_stats) +cache(@timestamp:[1414367880000 TO 1414368540000])))],from[-1],size[10]: Parse Failure [Failed to parse source [{"size":10,"query":{"filtered":{"query":{"match_all":{}},"filter":{"bool":{"must":[{"match_all":{}},{"term":{"_type":"index_stats"}},{"range":{"@timestamp":{"from":"now-10m/m","to":"now/m"}}}]}}}},"facets":{"timestamp":{"terms_stats":{"key_field":"index.raw","value_field":"@timestamp","order":"term","size":2000}},"primaries.docs.count":{"terms_stats":{"key_field":"index.raw","value_field":"primaries.docs.count","order":"term","size":2000}},"primaries.indexing.index_total":{"terms_stats":{"key_field":"index.raw","value_field":"primaries.indexing.index_total","order":"term","size":2000}},"total.search.query_total":{"terms_stats":{"key_field":"index.raw","value_field":"total.search.query_total","order":"term","size":2000}},"total.merges.total_size_in_bytes":{"terms_stats":{"key_field":"index.raw","value_field":"total.merges.total_size_in_bytes","order":"term","size":2000}},"total.fielddata.memory_size_in_bytes":{"terms_stats":{"key_field":"index.raw","value_field":"total.fielddata.memory_size_in_bytes","order":"term","size":2000}}}}]]
    at org.elasticsearch.search.SearchService.parseSource(SearchService.java:660)
    at org.elasticsearch.search.SearchService.createContext(SearchService.java:516)
    at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:488)
    at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:257)
    at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:206)
    at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:203)
    at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet [timestamp]: failed to find mapping for index.raw
    at org.elasticsearch.search.facet.termsstats.TermsStatsFacetParser.parse(TermsStatsFacetParser.java:126)
    at org.elasticsearch.search.facet.FacetParseElement.parse(FacetParseElement.java:93)
    at org.elasticsearch.search.SearchService.parseSource(SearchService.java:644)
...


Good, it works now.

it seems that some state regarding Marvel is kept in the production
cluster, and whatever it was got cleared when I reinstalled the plugin
there.

There is no state in the production cluster, just an in-memory boolean of
whether the local agent has already checked for the template. Every time
the agent wakes up, it checks again.

Was there anything interesting in the logs, perhaps?

