Corrupted ElasticSearch index?

Hi,

I've noticed some very disturbing Elasticsearch behaviour ...
My environment is:

1 Logstash (1.3.2) instance (plus Redis to buffer some data), 1 Elasticsearch
(0.90.10) node, and Kibana,

which together process about 7,000,000 records per day.
Everything worked fine on our test environment, until we ran some tests
for a longer period (about 15 days).

After that time, Kibana was unable to show any data.
I did some investigation, and it looks like some of the indexes (for 3 days,
to be exact) are corrupted.
Now every query from Kibana that uses those corrupted indexes fails.

Errors read from elasticsearch logs:

  • org.elasticsearch.search.facet.FacetPhaseExecutionException:
    Facet [terms]: failed to find mapping for [Name] (and a couple of other
    columns)
  • org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet [0]:
    (key) field [@timestamp] not found

... generally, all queries end with those errors.

When elasticsearch is started we find something like this:

[2014-02-07 15:02:08,147][WARN ][transport.netty ] [Name] Message
not fully read (request) for [243445] and action
[cluster/nodeIndexCreated], resetting
[2014-02-07 15:02:08,147][WARN ][transport.netty ] [Name] Message
not fully read (request) for [249943] and action
[cluster/nodeIndexCreated], resetting
[2014-02-07 15:02:08,147][WARN ][transport.netty ] [Name] Message
not fully read (request) for [246740] and action
[cluster/nodeIndexCreated], resetting

And a few observations:

  1. When using the elasticsearch-head plugin to query records 'manually',
     I can see only the Elasticsearch columns (_index, _type, _id, _score).
     But when I 'randomly' select documents and view their raw JSON, they
     look OK.

  2. When I try to process the same data again, everything is OK.

Is it possible that some corrupted data found its way into Elasticsearch and
now the whole index is broken?
Can this be fixed? Reindexed or something?
This data is very important and can't be lost ...

Best Regards,
Karol
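A quick way to check whether a daily index still has its field mappings
(rather than eyeballing elasticsearch-head) is to fetch `/<index>/_mapping`
and compare the fields against what you expect. A minimal sketch, where the
index name, type name, and field names are placeholder assumptions:

```python
def missing_fields(mapping, type_name, expected):
    """Return expected fields absent from an index mapping dict.

    `mapping` is the parsed JSON body of GET /<index>/_mapping
    (in 0.90.x the body is keyed by type name; the exact envelope
    varies by ES version, so adjust the lookup to what you see).
    """
    props = mapping.get(type_name, {}).get("properties", {})
    return sorted(f for f in expected if f not in props)

# A broken index keeps only a few default fields:
broken = {"logs": {"properties": {"@version": {"type": "string"}}}}
print(missing_fields(broken, "logs", ["@timestamp", "message"]))
# -> ['@timestamp', 'message']

# Against a live node (URL and index name are placeholders):
#   import json, urllib.request
#   body = json.load(urllib.request.urlopen(
#       "http://localhost:9200/logstash-2014.02.07/_mapping"))
#   print(missing_fields(body, "logs", ["@timestamp", "message"]))
```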

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/426a8411-bf03-42d3-9b77-1e45dace98ff%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Really, no clue?


Your error logs seem to indicate some kind of version mismatch. Is it
possible for you to test LS 1.3.2 against ES 0.90.9, take a sample of
raw logs from those 3 days, and run them through to see if those 3 days
work in Kibana? The reason I ask is that LS 1.3.2 (specifically the
elasticsearch output) was built using the binaries from ES 0.90.9.

Thanks.


Sorry for the delay.

Looks like you were right: after downgrading ES to 0.90.9, I couldn't
reproduce the issue in the same way.

Unfortunately, I found some other problems, and one looks like a blocker ...

After a powerdown of the whole ES cluster, ES just started replying 'no
mapping for ... ' to each request.



Hi Bizzorama,

I had a similar problem with the same configuration as the one you gave.
ES had been running since the 11th of February and was fed every day at
6:00 AM by 2 LS instances. Everything worked well (Kibana reports were
correct and there was no data loss) until I restarted ES yesterday :(
Among 30 indices (1 per day), 4 were unusable, and the data in the Kibana
reports for the related period was unavailable (same
org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet [0]:
(key) field [@timestamp] not found).

Can you confirm that when you downgraded ES to 0.90.9 you got your data
back (i.e. you were able to show your data in Kibana reports)?

I will try to downgrade the ES version as you suggested and will let you
know more.

Thanks for your answer.


Hi,

It turned out that it was not a problem with the ES version (we tested both
0.90.10 and 0.90.9) but just an ES bug ...
After restarting the machine, or even just the service, indices got broken ...
we found out that this was caused by missing mappings.
We observed that the broken indices had corrupted mappings (only some
default fields remained).
You can check this by calling: http://es_address:9200/indexName/_mapping

Our mappings were dynamic (not set manually, just worked out by ES as
the records came in).

The solution was to add a static mapping file like the one described here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-conf-mappings.html
(we added the default one).

I just copied the mappings from a healthy index, made some changes, turned
them into a mapping file, and copied it to the ES server.

Now everything works just fine.

Regards,
Karol
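To illustrate the workaround: per the linked page, static mapping files go
under $ES_HOME/config/mappings/_default/ and are applied to every newly
created index. A rough sketch of such a file (the file name, `_default_`
type, and field list here are assumptions; the real field list should be
copied from a healthy index's _mapping output, as Karol describes):

```json
{
  "_default_": {
    "properties": {
      "@timestamp": { "type": "date", "format": "dateOptionalTime" },
      "message":    { "type": "string" },
      "Name":       { "type": "string", "index": "not_analyzed" }
    }
  }
}
```

Kibana's facets need @timestamp mapped as a date, so pinning it statically
would explain why this prevents the 'field [@timestamp] not found' error
after a restart.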


Are you sure you didn't run out of disk space or file handles at some
stage, or have an OOM exception?


No, when things are running everything is OK; indexes break during a
restart/powerdown.
17-03-2014 13:11, "Clinton Gormley" clint@traveljury.com napisał(a):

Are you sure you didn't run out of disk space or file handles at some
stage, or have an OOM exception?

On 16 March 2014 16:37, bizzorama bizzorama@gmail.com wrote:

Hi,

it turned out that it was not a problem of ES version (we tested on both
0.90.10 and 0.90.9) but just a ES bug ...
after restarting pc or even just the service indices got broken ... we
found out that this was the case of missing mappings.
We observed that broken indices had their mappings corrupted (only some
default fields were observed).
You can check this by calling: http:\es_address:9200\indexName_mapping

Our mappings were dynamic (not set manually - just figured out by ES when
the records were incoming).

The solution was to add a static mapping file like the one described here:

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-conf-mappings.html(we added the default one).

I just copied mappings from a healty index, made some changes, turned it
to a mapping file and copied to the ES server.

Now everything works just fine.

Regards,
Karol

W dniu niedziela, 16 marca 2014 14:54:00 UTC+1 użytkownik Mac Jouz
napisał:

Hi Bizzorama,

I had a similar problem with the same configuration than you gave.
ES ran since the 11th of February and was fed every day at 6:00 AM by 2
LS.
Everything worked well (kibana reports were correct and no data loss)
until
I restarted yesterday ES :frowning:
Among 30 index (1 per day), 4 were unusable and data within kibana report
for the related period were unavailable (same org.elasticsearch.search.
facet.FacetPhaseExecutionException: Facet[0]: (key) field [@timestamp]
not found)

Do you confirm when you downgraded ES to 0.90.9 that you retrieved your
data
(i.e you was able to show your data in kibana reports) ?

I will try to downgrade ES version as you suggested and will let you know
more

Thanks for your answer

Sorry for the delay.

Looks like you were right, after downgrading ES to 0.90.9 i couldn't
reproduce the issue in such manner.

Unfortunately, I found some other problems, and one looks like a blocker
....

After whole ES cluster powerdown, ES just started replaying 'no mapping
for ... ' for each request.

W dniu czwartek, 20 lutego 2014 16:42:20 UTC+1 użytkownik Binh Ly
napisał:

Your error logs seem to indicate some kind of version mismatch. Is it
possible for you to test LS 1.3.2 against ES 0.90.9 and take a sample of
raw logs from those 3 days and test them through to see if those 3 days
work in Kibana? The reason I ask is because LS 1.3.2 (specifically the
elasticsearch output) was built using the binaries from ES 0.90.9.

Thanks.

Le mardi 11 février 2014 13:18:01 UTC+1, bizzorama a écrit :

Hi,

I've noticed a very disturbing ElasticSearch behaviour ...
my environment is:

1 logstash (1.3.2) (+ redis to store some data) + 1 elasticsearch
(0.90.10) + kibana

which process about 7 000 000 records per day,
everything worked fine on our test environment, untill we run some
tests for a longer period (about 15 days).

After that time, kibana was unable to show any data.
I did some investigation and it looks like some of the indexes (for 3
days to be exact) seem to be corrupted.
Now every query from kibana, using those corrupted indexes - failes.

Errors read from elasticsearch logs:

  • org.elasticsearch.search.facet.FacetPhaseExecutionException:
    Facet[terms]: failed to find mapping for Name* ... a couple of other
    columns*
  • org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet
    [0]: (key) field [@timestamp] not found

... generaly all queries end with those errors

When elasticsearch is started we find something like this:


--
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/elasticsearch/2f25e8f8-7c62-4474-a7a4-ee64a433eeca%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Hi,

Thanks Karol, changing the ES version does not change the problem, indeed.

Two complementary questions, if I may:

  • You wrote that you copied the mapping file to the ES config location; did
    you try a way to do so dynamically with a REST call?
  • Otherwise, did you apply the modification to the specific "corrupted"
    index, or did you copy the mapping file into the default ES config
    location (that is to say, valid for all indices)?

Regards

José

On Sunday, March 16, 2014 at 16:37:19 UTC+1, bizzorama wrote:

Hi,

it turned out that it was not a problem of the ES version (we tested on
both 0.90.10 and 0.90.9) but just an ES bug ...
after restarting the PC, or even just the service, indices got broken ...
we found out that this was a case of missing mappings.
We observed that broken indices had their mappings corrupted (only some
default fields remained).
You can check this by calling: http://es_address:9200/indexName/_mapping
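To make the symptom concrete, here is a small, hypothetical sketch (index,
type, and field names are made up, and the response shapes are simplified)
of how the `_mapping` output of a broken index can be diffed against a
healthy one to list the fields that went missing:

```python
import json

# Simplified, made-up excerpts of what GET /indexName/_mapping might return.
# On a broken index only a few default fields survived; a healthy index
# still lists every dynamically-mapped field.
broken = json.loads('{"logs": {"properties": {"message": {"type": "string"}}}}')
healthy = json.loads(
    '{"logs": {"properties": {'
    '"message": {"type": "string"}, '
    '"@timestamp": {"type": "date"}, '
    '"Name": {"type": "string"}}}}'
)

def mapped_fields(mapping, doc_type):
    """Field names the mapping still knows about for the given type."""
    return set(mapping.get(doc_type, {}).get("properties", {}))

# Fields the healthy index has that the broken one lost:
missing = mapped_fields(healthy, "logs") - mapped_fields(broken, "logs")
print(sorted(missing))  # -> ['@timestamp', 'Name']
```

In our case the missing fields matched exactly the ones named in the
FacetPhaseExecutionException errors.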

Our mappings were dynamic (not set manually, just inferred by ES as the
records came in).

The solution was to add a static mapping file like the one described here:

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-conf-mappings.html (we added the default one).

I just copied the mappings from a healthy index, made some changes, turned
them into a mapping file, and copied it to the ES server.
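A minimal sketch of such a config-based mapping file (with placeholder field
names taken from the errors above, not our actual mapping; the real content
was copied from a healthy index's `_mapping` output). Per the page linked
above, a file placed under `config/mappings/_default/` (one JSON file per
type, e.g. `logs.json`) is applied to every newly created index:

```json
{
  "logs": {
    "properties": {
      "@timestamp": { "type": "date" },
      "Name":       { "type": "string", "index": "not_analyzed" },
      "message":    { "type": "string" }
    }
  }
}
```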

Now everything works just fine.

Regards,
Karol

On Sunday, March 16, 2014 at 14:54:00 UTC+1, Mac Jouz wrote:

Hi Bizzorama,

I had a similar problem with the same configuration as the one you gave.
ES had been running since the 11th of February and was fed every day at
6:00 AM by 2 LS instances.
Everything worked well (kibana reports were correct and there was no data
loss) until I restarted ES yesterday :frowning:
Among 30 indices (1 per day), 4 were unusable and the data in the kibana
reports for the related period was unavailable (same
org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet [0]:
(key) field [@timestamp] not found)

Do you confirm that when you downgraded ES to 0.90.9 you retrieved your
data (i.e. you were able to show your data in kibana reports)?

I will try to downgrade the ES version as you suggested and will let you
know more

Thanks for your answer

Sorry for the delay.

Looks like you were right; after downgrading ES to 0.90.9 I couldn't
reproduce the issue in that manner.

Unfortunately, I found some other problems, and one looks like a blocker
...

After a powerdown of the whole ES cluster, ES just started replying 'no
mapping for ... ' to each request.

On Thursday, February 20, 2014 at 16:42:20 UTC+1, Binh Ly wrote:

Your error logs seem to indicate some kind of version mismatch. Is it
possible for you to test LS 1.3.2 against ES 0.90.9, take a sample of raw
logs from those 3 days, and run them through to see if those 3 days work
in Kibana? The reason I ask is that LS 1.3.2 (specifically the
elasticsearch output) was built using the binaries from ES 0.90.9.

Thanks.


Same symptom for me, neither OOM, nor full disk space, only an ES
restart...

On Monday, March 17, 2014 at 14:00:11 UTC+1, bizzorama wrote:

No, while things are running everything is OK; indices break during a
restart/powerdown.
On 17-03-2014 13:11, "Clinton Gormley" <cl...@traveljury.com> wrote:

Are you sure you didn't run out of disk space or file handles at some
stage, or have an OOM exception?


Would either of you be able to write up the steps to reproduce this and to
open an issue about it?

thanks


Hi, we tried both ways, but:

The first worked, but only as a temporary quickfix for the index (after a
powerdown it was lost again). Of course we used the REST interfaces to fix
the mappings that were already broken (we could not pump all the data in
again, so we had to fix it somehow).
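Roughly, the REST quickfix amounted to PUT-ing a mapping body (copied from
a healthy index) back onto the broken index. The sketch below only builds
and prints such a payload; the index, type, and field names are
hypothetical:

```python
import json

# Hypothetical names; the real mapping body was copied from a healthy index.
index_name = "logstash-2014.02.05"
doc_type = "logs"

mapping_body = {
    doc_type: {
        "properties": {
            "@timestamp": {"type": "date"},
            "Name": {"type": "string"},
        }
    }
}

# The actual call was along the lines of:
#   curl -XPUT "http://es_address:9200/logstash-2014.02.05/logs/_mapping" \
#        -d @mapping.json
payload = json.dumps(mapping_body)
print(payload)
```

As noted above, this only held until the next powerdown, which is why we
moved to the static mapping file.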

We applied the mapping file as the default (for all indices) to avoid the
problem in the future; we knew that all indices could be started with the
same mapping.
17-03-2014 17:56, "Mac Jouz" mac.jouz@gmail.com napisał(a):

Hi,

Thanks Karol, changing ES version does not change the problem indeed.

2 complementary questions if I may:

  • You wrote that you copied the mapping file on ES location, did you try a
    way to do so dynamically with a REST call ?
  • Otherwise did you apply the modification for the specific "corrupted"
    index or copy the mapping file in default config ES location (that is to
    say that it was valid for all index ?)

Regards

José

Le dimanche 16 mars 2014 16:37:19 UTC+1, bizzorama a écrit :

Hi,

it turned out that it was not a problem of ES version (we tested on both
0.90.10 and 0.90.9) but just a ES bug ...
after restarting pc or even just the service indices got broken ... we
found out that this was the case of missing mappings.
We observed that broken indices had their mappings corrupted (only some
default fields were observed).
You can check this by calling: http:\es_address:9200\indexName_mapping

Our mappings were dynamic (not set manually - just figured out by ES when
the records were incoming).

The solution was to add a static mapping file like the one described here:
http://www.elasticsearch.org/guide/en/elasticsearch/
reference/current/mapping-conf-mappings.html (we added the default one).

I just copied mappings from a healty index, made some changes, turned it
to a mapping file and copied to the ES server.

Now everything works just fine.

Regards,
Karol

W dniu niedziela, 16 marca 2014 14:54:00 UTC+1 użytkownik Mac Jouz
napisał:

Hi Bizzorama,

I had a similar problem with the same configuration than you gave.
ES ran since the 11th of February and was fed every day at 6:00 AM by 2
LS.
Everything worked well (kibana reports were correct and no data loss)
until
I restarted yesterday ES :frowning:
Among 30 index (1 per day), 4 were unusable and data within kibana report
for the related period were unavailable (same org.elasticsearch.search.
facet.FacetPhaseExecutionException: Facet[0]: (key) field [@timestamp]
not found)

Do you confirm when you downgraded ES to 0.90.9 that you retrieved your
data
(i.e you was able to show your data in kibana reports) ?

I will try to downgrade ES version as you suggested and will let you know
more

Thanks for your answer

Sorry for the delay.

Looks like you were right, after downgrading ES to 0.90.9 i couldn't
reproduce the issue in such manner.

Unfortunately, I found some other problems, and one looks like a blocker
....

After whole ES cluster powerdown, ES just started replaying 'no mapping
for ... ' for each request.

W dniu czwartek, 20 lutego 2014 16:42:20 UTC+1 użytkownik Binh Ly
napisał:

Your error logs seem to indicate some kind of version mismatch. Is it
possible for you to test LS 1.3.2 against ES 0.90.9 and take a sample of
raw logs from those 3 days and test them through to see if those 3 days
work in Kibana? The reason I ask is because LS 1.3.2 (specifically the
elasticsearch output) was built using the binaries from ES 0.90.9.

Thanks.


--
You received this message because you are subscribed to a topic in the
Google Groups "elasticsearch" group.
To unsubscribe from this topic, visit
https://groups.google.com/d/topic/elasticsearch/7ZwB6SNFkDc/unsubscribe.
To unsubscribe from this group and all its topics, send an email to
elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/elasticsearch/6c861b61-72c1-4855-b8e5-d3b55afcff92%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Finally I fixed the broken index dynamically, but taking your answer into
account I'm going to add mapping files to avoid future problems.

Thanks Karol

Regards

José

On Monday, 17 March 2014 at 19:25:31 UTC+1, bizzorama wrote:

Hi, we tried both ways:
The first worked, but only as a temporary quickfix for the index (after a
powerdown it was lost again). Of course, we used the REST interfaces to
fix the mappings that were already broken (we could not pump all the data
in again, so we had to fix it somehow).

We applied the mapping file as the default (for all indexes) to avoid the
problem in the future; we knew that all our indexes could be started with
the same mapping.
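A minimal sketch of the kind of Put Mapping call such a REST fix amounts to (the index, type, and field names here are hypothetical, and the request is only composed and printed, not actually sent):

```python
import json

# Hypothetical mapping recovered from a healthy index; the real one had
# many more fields.
mapping_body = {
    "controlserver": {
        "properties": {
            "@timestamp": {"type": "date", "format": "dateOptionalTime"},
            "message": {"type": "string"},
        }
    }
}

# The HTTP request this would translate to (not sent here):
#   PUT http://localhost:9200/logstash-2014.02.07/controlserver/_mapping
url = "http://localhost:9200/logstash-2014.02.07/controlserver/_mapping"
payload = json.dumps(mapping_body)
print("PUT", url)
print(payload)
```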
17-03-2014 17:56, "Mac Jouz" <mac....@gmail.com> wrote:

Hi,

Thanks Karol; indeed, changing the ES version does not fix the problem.

2 complementary questions if I may:

  • You wrote that you copied the mapping file to the ES location; did you
    try a way to do so dynamically with a REST call?
  • Otherwise, did you apply the modification to the specific "corrupted"
    index, or copy the mapping file into the default ES config location
    (that is to say, valid for all indexes)?

Regards

José


Hi,

I am a little late, but maybe it brings some closure... I believe you
ran into this: https://github.com/elasticsearch/elasticsearch/pull/5623
The symptoms of this bug are exactly what you describe.

Britta


Le lundi 17 mars 2014 19:25:31 UTC+1, bizzorama a écrit :

Hi, we tried both ways but:
First worked but was temporary and worked as index quickfix (after
powerdown it was lost again), of course we used the rest interfaces to fix
mappings that were already broken (we could not pump all data again so we
had to fix it somehow).

We applied the mapping file as default (for all indexes) to avoid the
problem in future, we knew that all indexes can be started with same
mapping.

17-03-2014 17:56, "Mac Jouz" mac....@gmail.com napisał(a):

Hi,

Thanks Karol, changing ES version does not change the problem indeed.

2 complementary questions if I may:

  • You wrote that you copied the mapping file on ES location, did you try
    a way to do so dynamically with a REST call ?
  • Otherwise did you apply the modification for the specific "corrupted"
    index or copy the mapping file in default config ES location (that is to say
    that it was valid for all index ?)

Regards

José

Le dimanche 16 mars 2014 16:37:19 UTC+1, bizzorama a écrit :

Hi,

it turned out that it was not a problem of ES version (we tested on both
0.90.10 and 0.90.9) but just a ES bug ...
after restarting pc or even just the service indices got broken ... we
found out that this was the case of missing mappings.
We observed that broken indices had their mappings corrupted (only some
default fields were observed).
You can check this by calling: http:\es_address:9200\indexName_mapping

Our mappings were dynamic (not set manually - just figured out by ES
when the records were incoming).

The solution was to add a static mapping file like the one described
here:

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-conf-mappings.html
(we added the default one).

I just copied mappings from a healty index, made some changes, turned it
to a mapping file and copied to the ES server.

Now everything works just fine.

Regards,
Karol

W dniu niedziela, 16 marca 2014 14:54:00 UTC+1 użytkownik Mac Jouz
napisał:

Hi Bizzorama,

I had a similar problem with the same configuration than you gave.
ES ran since the 11th of February and was fed every day at 6:00 AM by 2
LS.
Everything worked well (kibana reports were correct and no data loss)
until
I restarted yesterday ES :frowning:
Among 30 index (1 per day), 4 were unusable and data within kibana
report
for the related period were unavailable (same
org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet[0]: (key)
field [@timestamp] not found)

Do you confirm when you downgraded ES to 0.90.9 that you retrieved your
data
(i.e you was able to show your data in kibana reports) ?

I will try to downgrade ES version as you suggested and will let you
know
more

Thanks for your answer

Sorry for the delay.

Looks like you were right, after downgrading ES to 0.90.9 i couldn't
reproduce the issue in such manner.

Unfortunately, I found some other problems, and one looks like a
blocker ....

After whole ES cluster powerdown, ES just started replaying 'no mapping
for ... ' for each request.

W dniu czwartek, 20 lutego 2014 16:42:20 UTC+1 użytkownik Binh Ly
napisał:

Your error logs seem to indicate some kind of version mismatch. Is it
possible for you to test LS 1.3.2 against ES 0.90.9 and take a sample of raw
logs from those 3 days and test them through to see if those 3 days work in
Kibana? The reason I ask is because LS 1.3.2 (specifically the elasticsearch
output) was built using the binaries from ES 0.90.9.

Thanks.

Le mardi 11 février 2014 13:18:01 UTC+1, bizzorama a écrit :

Hi,

I've noticed a very disturbing ElasticSearch behaviour ...
my environment is:

1 logstash (1.3.2) (+ redis to store some data) + 1 elasticsearch
(0.90.10) + kibana

which process about 7 000 000 records per day,
everything worked fine on our test environment, untill we run some
tests for a longer period (about 15 days).

After that time, kibana was unable to show any data.
I did some investigation and it looks like some of the indexes (for 3
days to be exact) seem to be corrupted.
Now every query from kibana, using those corrupted indexes - failes.

Errors read from elasticsearch logs:

  • org.elasticsearch.search.facet.FacetPhaseExecutionException:
    Facet[terms]: failed to find mapping for Name ... a couple of other columns
  • org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet
    [0]: (key) field [@timestamp] not found

... generaly all queries end with those errors

When elasticsearch is started we find something like this:

[2014-02-07 15:02:08,147][WARN ][transport.netty ] [Name]
Message not fully read (request) for [243445] and action
[cluster/nodeIndexCreated], resetting
[2014-02-07 15:02:08,147][WARN ][transport.netty ] [Name]
Message not fully read (request) for [249943] and action
[cluster/nodeIndexCreated], resetting
[2014-02-07 15:02:08,147][WARN ][transport.netty ] [Name]
Message not fully read (request) for [246740] and action
[cluster/nodeIndexCreated], resetting

And a little observations:

  1. When using elasticsearch-head plugin, when querying records
    'manually', i can see only elasticsearch columns (_index, _type, _id,
    _score).
    But when I 'randomly' select columns and overview their raw json
    they look ok.

2, When I tried to process same data again - everything is ok

Is it possible that some corrupted data found its way to elasticsearch
and now whole index is broken ?
Can this be fixed ? reindexed or sth ?
This data is very importand and can't be lost ...

Best Regards,
Karol

Le mardi 11 février 2014 13:18:01 UTC+1, bizzorama a écrit :

Hi,

I've noticed a very disturbing ElasticSearch behaviour ...
my environment is:

1 logstash (1.3.2) (+ redis to store some data) + 1 elasticsearch
(0.90.10) + kibana

which process about 7 000 000 records per day,
everything worked fine on our test environment, untill we run some
tests for a longer period (about 15 days).

After that time, kibana was unable to show any data.
I did some investigation and it looks like some of the indexes (for 3
days to be exact) seem to be corrupted.
Now every query from kibana, using those corrupted indexes - failes.

Errors read from elasticsearch logs:

  • org.elasticsearch.search.facet.FacetPhaseExecutionException:
    Facet[terms]: failed to find mapping for Name ... a couple of other columns
  • org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet
    [0]: (key) field [@timestamp] not found

... generaly all queries end with those errors

When elasticsearch is started we find something like this:

[2014-02-07 15:02:08,147][WARN ][transport.netty ] [Name]
Message not fully read (request) for [243445] and action
[cluster/nodeIndexCreated], resetting
[2014-02-07 15:02:08,147][WARN ][transport.netty ] [Name]
Message not fully read (request) for [249943] and action
[cluster/nodeIndexCreated], resetting
[2014-02-07 15:02:08,147][WARN ][transport.netty ] [Name]
Message not fully read (request) for [246740] and action
[cluster/nodeIndexCreated], resetting

And a little observations:

  1. When using elasticsearch-head plugin, when querying records
    'manually', i can see only elasticsearch columns (_index, _type, _id,
    _score).
    But when I 'randomly' select columns and overview their raw json
    they look ok.

2, When I tried to process same data again - everything is ok

Is it possible that some corrupted data found its way to elasticsearch
and now whole index is broken ?
Can this be fixed ? reindexed or sth ?
This data is very importand and can't be lost ...

Best Regards,
Karol

--
You received this message because you are subscribed to a topic in the
Google Groups "elasticsearch" group.
To unsubscribe from this topic, visit
https://groups.google.com/d/topic/elasticsearch/7ZwB6SNFkDc/unsubscribe.
To unsubscribe from this group and all its topics, send an email to
elasticsearc...@googlegroups.com.

To view this discussion on the web visit
https://groups.google.com/d/msgid/elasticsearch/6c861b61-72c1-4855-b8e5-d3b55afcff92%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

--
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/elasticsearch/2c505b6c-5aa2-4fac-963c-82c6a2bda83d%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/CALhJbBiBS_EqrdQuG%2BFb2%3DvDkRZOoy7dy4iL0aWkCGgQDjOwFw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.

Hi,
I am facing the same issue when I include "Histogram" in Kibana and set
@timestamp in the time field. Here is the debug message I am getting:

org.elasticsearch.search.SearchParseException: [kibana-int][3]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"facets":{"0":{"date_histogram":{"field":"@timestamp","interval":"1h"},"global":true,"facet_filter":{"fquery":{"query":{"filtered":{"query":{"query_string":{"query":"ERROR"}},"filter":{"bool":{"must":[{"match_all":{}}]}}}}}}},"1":{"date_histogram":{"field":"@timestamp","interval":"1h"},"global":true,"facet_filter":{"fquery":{"query":{"filtered":{"query":{"query_string":{"query":"WARN"}},"filter":{"bool":{"must":[{"match_all":{}}]}}}}}}},"2":{"date_histogram":{"field":"@timestamp","interval":"1h"},"global":true,"facet_filter":{"fquery":{"query":{"filtered":{"query":{"query_string":{"query":""}},"filter":{"bool":{"must":[{"match_all":{}}]}}}}}}}},"size":0}]]
    at org.elasticsearch.search.SearchService.parseSource(SearchService.java:634)
    at org.elasticsearch.search.SearchService.createContext(SearchService.java:507)
    at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:480)
    at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:252)
    at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:202)
    at org.elasticsearch.action.search.type.TransportSearchCountAction$AsyncAction.sendExecuteFirstPhase(TransportSearchCountAction.java:70)
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:216)
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:203)
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:186)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
Caused by: org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet [0]: (key) field [@timestamp] not found
    at org.elasticsearch.search.facet.datehistogram.DateHistogramFacetParser.parse(DateHistogramFacetParser.java:160)
    at org.elasticsearch.search.facet.FacetParseElement.parse(FacetParseElement.java:93)
    at org.elasticsearch.search.SearchService.parseSource(SearchService.java:622)
    ... 11 more

Here is my mapping file content

{
  "logstash-2014.04.29" : {
    "mappings" : {
      "controlserver" : {
        "properties" : {
          "@timestamp" : { "type" : "date", "format" : "dateOptionalTime" },
          "@version" : { "type" : "string" },
          "class" : { "type" : "string" },
          "file" : { "type" : "string" },
          "host" : { "type" : "string" },
          "message" : { "type" : "string" },
          "offset" : { "type" : "string" },
          "severity" : { "type" : "string" },
          "tags" : { "type" : "string" },
          "thread" : { "type" : "string" },
          "type" : { "type" : "string" }
        }
      }
    }
  }
}

I do see that @timestamp is defined in the mapping file. May I know why I am getting this issue? I am using ElasticSearch 1.1.0; I tested with 0.90.9 as well and still get the same issue.
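For what it's worth, the trace above names [kibana-int] (Kibana's own dashboard-storage index), which would not define @timestamp even when the logstash indexes do. A small sketch of checking which indexes actually define a field, assuming a GET /_mapping response shaped like the mapping file above (index and type names here are illustrative):

```python
# Hypothetical GET /_mapping output, trimmed to one field per index.
all_mappings = {
    "logstash-2014.04.29": {
        "mappings": {
            "controlserver": {"properties": {"@timestamp": {"type": "date"}}}
        }
    },
    "kibana-int": {
        "mappings": {
            "dashboard": {"properties": {"title": {"type": "string"}}}
        }
    },
}

def indices_defining(field, mappings):
    """Return the indexes whose mappings define the given field."""
    hits = []
    for index, body in mappings.items():
        for type_mapping in body.get("mappings", {}).values():
            if field in type_mapping.get("properties", {}):
                hits.append(index)
                break
    return hits

print(indices_defining("@timestamp", all_mappings))
```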

On Monday, March 31, 2014 9:33:59 AM UTC-7, Britta Weber wrote:

Hi,

I am a little late but maybe it brings some closure...I believe you
ran into this: https://github.com/elasticsearch/elasticsearch/pull/5623
The symptoms for this bug are exactly what you describe.

Britta

On Mon, Mar 17, 2014 at 10:07 PM, Mac Jouz <mac....@gmail.com<javascript:>>
wrote:

Finally I fixed dynamically the broken index but taking account your
answer
I'm going to add files to avoid future problems

Thanks Karol

Regards

José

Le lundi 17 mars 2014 19:25:31 UTC+1, bizzorama a écrit :

Hi, we tried both ways but:
First worked but was temporary and worked as index quickfix (after
powerdown it was lost again), of course we used the rest interfaces to
fix

mappings that were already broken (we could not pump all data again so
we

had to fix it somehow).

We applied the mapping file as default (for all indexes) to avoid the
problem in future, we knew that all indexes can be started with same
mapping.

17-03-2014 17:56, "Mac Jouz" mac....@gmail.com napisał(a):

Hi,

Thanks Karol, changing ES version does not change the problem indeed.

2 complementary questions if I may:

  • You wrote that you copied the mapping file on ES location, did you
    try

a way to do so dynamically with a REST call ?

  • Otherwise did you apply the modification for the specific
    "corrupted"

index or copy the mapping file in default config ES location (that is
to say

that it was valid for all index ?)

Regards

José

Le dimanche 16 mars 2014 16:37:19 UTC+1, bizzorama a écrit :

Hi,

it turned out that it was not a problem of ES version (we tested on
both

0.90.10 and 0.90.9) but just a ES bug ...
after restarting pc or even just the service indices got broken ...
we

found out that this was the case of missing mappings.
We observed that broken indices had their mappings corrupted (only
some

default fields were observed).
You can check this by calling:
http:\es_address:9200\indexName_mapping

Our mappings were dynamic (not set manually - just figured out by ES
when the records were incoming).

The solution was to add a static mapping file like the one described
here:

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-conf-mappings.html

(we added the default one).

I just copied mappings from a healty index, made some changes, turned
it

to a mapping file and copied to the ES server.

Now everything works just fine.

Regards,
Karol

W dniu niedziela, 16 marca 2014 14:54:00 UTC+1 użytkownik Mac Jouz
napisał:

Hi Bizzorama,

I had a similar problem with the same configuration than you gave.
ES ran since the 11th of February and was fed every day at 6:00 AM
by 2

LS.
Everything worked well (kibana reports were correct and no data
loss)

until
I restarted yesterday ES :frowning:
Among 30 index (1 per day), 4 were unusable and data within kibana
report
for the related period were unavailable (same
org.elasticsearch.search.facet.FacetPhaseExecutionException:
Facet[0]: (key)

field [@timestamp] not found)

Do you confirm when you downgraded ES to 0.90.9 that you retrieved
your

data
(i.e you was able to show your data in kibana reports) ?

I will try to downgrade ES version as you suggested and will let you
know
more

Thanks for your answer

Sorry for the delay.

Looks like you were right, after downgrading ES to 0.90.9 i couldn't
reproduce the issue in such manner.

Unfortunately, I found some other problems, and one looks like a
blocker ....

After whole ES cluster powerdown, ES just started replaying 'no
mapping

for ... ' for each request.

W dniu czwartek, 20 lutego 2014 16:42:20 UTC+1 użytkownik Binh Ly
napisał:

Your error logs seem to indicate some kind of version mismatch. Is
it

possible for you to test LS 1.3.2 against ES 0.90.9 and take a
sample of raw

logs from those 3 days and test them through to see if those 3 days
work in

Kibana? The reason I ask is because LS 1.3.2 (specifically the
elasticsearch

output) was built using the binaries from ES 0.90.9.

Thanks.

Le mardi 11 février 2014 13:18:01 UTC+1, bizzorama a écrit :

Hi,

I've noticed a very disturbing ElasticSearch behaviour ...
my environment is:

1 logstash (1.3.2) (+ redis to store some data) + 1 elasticsearch (0.90.10) + kibana

which processes about 7,000,000 records per day.
Everything worked fine on our test environment, until we ran some tests for a longer period (about 15 days).

After that time, kibana was unable to show any data.
I did some investigation and it looks like some of the indexes (for 3 days, to be exact) seem to be corrupted.
Now every query from kibana using those corrupted indexes fails.

Errors read from elasticsearch logs:

  • org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet[terms]: failed to find mapping for Name ... and a couple of other columns
  • org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet [0]: (key) field [@timestamp] not found

... generally all queries end with those errors.

When elasticsearch is started we find something like this:

[2014-02-07 15:02:08,147][WARN ][transport.netty ] [Name] Message not fully read (request) for [243445] and action [cluster/nodeIndexCreated], resetting
[2014-02-07 15:02:08,147][WARN ][transport.netty ] [Name] Message not fully read (request) for [249943] and action [cluster/nodeIndexCreated], resetting
[2014-02-07 15:02:08,147][WARN ][transport.netty ] [Name] Message not fully read (request) for [246740] and action [cluster/nodeIndexCreated], resetting

And a few observations:

  1. When using the elasticsearch-head plugin and querying records 'manually', I can see only the elasticsearch columns (_index, _type, _id, _score). But when I 'randomly' select columns and overview their raw JSON, they look OK.
  2. When I tried to process the same data again, everything was OK.

Is it possible that some corrupted data found its way to elasticsearch and now the whole index is broken?
Can this be fixed? Reindexed or something?
This data is very important and can't be lost ...

Best Regards,
Karol


--
You received this message because you are subscribed to a topic in the
Google Groups "elasticsearch" group.
To unsubscribe from this topic, visit

https://groups.google.com/d/topic/elasticsearch/7ZwB6SNFkDc/unsubscribe.

To unsubscribe from this group and all its topics, send an email to
elasticsearc...@googlegroups.com.

To view this discussion on the web visit

https://groups.google.com/d/msgid/elasticsearch/6c861b61-72c1-4855-b8e5-d3b55afcff92%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.



Hi,
I am facing the same issue when I include "Histogram" in Kibana and set @timestamp in the time field. Here is the debug message I am getting:

org.elasticsearch.search.SearchParseException: [kibana-int][3]:
from[-1],size[-1]: Parse Failure [Failed to parse source
[{"facets":{"0":{"date_histogram":{"field":"@timestamp","interval":"1h"},"global":true,"facet_filter":{"fquery":{"query":{"filtered":{"query":{"query_string":{"query":"ERROR"}},"filter":{"bool":{"must":[{"match_all":{}}]}}}}}}},"1":{"date_histogram":{"field":"@timestamp","interval":"1h"},"global":true,"facet_filter":{"fquery":{"query":{"filtered":{"query":{"query_string":{"query":"WARN"}},"filter":{"bool":{"must":[{"match_all":{}}]}}}}}}},"2":{"date_histogram":{"field":"@timestamp","interval":"1h"},"global":true,"facet_filter":{"fquery":{"query":{"filtered":{"query":{"query_string":{"query":""}},"filter":{"bool":{"must":[{"match_all":{}}]}}}}}}}},"size":0}]]
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:634)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:507)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:480)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:252)
at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:202)
at org.elasticsearch.action.search.type.TransportSearchCountAction$AsyncAction.sendExecuteFirstPhase(TransportSearchCountAction.java:70)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:216)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:203)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:186)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet [0]: (key) field [@timestamp] not found
at org.elasticsearch.search.facet.datehistogram.DateHistogramFacetParser.parse(DateHistogramFacetParser.java:160)
at org.elasticsearch.search.facet.FacetParseElement.parse(FacetParseElement.java:93)
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:622)
... 11 more

Here is my mapping file content:

{
  "logstash-2014.04.29" : {
    "mappings" : {
      "X_Server" : {
        "properties" : {
          "@timestamp" : { "type" : "date", "format" : "dateOptionalTime" },
          "@version" : { "type" : "string" },
          "class" : { "type" : "string" },
          "file" : { "type" : "string" },
          "host" : { "type" : "string" },
          "message" : { "type" : "string" },
          "offset" : { "type" : "string" },
          "severity" : { "type" : "string" },
          "tags" : { "type" : "string" },
          "thread" : { "type" : "string" },
          "type" : { "type" : "string" }
        }
      }
    }
  }
}

I do see that @timestamp is defined in the mapping file. May I know why I am getting this issue? I am using ElasticSearch 1.1.0; I tested with 0.90.9 as well and still get the same issue.

On Monday, March 31, 2014 9:33:59 AM UTC-7, Britta Weber wrote:

Hi,

I am a little late, but maybe it brings some closure ... I believe you ran into this: https://github.com/elasticsearch/elasticsearch/pull/5623
The symptoms of this bug are exactly what you describe.

Britta

On Mon, Mar 17, 2014 at 10:07 PM, Mac Jouz <mac....@gmail.com> wrote:

Finally, I fixed the broken index dynamically, but taking your answer into account I'm going to add mapping files to avoid future problems.

Thanks Karol

Regards

José

On Monday, March 17, 2014 at 19:25:31 UTC+1, bizzorama wrote:

Hi, we tried both ways, but:
The first worked, but only temporarily, as an index quickfix (after a powerdown it was lost again). Of course we used the REST interfaces to fix mappings that were already broken (we could not pump all the data in again, so we had to fix it somehow).

We applied the mapping file as the default (for all indexes) to avoid the problem in the future; we knew that all indexes could be started with the same mapping.
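For reference, re-applying a lost mapping over the REST interface looked roughly like this on the 0.90.x put-mapping endpoint. This is a sketch, not the exact call from the thread: the host (es_address), index name, type name (`logs`) and field list below are all placeholder assumptions.

```shell
# Hypothetical payload: the mapping we want to restore on the broken index.
cat > /tmp/logs-mapping.json <<'EOF'
{
  "logs": {
    "properties": {
      "@timestamp": { "type": "date", "format": "dateOptionalTime" },
      "message":    { "type": "string" }
    }
  }
}
EOF

# 0.90.x-style put-mapping call; needs a live cluster, so it is left commented out:
# curl -XPUT 'http://es_address:9200/logstash-2014.02.07/logs/_mapping' \
#      -d @/tmp/logs-mapping.json

# Sanity-check that the payload is well-formed JSON before sending it:
python -c 'import json; json.load(open("/tmp/logs-mapping.json")); print("mapping JSON ok")'
```

Note that the type name has to match the document type already in the index, and this only restores a missing mapping; it does not re-map fields whose existing mapping conflicts.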

On 17-03-2014 17:56, "Mac Jouz" mac....@gmail.com wrote:

Hi,

Thanks Karol, changing the ES version does not change the problem indeed.

Two complementary questions if I may:

  • You wrote that you copied the mapping file to the ES location; did you try a way to do so dynamically with a REST call?
  • Otherwise, did you apply the modification to the specific "corrupted" index, or copy the mapping file to the default ES config location (that is to say, so that it was valid for all indexes)?

Regards

José

On Sunday, March 16, 2014 at 16:37:19 UTC+1, bizzorama wrote:

Hi,

it turned out that it was not a problem of the ES version (we tested on both 0.90.10 and 0.90.9) but just an ES bug ...
after restarting the PC, or even just the service, indices got broken ... we found out that this was a case of missing mappings.
We observed that broken indices had their mappings corrupted (only some default fields were left).
You can check this by calling:
http://es_address:9200/indexName/_mapping

Our mappings were dynamic (not set manually; just figured out by ES as the records were incoming).

The solution was to add a static mapping file like the one described here:

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-conf-mappings.html

(we added the default one).
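A minimal sketch of what such a default mapping file might contain, assuming the 0.90-era `_default_` mapping file mechanism described on the linked page (the `config/default-mapping.json` path is from memory of those docs, and the field list is illustrative, not the full Logstash schema):

```shell
# Illustrative default mapping, applied to every type in every newly
# created index via the _default_ mapping (fields are examples only):
cat > /tmp/default-mapping.json <<'EOF'
{
  "_default_": {
    "properties": {
      "@timestamp": { "type": "date", "format": "dateOptionalTime" },
      "@version":   { "type": "string" },
      "host":       { "type": "string" },
      "message":    { "type": "string" }
    }
  }
}
EOF

# It would then be copied into the ES config directory, e.g.:
# cp /tmp/default-mapping.json $ES_HOME/config/default-mapping.json

# Confirm the file parses and list the fields it defines:
python -c 'import json; d = json.load(open("/tmp/default-mapping.json")); print(sorted(d["_default_"]["properties"]))'
# → ['@timestamp', '@version', 'host', 'message']
```

With a static default in place, a wiped-out dynamic mapping no longer leaves new daily indices without the fields Kibana facets on.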

I just copied the mappings from a healthy index, made some changes, turned them into a mapping file and copied it to the ES server.

Now everything works just fine.

Regards,
Karol

On Sunday, March 16, 2014 at 14:54:00 UTC+1, Mac Jouz wrote:

Hi Bizzorama,

I had a similar problem with the same configuration as the one you gave.
ES had run since the 11th of February and was fed every day at 6:00 AM by 2 LS.
Everything worked well (kibana reports were correct and there was no data loss) until I restarted ES yesterday :frowning:
Among 30 indexes (1 per day), 4 were unusable, and data within the kibana report for the related period was unavailable (same org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet [0]: (key) field [@timestamp] not found).

Do you confirm that when you downgraded ES to 0.90.9 you retrieved your data (i.e. you were able to show your data in kibana reports)?

I will try to downgrade the ES version as you suggested and will let you know more.

Thanks for your answer

Sorry for the delay.

Looks like you were right; after downgrading ES to 0.90.9 I couldn't reproduce the issue in that manner.






Hi all,

This is an old post, but I had the same issue today.
It is because kibana searches in all indices by default, and kibana-int (where the kibana interface data is stored) has no timestamp field in it.

The error is gone after putting the correct index filter in the dashboard settings.

A.
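The fix above boils down to: only point time-filtered dashboards at indices whose mappings actually contain @timestamp. On a live cluster you would compare GET /_mapping output per index; the snippet below only sketches that check locally, with made-up mapping fragments (kibana-int lacking @timestamp, a logstash index having it; index and type names are illustrative).

```shell
python - <<'EOF'
# Made-up mapping fragments standing in for GET /<index>/_mapping responses.
mappings = {
    "kibana-int":          {"dashboard": {"properties": {"title": {"type": "string"}}}},
    "logstash-2014.04.29": {"X_Server":  {"properties": {"@timestamp": {"type": "date"}}}},
}

# Indices that are safe to use in a time-filtered Kibana dashboard:
safe = [index for index, types in mappings.items()
        if any("@timestamp" in t.get("properties", {}) for t in types.values())]
print(safe)  # → ['logstash-2014.04.29']
EOF
```

Restricting the dashboard's index pattern to logstash-* achieves the same thing without touching kibana-int.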

On Wednesday, April 30, 2014 at 07:53:38 UTC+2, Deepak Jha wrote:
