Hi Bizzorama,
I had a similar problem with the same configuration as the one you described.
ES had been running since the 11th of February and was fed every day at 6:00 AM by 2 LS instances.
Everything worked well (Kibana reports were correct and there was no data loss) until I restarted ES yesterday.
Among the 30 indexes (1 per day), 4 were unusable and the data in the Kibana reports for the related period was unavailable (same org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet[0]: (key) field [@timestamp] not found).
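To see which of the daily indexes actually lost the @timestamp mapping, something like the sketch below should work (my own rough script, not from this thread; it assumes the default localhost:9200 endpoint, logstash-* index names, and the index -> type -> properties response shape of the 0.90.x mapping API):

import json
from urllib.request import urlopen

ES = "http://localhost:9200"

def get_json(path):
    # Plain stdlib HTTP GET against the ES REST API (assumed endpoint).
    with urlopen(ES + path) as resp:
        return json.load(resp)

# 0.90.x returns the mappings keyed as index -> type -> {"properties": ...}.
mappings = get_json("/logstash-*/_mapping")

for index, types in sorted(mappings.items()):
    has_timestamp = any(
        "@timestamp" in (type_mapping.get("properties") or {})
        for type_mapping in types.values()
    )
    print(index, "ok" if has_timestamp else "MISSING @timestamp mapping")

Any index reported as missing the mapping should be one whose Kibana panels fail with the FacetPhaseExecutionException above.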
Can you confirm that when you downgraded ES to 0.90.9 you got your data back (i.e. you were able to see your data in the Kibana reports)?
I will try to downgrade the ES version as you suggested and will let you know more.
Thanks for your answer.
Sorry for the delay.
Looks like you were right: after downgrading ES to 0.90.9 I couldn't reproduce the issue in the same way.
Unfortunately, I found some other problems, and one looks like a blocker ...
After a whole ES cluster powerdown, ES just started replying 'no mapping for ... ' to each request.
On Thursday, 20 February 2014 at 16:42:20 UTC+1, Binh Ly wrote:
Your error logs seem to indicate some kind of version mismatch. Is it possible for you to test LS 1.3.2 against ES 0.90.9, take a sample of raw logs from those 3 days, and run them through to see if those 3 days work in Kibana? The reason I ask is that LS 1.3.2 (specifically the elasticsearch output) was built using the binaries from ES 0.90.9.
Thanks.
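If it helps, a quick way to spot a version mismatch is to list the version every node in the cluster reports; the LS elasticsearch output joins the cluster as a client node, so it shows up there too. A rough sketch (assuming the default localhost:9200 HTTP port; on 0.90.x the same information is also available under /_cluster/nodes):

import json
from urllib.request import urlopen

# Nodes info API: each entry carries the node name and the ES version it runs.
with urlopen("http://localhost:9200/_nodes") as resp:
    nodes = json.load(resp)["nodes"]

for node_id, info in nodes.items():
    print(info.get("name"), info.get("version"))

If the Logstash client node and the data node report different versions, that would be consistent with the 'Message not fully read' warnings you are seeing.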
On Tuesday, 11 February 2014 at 13:18:01 UTC+1, bizzorama wrote:
Hi,
I've noticed a very disturbing Elasticsearch behaviour ...
my environment is:
1 logstash (1.3.2) (+ redis to store some data) + 1 elasticsearch
(0.90.10) + kibana
which processes about 7,000,000 records per day.
Everything worked fine in our test environment until we ran tests for a longer period (about 15 days).
After that time, Kibana was unable to show any data.
I did some investigation, and it looks like some of the indexes (for 3 days, to be exact) are corrupted.
Now every query from Kibana that uses those corrupted indexes fails.
Errors read from elasticsearch logs:
- org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet[terms]: failed to find mapping for Name ... (and a couple of other columns)
- org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet [0]: (key) field [@timestamp] not found
... generally, all queries end with those errors
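The same failure should be reproducible outside Kibana by running the kind of date_histogram facet Kibana uses on @timestamp directly against one of the suspect indexes. A rough sketch (the index name below is just a placeholder):

import json
from urllib.error import HTTPError
from urllib.request import urlopen, Request

# The same kind of query Kibana 3 issues: a date_histogram facet on @timestamp.
query = {
    "query": {"match_all": {}},
    "facets": {"0": {"date_histogram": {"field": "@timestamp", "interval": "hour"}}},
    "size": 0,
}

req = Request(
    "http://localhost:9200/logstash-2014.02.07/_search",  # placeholder index name
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urlopen(req) as resp:
        print(json.dumps(json.load(resp), indent=2))
except HTTPError as err:
    # On a broken index this should return the FacetPhaseExecutionException body.
    print(err.code, err.read().decode("utf-8", "replace"))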
When Elasticsearch starts, we find entries like this in the log:
[2014-02-07 15:02:08,147][WARN ][transport.netty ] [Name] Message
not fully read (request) for [243445] and action
[cluster/nodeIndexCreated], resetting
[2014-02-07 15:02:08,147][WARN ][transport.netty ] [Name] Message
not fully read (request) for [249943] and action
[cluster/nodeIndexCreated], resetting
[2014-02-07 15:02:08,147][WARN ][transport.netty ] [Name] Message
not fully read (request) for [246740] and action
[cluster/nodeIndexCreated], resetting
And a few observations:
1. When using the elasticsearch-head plugin and querying records 'manually', I can see only the elasticsearch columns (_index, _type, _id, _score). But when I 'randomly' pick records and look at their raw JSON, they look OK.
2. When I tried to process the same data again, everything was OK.
Is it possible that some corrupted data found its way into Elasticsearch and now the whole index is broken?
Can this be fixed? Reindexed or something?
This data is very important and can't be lost ...
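One thing I'm considering, since the raw JSON of the documents still looks intact, is to copy the documents out of a broken index into a fresh index created up front with the correct mapping, using scan/scroll plus the bulk API. A rough sketch (index names are placeholders, and the destination index has to exist with the right mapping before running it):

import json
from urllib.request import urlopen, Request

ES = "http://localhost:9200"
SRC = "logstash-2014.02.07"           # broken index (placeholder name)
DST = "logstash-2014.02.07-restored"  # must already exist with the correct mapping

def post(path, body):
    # Small helper for POSTing raw bodies to the ES REST API.
    req = Request(ES + path, data=body.encode("utf-8"),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)

# Open a scan-type scroll over the whole source index (supported in 0.90.x).
scroll = post("/%s/_search?search_type=scan&scroll=5m&size=500" % SRC,
              json.dumps({"query": {"match_all": {}}}))
scroll_id = scroll["_scroll_id"]

while True:
    page = post("/_search/scroll?scroll=5m", scroll_id)
    hits = page["hits"]["hits"]
    if not hits:
        break
    scroll_id = page["_scroll_id"]
    # Re-index each hit into the destination, keeping _type and _id.
    lines = []
    for hit in hits:
        lines.append(json.dumps({"index": {"_index": DST,
                                           "_type": hit["_type"],
                                           "_id": hit["_id"]}}))
        lines.append(json.dumps(hit["_source"]))
    post("/_bulk", "\n".join(lines) + "\n")

print("done")

I haven't tried this on the corrupted indexes yet, so treat it as an idea rather than a proven fix.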
Best Regards,
Karol