Default and Marvel indexes are throwing Java errors about @timestamp not existing!

Hi. For the last couple of days both my default and Marvel indexes have been very slow to bring up data. The default index takes 1.5 minutes to show data initially (because that is what I have set my Kibana request timeout value to), but the Marvel one never loads, and in the backend it just keeps scrolling the Java errors below.
I have also noticed lots of the same errors in the /var/log/elasticsearch/cluster01.log logs.
They all seem to point back to .marvel-es-data-1, .marvel-es-1-2016.07.02, .marvel-es-1-2016.07.01, and so on through all the indexes.

The Java error is:
[2016-07-02 23:28:03,507][DEBUG][action.fieldstats ] [kib01] [.marvel-es-1-2016.07.02][0], node[SHOVw2VORA2xEfnRTqJyBw], [P], v[9], s[STARTED], a[id=sAvWRtdtSx-LmNvKkSXPxw]: failed to execute [org.elasticsearch.action.fieldstats.FieldStatsRequest@3757c141]
RemoteTransportException[[els03][192.168.10.23:9300][indices:data/read/field_stats[s]]]; nested: IllegalArgumentException[field [@timestamp] doesn't exist];
Caused by: java.lang.IllegalArgumentException: field [@timestamp] doesn't exist
at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:166)
at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:54)
at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:282)
at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:278)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:300)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

I know I was in Kibana, under Settings, and clicked on the default and Marvel indexes and then clicked 'refresh'. I must have done this on both the default and Marvel indexes, which is what is throwing this error about @timestamp not existing: "Caused by: java.lang.IllegalArgumentException: field [@timestamp] doesn't exist"

Anybody know how to fix this? I am OK with deleting all the data in my ES if I have to.

Thanks.

Hey,

Where do you have that @timestamp configured? In Kibana? Can you try timestamp instead?

--Alex

Hi. I think that was the time field name?

I had to get my system back up urgently this afternoon, and the existing data in it was not that important, so I blew away all the indexes with this command:

curl -XDELETE 'http://localhost:9200/_all'

The system is running very fast again now for all indexes, in particular the default () and .marvel ones (I did have to redo my dashboards for all my indexes, but again no biggy).
I just noticed that my .marvel-es-1 index now uses 'timestamp', whereas before it was @timestamp.
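
For reference, this is how I checked which time field the Marvel indexes actually have in their mapping (a minimal check, assuming the default localhost:9200 endpoint; the field mapping API simply omits fields that don't exist):

# Ask for the mapping of both candidate fields across the Marvel data indexes;
# each index only returns the field it actually has in its mapping.
curl -XGET 'http://localhost:9200/.marvel-es-1-*/_mapping/field/timestamp,@timestamp?pretty'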

I'm still getting the Java error messages, but no slowdown?

[2016-07-04 21:25:53,246][DEBUG][action.fieldstats ] [kib02] [.kibana][0], node[SHOVw2VORA2xEfnRTqJyBw], [R], v[4], s[STARTED], a[id=DLRWo4zORF-j0G9ZKNmvzA]: failed to execute [org.elasticsearch.action.fieldstats.FieldStatsRequest@626eb6b]
RemoteTransportException[[els03][192.168.10.23:9300][indices:data/read/field_stats[s]]]; nested: IllegalArgumentException[field [@timestamp] doesn't exist];

Caused by: java.lang.IllegalArgumentException: field [@timestamp] doesn't exist

at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:166)
at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.shardOperation(TransportFieldStatsTransportAction.java:54)
at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:282)
at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:278)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:300)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Maybe they are normal and I just never noticed them before?

Hey,

I assume you have configured somewhere in Kibana that the field to look for dates in is the @timestamp field instead of the timestamp field, and this causes the exception in the logs: Kibana trying to look up field statistics for a field that does not exist.

This, however, should not be a performance issue. What can be a performance issue due to that broken lookup is that you may accidentally query all the data from Elasticsearch, even though you only want data from the last 30 minutes (this is just a wild assumption so far, but might be worth checking out).
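
If you want to confirm which indices actually lack that field, you could run the same kind of field stats lookup Kibana does yourself. A minimal sketch, assuming Elasticsearch is reachable on localhost:9200 and you want to check the Marvel data indices (adjust the index pattern to whatever you use):

# Per-index field stats for @timestamp; this is essentially the lookup Kibana performs.
# Indices whose mapping has no @timestamp field should surface the same
# "field [@timestamp] doesn't exist" error you see in the logs.
curl -XGET 'http://localhost:9200/.marvel-es-1-*/_field_stats?level=indices&fields=@timestamp&pretty'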

--Alex

Hey Alex,
I have multiple indexes, some that refer to @timestamp and others that have timestamp.
e.g.:
'' (the default index) has both.
.marvel-es-1- only has timestamp.
I have a heap of others for different indexes, and they all refer only to @timestamp.
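
For what it's worth, this is how I checked which time field each Kibana index pattern is configured with (assuming the index patterns are stored in the .kibana index, as in Kibana 4):

# List each saved index pattern and its configured time field.
# Patterns whose timeFieldName points at @timestamp, but whose indices only
# contain timestamp, would be the ones triggering the field_stats errors.
curl -XGET 'http://localhost:9200/.kibana/index-pattern/_search?pretty' -d '{
  "_source": ["title", "timeFieldName"],
  "size": 50
}'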

Is there anywhere in particular I should be looking to make these errors go away, or is it sometimes the case that you just can't get rid of every single error?

System speed is not bad so far after deleting everything yesterday, even though I have over:
Indices: 25
Shards: 226 (each index is replicated across 2 nodes and each has the default 5 shards).
Documents: 68,596,383