Custom format isn't supported

Howdy everyone,

I'm getting an exception in Elasticsearch when trying to view logs in Kibana. The index is reporting green in /_cat/indices and shard allocation looks good. I haven't had a problem with this particular index before, so I'm not sure what the issue might be or how to track it down.

[2016-02-05 09:14:32,407][INFO ][rest.suppressed ] /exchange*/_field_stats Params: {level=indices, index=exchange*}
java.lang.UnsupportedOperationException: custom format isn't supported
at org.elasticsearch.action.fieldstats.FieldStats$Text.valueOf(FieldStats.java:415)
at org.elasticsearch.action.fieldstats.FieldStats$Text.valueOf(FieldStats.java:379)
at org.elasticsearch.action.fieldstats.FieldStats.match(FieldStats.java:158)
at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.newResponse(TransportFieldStatsTransportAction.java:122)
at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.newResponse(TransportFieldStatsTransportAction.java:54)
at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction.finishHim(TransportBroadcastAction.java:229)
at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction.onOperation(TransportBroadcastAction.java:194)
at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction$1.handleResponse(TransportBroadcastAction.java:174)
at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction$1.handleResponse(TransportBroadcastAction.java:161)
at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:185)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:138)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

Kibana is reporting a similar issue:

Error: [unsupported_operation_exception] custom format isn't supported
at respond (http://di-logui-01:5601/bundles/kibana.bundle.js:76820:16)
at checkRespForFailure (http://di-logui-01:5601/bundles/kibana.bundle.js:76783:8)
at http://di-logui-01:5601/bundles/kibana.bundle.js:75401:8
at processQueue (http://di-logui-01:5601/bundles/commons.bundle.js:42358:29)
at http://di-logui-01:5601/bundles/commons.bundle.js:42374:28
at Scope.$eval (http://di-logui-01:5601/bundles/commons.bundle.js:43602:29)
at Scope.$digest (http://di-logui-01:5601/bundles/commons.bundle.js:43413:32)
at Scope.$apply (http://di-logui-01:5601/bundles/commons.bundle.js:43710:25)
at done (http://di-logui-01:5601/bundles/commons.bundle.js:38159:48)
at completeRequest (http://di-logui-01:5601/bundles/commons.bundle.js:38357:8)
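
For reference, the same endpoint can be hit directly to take Kibana out of the picture. A minimal sketch in Python (the localhost:9200 address and the @timestamp field name are assumptions on my part, adjust to your setup):

# Call the same /exchange*/_field_stats endpoint Kibana uses, without any
# index constraints, just to check whether the endpoint itself responds.
# Assumptions: Elasticsearch on localhost:9200, time field named @timestamp.
import requests

resp = requests.get(
    "http://localhost:9200/exchange*/_field_stats",
    params={"level": "indices", "fields": "@timestamp"},
)
print(resp.status_code)
print(resp.json())

If that succeeds while Kibana's request fails, the interesting part is probably in whatever extra parameters Kibana adds to its request.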

Seeing the same thing with Kibana 4.4 and Elasticsearch 2.2. It was working fine with a specific time period set; now, when I try to use the time picker, I see the error every time.

Same here: Elasticsearch 2.2, Kibana 4.4...

Could any of you capture the payload for the request that is returning a 500?
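
For example, you can copy the body of the failing _field_stats call from the browser dev tools (Network tab) and replay it outside Kibana, with something like the Python sketch below. The body shown here is only a placeholder for whatever your Kibana actually sends (the index pattern, field name, and timestamps are made up):

# Replay the field_stats request Kibana issues when the time picker is used.
# Replace `body` with the payload copied from the browser's Network tab;
# the index pattern, field name, and epoch_millis values below are placeholders.
import requests

body = {
    "fields": ["@timestamp"],
    "index_constraints": {
        "@timestamp": {
            "min_value": {"gte": 1454572800000, "format": "epoch_millis"},
            "max_value": {"lte": 1454659200000, "format": "epoch_millis"},
        }
    },
}

resp = requests.post(
    "http://localhost:9200/exchange*/_field_stats",
    params={"level": "indices"},
    json=body,
)
print(resp.status_code, resp.text)

The full response body from the 500 is the interesting part.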

Did you solve it?

So I was able to solve this by upgrading both ES and Kibana to the latest versions and making sure to clear my cache and reload my mappings in Kibana. Not sure if that is possible for others, but that's how I solved it.

Same error as others in Kibana:

Elasticsearch version: 2.2.2
Kibana version: 4.4.2

Error: [unsupported_operation_exception] custom format isn't supported
at respond (http://localhost:5601/bundles/kibana.bundle.js?v=9732:76349:16)
at checkRespForFailure (http://localhost:5601/bundles/kibana.bundle.js?v=9732:76312:8)
at http://localhost:5601/bundles/kibana.bundle.js?v=9732:74930:8
at processQueue (http://localhost:5601/bundles/commons.bundle.js?v=9732:42357:29)
at http://localhost:5601/bundles/commons.bundle.js?v=9732:42373:28
at Scope.$eval (http://localhost:5601/bundles/commons.bundle.js?v=9732:43601:29)
at Scope.$digest (http://localhost:5601/bundles/commons.bundle.js?v=9732:43412:32)
at Scope.$apply (http://localhost:5601/bundles/commons.bundle.js?v=9732:43709:25)
at done (http://localhost:5601/bundles/commons.bundle.js?v=9732:38158:48)
at completeRequest (http://localhost:5601/bundles/commons.bundle.js?v=9732:38356:8)

Same error as others in Kibana:

Elasticsearch / Logstash version: 2.3.1
Kibana version: 4.5.0

It is reproduced every time new data is added to the system (no new format, just the next day's data).

Though it is logged as a warning, my Kibana cannot be used unless I clean out all the data and restart it:
[2016-05-13 16:46:32,261][WARN ][rest.suppressed ] /logstash-*/_field_stats Params: {index=logstash-*, level=indices}
java.lang.UnsupportedOperationException: custom format isn't supported
at org.elasticsearch.action.fieldstats.FieldStats$Long.valueOf(FieldStats.java:272)
at org.elasticsearch.action.fieldstats.FieldStats$Long.valueOf(FieldStats.java:236)
at org.elasticsearch.action.fieldstats.FieldStats.match(FieldStats.java:176)
at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.newResponse(TransportFieldStatsTransportAction.java:122)
at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.newResponse(TransportFieldStatsTransportAction.java:54)
at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction.finishHim(TransportBroadcastAction.java:243)
at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction.onOperation(TransportBroadcastAction.java:208)
at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction$1.handleResponse(TransportBroadcastAction.java:188)
at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction$1.handleResponse(TransportBroadcastAction.java:175)
at org.elasticsearch.transport.TransportService$DirectResponseChannel.processResponse(TransportService.java:819)
at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:803)
at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:793)
at org.elasticsearch.transport.DelegatingTransportChannel.sendResponse(DelegatingTransportChannel.java:58)
at org.elasticsearch.transport.RequestHandlerRegistry$TransportChannelWrapper.sendResponse(RequestHandlerRegistry.java:134)
at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:282)
at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$ShardTransportHandler.messageReceived(TransportBroadcastAction.java:278)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
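
If it's useful to anyone: since the failure only seems to show up once a new daily index appears, it may be worth comparing how each index in the pattern maps the time field. A rough sketch in Python (the @timestamp field name, the logstash-* pattern, and localhost:9200 are assumptions):

# Print the mapping of the time field for every index matched by logstash-*,
# to spot an index where the field is mapped differently from the others.
# Assumptions: Elasticsearch on localhost:9200, time field named @timestamp.
import requests

resp = requests.get("http://localhost:9200/logstash-*/_mapping/field/@timestamp")
for index, info in sorted(resp.json().items()):
    for doc_type, fields in info.get("mappings", {}).items():
        mapping = fields.get("@timestamp", {}).get("mapping", {}).get("@timestamp", {})
        print(index, doc_type, mapping.get("type"), mapping.get("format"))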

I got the same error screen with the following configuration:
Elasticsearch version: 2.4.0
Kibana version: 4.6.1
Marvel/Marvel-agent version: 2.4.0

Did anyone else get this resolved without doing an upgrade?

Go to the directory optimize\bundles and, in the file kibana.bundle.js, go to line 76349:

if (errors[status]) {
  // err = new errors[status](parsedBody && parsedBody.error, errorMetadata);
  // (original line commented out, so no error object is created for known
  //  statuses and the field_stats failure no longer surfaces as a fatal error
  //  in the Kibana UI)
} else {
  err = new errors.Generic('unknown error', errorMetadata);
}