Lost or corrupt Elasticsearch indices

Hi,
I have an ELK server running that monitors our production IHS (Apache) web servers. The ELK server was running well until today, when I think Elasticsearch ran out of space on the file system used to store the data. The installed versions are as follows:

# rpm -qa | grep -E "logstash|elasticsearch"
logstash-2.1.1-1.noarch
elasticsearch-2.1.1-1.noarch
# cat /etc/redhat-release
CentOS release 6.7 (Final)
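
To check whether the data file system really has filled up, a plain df against the Elasticsearch data path should show it (the path below is the default for the RPM install, /var/lib/elasticsearch; adjust it if path.data is set differently in elasticsearch.yml):

# show free space on the file system that holds the Elasticsearch data
df -h /var/lib/elasticsearch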

I have manually removed the indices from Elasticsearch, as my normal method of deleting them with curl -XDELETE wasn't working.
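
For reference, what I mean by that is the delete-index REST API, something like the calls below, assuming Elasticsearch is listening on the default localhost:9200 (the index names are only examples):

# delete one day's index over the REST API (example index name)
curl -XDELETE 'http://localhost:9200/logstash-2016.05.25'

# or everything matching a pattern (wildcard deletes work in 2.x
# unless action.destructive_requires_name is set to true)
curl -XDELETE 'http://localhost:9200/logstash-*'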

Now when I go to Kibana I get the following message:-

Warning No default index pattern. You must select or create one to continue.

There's a stack trace in the elasticsearch.log file as follows (truncated to keep message within limits):-

[2016-05-26 11:33:06,127][INFO ][rest.suppressed          ] /logstash-*/_mapping/field/* Params: {index=logstash-*, allow_no_indices=false, include_defaults=true, _=1464258787353, fields=*, ignore_unavailable=false}
[logstash-*] IndexNotFoundException[no such index]
        at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:636)
        at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:133)
        at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:77)
        at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsAction.doExecute(TransportGetFieldMappingsAction.java:57)
        at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsAction.doExecute(TransportGetFieldMappingsAction.java:40)
        at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
        at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
        at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)
        at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:52)
        at org.elasticsearch.rest.BaseRestHandler$HeadersAndContextCopyClient.doExecute(BaseRestHandler.java:83)
        at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)
        at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1183)
        at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.getFieldMappings(AbstractClient.java:1383)
        at org.elasticsearch.rest.action.admin.indices.mapping.get.RestGetFieldMappingAction.handleRequest(RestGetFieldMappingAction.java:66)
        at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:54)
        at org.elasticsearch.rest.RestController.executeHandler(RestController.java:207)
        at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:166)
        at org.elasticsearch.http.HttpServer.internalDispatchRequest(HttpServer.java:128)
        at org.elasticsearch.http.HttpServer$Dispatcher.dispatchRequest(HttpServer.java:86)
        at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:348)
        at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:63)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:60)
        at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)

So, how can I recover my ELK server?

Thanks in advance, Steve.

If you have deleted the indices manually, what does _cat/indices show?

Hi,
Thanks for the reply. Is that a call I need to make via curl, or is it on the server somewhere?

Thanks, Steve.

Hi,
Ok, I have the following in _cat/indices:-


yellow open filebeat-2016.05.27   5 1  1516795 0   2.4gb   2.4gb 
yellow open filebeat-2016.05.26   5 1  6644009 0  10.3gb  10.3gb 
yellow open topbeat-2016.05.27    5 1  6313099 0   1.4gb   1.4gb 
yellow open topbeat-2016.05.26    5 1 10750583 0   2.5gb   2.5gb 
yellow open packetbeat-2016.05.26 5 1  9833834 0   3.5gb   3.5gb 
yellow open packetbeat-2016.05.27 5 1  2186287 0 841.7mb 841.7mb 
yellow open .kibana               1 1      132 0    99kb    99kb 
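
(In case anyone else is wondering, it turned out to be a plain HTTP endpoint, so a curl call roughly like the one below is all it takes; localhost:9200 is the default address, and adding ?v prints the column headers: health, status, index, pri, rep, docs.count, docs.deleted, store.size and pri.store.size.)

curl 'http://localhost:9200/_cat/indices?v'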

Thanks, Steve

An update... I don't know how to explain this, but when I restarted my browser the Kibana pages started working again, and the exceptions in the elasticsearch log have stopped too. Was something being persisted by the browser?

Anyway, problem solved, or at least it has gone away.

Thanks, Steve.