Deletion of logstash-* index pattern and re-creation

Hi, first post here and I'm new to the ELK stack, so please bear with me... this may be long.

We have two environments: a three-node cluster (prod) and a two-node cluster (test), both normally built via Ansible. I've been thrown in at the deep end here to pick up management and operations of the ELK stack.

We had some VLAN changes on our test environment, which meant that one of the networks changed, the nodes stopped talking to each other, and another error developed as well; whether it's related I do not know. All the relevant IP addresses were updated in the various ELK configs.

The visible problems were that shards were being displayed on one Kibana web interface but not on the other. Also, both front-ends were reporting 'duplicate mapping', and neither was showing anything in Discover (we are using Winlogbeat and Filebeat).

Rather than just rebuild them both, which doesn't teach me anything about supporting them, I'd rather find out what the issue is. So I took it upon myself to first of all do some research... I have read up on the various REST API curl commands and will supply some of the output below. As I had read it was possible to delete the index pattern (logstash-*) via the Kibana interface, I did this, and there my problems started!

I couldn't re-create it! Kibana wouldn't allow me to set @timestamp as the time field for the new logstash-* pattern: I got a small no-entry symbol in the field and could not select the drop-down. Now I cannot even get into the Management screen on either node, and it displays the following at the top:

No matching indices found: [index_not_found_exception] no such index, with { resource.type="index_or_alias" & resource.id="logstash-*" & index_uuid="na" & index="logstash-*" }

I'd expect this as I deleted the index pattern.

After doing some investigation I have found the following via various logs:

curl -XGET hostname:9200/_cluster/health?pretty

{
  "cluster_name" : "elastic-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 1,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

curl hostname:9200/_cat/indices

green open .kibana hdEy6VvJRZOUqh-udhxIDw 1 1 21 2 106.2kb 53.1kb

(The prod nodes have the logstash-<date> indices in there.)
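As a quick cross-check that there really is no logstash data on the test cluster, the same _cat endpoint can be pointed at that pattern specifically. This is just a sketch, with hostname as a placeholder for one of the test nodes:

# lists only indices matching logstash-*; an empty result (just the header
# row with ?v) means no logstash data has ever been indexed into this cluster
curl 'hostname:9200/_cat/indices/logstash-*?v'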

I tailed logstash-plain.log (tail -f) and found these entries:

[2017-11-30T15:06:54,847][ERROR][logstash.outputs.tcp ] Missing a required setting for the tcp output plugin:

output {
  tcp {
    host => # SETTING MISSING
    ...
  }
}

But I don't think this is related, as the input, filter, and output configs are populated the same as the prod ones.
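For reference, the setting the error is asking for would look like this in a tcp output block. This is only a sketch of the plugin's required options; the host value is a placeholder, not anything from our environment:

output {
  tcp {
    # 'host' is the required setting the error above is complaining about
    # (placeholder value, not our real destination)
    host => "syslog-target.example.com"
    port => 514
    mode => "client"
  }
}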

All the services are started...

As there only appeared to be the default entries in the .kibana index, I tried adding the logstash-* index pattern via the following command:

curl -XPUT hostname:9200/.kibana/index-pattern/logstash-* -d '{"title":"logstash-*","timeFieldName":"@timestamp"}'

Even doing a cat of the indices after this, it still didn't show any logstash ones!
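Worth noting: the PUT above only writes a saved-object document into the .kibana index; it does not create any logstash-* data indices, so _cat/indices staying the same is expected. To see which index-pattern documents Kibana actually has saved (assuming the Kibana 5.x layout, where they live as type index-pattern inside .kibana), something like this should work:

# lists the index-pattern saved objects stored in the .kibana index
curl 'hostname:9200/.kibana/index-pattern/_search?pretty'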

I am getting a JavaScript stack trace in Dev Tools:

Error: No matching indices found: [index_not_found_exception] no such index, with { resource.type="index_or_alias" & resource.id="logstash-*" & index_uuid="na" & index="logstash-*" }
at handleMissingIndexPattern (http://10.94.242.126/bundles/kibana.bundle.js?v=14844:27:14338)
at processQueue (http://10.94.242.126/bundles/commons.bundle.js?v=14844:38:23621)
at http://10.94.242.126/bundles/commons.bundle.js?v=14844:38:23888
at Scope.$eval (http://10.94.242.126/bundles/commons.bundle.js?v=14844:39:4619)
at Scope.$digest (http://10.94.242.126/bundles/commons.bundle.js?v=14844:39:2359)
at Scope.$apply (http://10.94.242.126/bundles/commons.bundle.js?v=14844:39:5037)
at done (http://10.94.242.126/bundles/commons.bundle.js?v=14844:37:25027)
at completeRequest (http://10.94.242.126/bundles/commons.bundle.js?v=14844:37:28702)
at XMLHttpRequest.xhr.onload (http://10.94.242.126/bundles/commons.bundle.js?v=14844:37:29634)

I wonder if anyone has any ideas how I can re-create the deleted index pattern and get the system back up and running without rebuilding it, please? The above is just what I have learned in the last few hours.

Many Thanks in advance...

Paul

Mmmmm,

In addition to the above, I have found the following when running this curl command:

curl -XGET '10.94.242.128:9200/.kibana/index-pattern/logstash-*?pretty'

{
  "_index" : ".kibana",
  "_type" : "index-pattern",
  "_id" : "logstash-*",
  "_version" : 3,
  "found" : true,
  "_source" : {
    "title" : "logstash-*",
    "timeFieldName" : "@timestamp"
  }
}

So it knows about the index pattern but doesn't have any fields configured... The production one has hundreds! Is it being cached somewhere and therefore locking me out of the web interface?
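One way to see what a healthy version of that document should look like would be to pull the same saved object from the prod cluster and compare. A sketch, with prod-hostname standing in for one of the prod Elasticsearch nodes:

# fetch the prod cluster's logstash-* index pattern for comparison
# (prod-hostname is a placeholder)
curl -XGET 'prod-hostname:9200/.kibana/index-pattern/logstash-*?pretty'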

Hi Paul,

You can't create the logstash-* index pattern in Kibana because it looks like you don't have any logstash data in the cluster that this Kibana is connected to. You can see that in your curl hostname:9200/_cat/indices output: it only has the .kibana index that Kibana itself created.

Your logstash config DOES need to have the output host configured to write data to Elasticsearch.

In general, you should NOT write docs to the .kibana index. You shouldn't ever need to unless you're doing something really advanced.
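As an aside, if the hand-written document from the earlier PUT needs cleaning up, one option (a sketch only, assuming you want Kibana to recreate the pattern itself once logstash data actually exists) would be to delete it by id:

# removes the manually written index-pattern document from .kibana
# (hostname is a placeholder for one of the test Elasticsearch nodes)
curl -XDELETE 'hostname:9200/.kibana/index-pattern/logstash-*'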

Regards,
Lee

Hi Lee,

Thanks for the feedback. The Logstash output host does appear to be configured to push the collected data to the Elasticsearch cluster (IP addresses below).

What's the process for testing that data is, firstly, actually being processed by Logstash and, secondly, being sent on to the Elasticsearch cluster that Kibana reads from...?

Many thanks. Here is the output file:

output {
  elasticsearch {
    hosts => ["10.55.209.16:9200","10.94.242.128:9200"]
  }

  tcp {
    mode => "client"
    port => "514"
  }
}

...and just for reference, here is the input file:

input {
  beats {
    port => "3516"
    ssl => true
    ssl_verify_mode => "none"
    ssl_certificate => "/etc/logstash/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/logstash-forwarder.key"
  }
}
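Regarding the first part of the question above (whether Logstash is actually processing any events), one option would be to temporarily add a stdout output alongside the elasticsearch one. This is just a debugging sketch, not part of the current config:

output {
  # temporary debugging output: prints every event Logstash processes,
  # easiest to read when running Logstash in the foreground; remove when done
  stdout { codec => rubydebug }
}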

Mmmm, I'm thinking no clients (Windows or Linux) are sending data to the test Logstash... Other than the Kibana front end, is there any way to identify which clients are configured, i.e. have the certificate and key, to be able to send data to Logstash...?

I think you would need to check your Beats configuration file and log file on the clients to see if they're configured to send data to Logstash and whether that's working.
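To go with that, a quick connectivity check from a Linux client (or any box with openssl) can confirm the Logstash beats input is even reachable, since the input above has ssl enabled on port 3516. Here logstash-host is a placeholder for the test Logstash node:

# attempts a TLS handshake against the beats input; a certificate dump and
# an open session mean the port is reachable and TLS is answering
openssl s_client -connect logstash-host:3516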

The other thing is, if you have X-Pack installed (it's a plugin for Elasticsearch, Kibana, and Logstash), you could use the Monitoring feature to view charts showing events received by Logstash and events emitted from Logstash to Elasticsearch.

Lastly, the _cat/indices request will show you what indices are in the cluster.

Regards,
Lee
