Kibana stops fetching data from indices

Hi guys,
I have a strange issue where Kibana stops fetching data from the Elasticsearch cluster. Logstash and the Elasticsearch cluster work fine; I can see new indices being created and new data coming in every second.
please help
Thanks
T

You probably need to provide some additional information if anyone is going to be able to help. What version are you running? Is there anything in the logs? How have you identified that no data is fetched?

I'm running 5.3.1 across the whole ELK stack. There is nothing in the logs. It used to work just fine and we could see data every 15 minutes, but I don't see it any more, even over longer periods like 1 hour, 1 week or more. As I said, Elasticsearch and Logstash are working fine.
thanks

What is the status of Kibana? What does it look like when you open a dashboard? What time period are you looking at? Does it improve if you show a smaller time period?

What is the size and specification of your cluster? How much data do you have in it?

What is the status of Kibana? Up and running (connected to Elasticsearch).
What does it look like when you open a dashboard? Please check the attachment.
What time period are you looking at? Different periods.
Does it improve if you show a smaller time period? No.
What is the size and specification of your cluster? 4 nodes, with sharding.
How much data do you have in it? 50 GB.
Keep in mind that the ELK stack had been working for around 4 months until yesterday.
thanks!
(screenshot attached: Kibana dashboard)

I actually got this error when trying to reconfigure the same index pattern that used to work:
Oct 24 13:23:04 node-79-ny kibana: {"type":"response","@timestamp":"2017-10-24T20:23:04Z","tags":[],"pid":18203,"method":"get","statusCode":404,"req":{"url":"/elasticsearch/dsp-log-ec173_2017-10-24/mapping/field/*?=1508876584172&ignore_unavailable=false&allow_no_indices=false&include_defaults=true","method":"get","headers":{"host":"kibana.x.x.x.x.com","x-real-ip":"x.x.x.x","x-forwarded-for":"x.x.x.x","x-forwarded-proto":"http","connection":"close","accept":"application/json, text/plain, /","kbn-version":"5.3.1","user-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/60.0.3112.78 Chrome/60.0.3112.78 Safari/537.36","referer":"http://kibana.x.x.x.x.com/app/kibana","accept-encoding":"gzip, deflate","accept-language":"en-US,en;q=0.8"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1","referer":"http://kibana.x.x.x.x.com/app/kibana"},"res":{"statusCode":404,"responseTime":11,"contentLength":9},"message":"GET /elasticsearch/dsp-log-ec173_2017-10-24/mapping/field/*?=1508876584172&ignore_unavailable=false&allow_no_indices=false&include_defaults=true 404 11ms - 9.0B"}

Basically, Kibana is unable to fetch the mappings even though the Elasticsearch cluster does have indices matching the pattern.
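For reference, the indices do show up when I query Elasticsearch directly, e.g. with something along these lines (host and port here are placeholders for our internal address):

curl 'http://localhost:9200/_cat/indices/dsp-log-*?v'
curl 'http://localhost:9200/dsp-log-ec173_2017-10-24/_mapping/field/*?include_defaults=true&pretty'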

Can you provide the full output of the cluster stats API so we can get a better feel for the health and state of the cluster?
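Something like the following against any node should return it (adjust the host and port to your setup):

curl 'http://localhost:9200/_cluster/stats?human&pretty'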

Here you go:
{
  "_nodes" : {
    "total" : 4,
    "successful" : 4,
    "failed" : 0
  },
  "cluster_name" : "es173",
  "timestamp" : 1508883456135,
  "status" : "yellow",
  "indices" : {
    "count" : 6,
    "shards" : {
      "total" : 44,
      "primaries" : 26,
      "replication" : 0.6923076923076923,
      "index" : {
        "shards" : {
          "min" : 1,
          "max" : 10,
          "avg" : 7.333333333333333
        },
        "primaries" : {
          "min" : 1,
          "max" : 5,
          "avg" : 4.333333333333333
        },
        "replication" : {
          "min" : 0.0,
          "max" : 1.0,
          "avg" : 0.6
        }
      }
    },
    "docs" : {
      "count" : 163834008,
      "deleted" : 0
    },
    "store" : {
      "size" : "134.5gb",
      "size_in_bytes" : 144432793904,
      "throttle_time" : "0s",
      "throttle_time_in_millis" : 0
    },
    "fielddata" : {
      "memory_size" : "0b",
      "memory_size_in_bytes" : 0,
      "evictions" : 0
    },
    "query_cache" : {
      "memory_size" : "0b",
      "memory_size_in_bytes" : 0,
      "total_count" : 0,
      "hit_count" : 0,
      "miss_count" : 0,
      "cache_size" : 0,
      "cache_count" : 0,
      "evictions" : 0
    },
    "completion" : {
      "size" : "0b",
      "size_in_bytes" : 0
    },
    "segments" : {
      "count" : 799,
      "memory" : "510.7mb",
      "memory_in_bytes" : 535602271,
      "terms_memory" : "473.8mb",
      "terms_memory_in_bytes" : 496856599,
      "stored_fields_memory" : "27.2mb",
      "stored_fields_memory_in_bytes" : 28604872,
      "term_vectors_memory" : "0b",
      "term_vectors_memory_in_bytes" : 0,
      "norms_memory" : "587.6kb",
      "norms_memory_in_bytes" : 601728,
      "points_memory" : "7.8mb",
      "points_memory_in_bytes" : 8222372,
      "doc_values_memory" : "1.2mb",
      "doc_values_memory_in_bytes" : 1316700,
      "index_writer_memory" : "15.3mb",
      "index_writer_memory_in_bytes" : 16109724,
      "version_map_memory" : "52.7kb",
      "version_map_memory_in_bytes" : 54042,
      "fixed_bit_set" : "0b",
      "fixed_bit_set_memory_in_bytes" : 0,
      "max_unsafe_auto_id_timestamp" : 1508829067807,
      "file_sizes" : { }
    }
  },
  "nodes" : {
    "count" : {
      "total" : 4,
      "data" : 4,
      "coordinating_only" : 0,
      "master" : 4,
      "ingest" : 4
    },
    "versions" : [
      "5.3.2",
      "5.3.1"
    ],
    "os" : {
      "available_processors" : 96,
      "allocated_processors" : 96,
      "names" : [
        {
          "name" : "Linux",
          "count" : 4
        }
      ],
      "mem" : {
        "total" : "124.9gb",
        "total_in_bytes" : 134208143360,
        "free" : "9.2gb",
        "free_in_bytes" : 9893441536,
        "used" : "115.7gb",
        "used_in_bytes" : 124314701824,
        "free_percent" : 7,
        "used_percent" : 93
      }
    },
    "process" : {
      "cpu" : {
        "percent" : 2
      },
      "open_file_descriptors" : {
        "min" : 701,
        "max" : 737,
        "avg" : 719
      }
    },
    "jvm" : {
      "max_uptime" : "15.2h",
      "max_uptime_in_millis" : 54778055,
      "versions" : [
        {
          "version" : "1.8.0_121",
          "vm_name" : "OpenJDK 64-Bit Server VM",
          "vm_version" : "25.121-b13",
          "vm_vendor" : "Oracle Corporation",
          "count" : 4
        }
      ],
      "mem" : {
        "heap_used" : "3.8gb",
        "heap_used_in_bytes" : 4169504552,
        "heap_max" : "7.7gb",
        "heap_max_in_bytes" : 8303673344
      },
      "threads" : 960
    },
    "fs" : {
      "total" : "844.3gb",
      "total_in_bytes" : 906565025792,
      "free" : "414.2gb",
      "free_in_bytes" : 444796755968,
      "available" : "371.2gb",
      "available_in_bytes" : 398651236352
    },
    "plugins" : [ ],
    "network_types" : {
      "transport_types" : {
        "netty4" : 4
      },
      "http_types" : {
        "netty4" : 4
      }
    }
  }
}

You have mixed versions in the cluster, so you should first make sure all the nodes are on exactly the same version, as Elasticsearch cannot move shards from newer versions to older ones. You may also want to increase the heap size a bit, as it seems to be only around 2GB per node.
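If it helps, something like this should show which node is on which version and how much heap each one has (host and port are placeholders):

curl 'http://localhost:9200/_cat/nodes?v&h=name,version,heap.max'

The heap is normally set in jvm.options on each node (under /etc/elasticsearch for a package install), e.g.:

-Xms8g
-Xmx8g

Set -Xms and -Xmx to the same value, keep it at no more than about half the machine's RAM, and restart the nodes one at a time.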

This issue has been fixed.
Thanks

What was the issue?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.