Kibana only displays the logo, no content - Elasticsearch shard errors

I'm new to Elasticsearch.
We use it to capture Cisco CallManager CDR records, but it seems to have stopped working recently.
It has kept about 2 years of data.

Could you please help me get it working again? I have tried restarting the Ubuntu server and restarting all three services (Logstash, Kibana, Elasticsearch).

The log files are saved in the /home/ftp directory, and it seems that every time I restart Elasticsearch the logs are processed and removed from the FTP location.

But the main problem is that Kibana is not showing any content.

Could you please help, and could you also let me know which commands to use?

root@elk:/var/log/elasticsearch# curl -XGET 'localhost:9200/_cluster/health?pretty'
{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 941,
  "active_shards" : 941,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 6892,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 12.013277160730244
}
root@elk:/var/log/elasticsearch# du
2908140 .
root@elk:/var/log/elasticsearch# df
Filesystem     1K-blocks     Used Available Use% Mounted on
udev            12311040        0  12311040   0% /dev
tmpfs            2468276     1076   2467200   1% /run
/dev/sda2      205372392 56437240 138433164  29% /
tmpfs           12341364        0  12341364   0% /dev/shm
tmpfs               5120        0      5120   0% /run/lock
tmpfs           12341364        0  12341364   0% /sys/fs/cgroup
/dev/loop0         88704    88704         0 100% /snap/core/4486
tmpfs            2468272        0   2468272   0% /run/user/0
root@elk:/var/log/elasticsearch# top
top - 23:34:31 up 19:09,  1 user,  load average: 0.10, 0.14, 0.16
Tasks: 160 total,   1 running,  93 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.0 us,  0.1 sy,  0.1 ni, 98.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 24682732 total,  7904084 free, 10989588 used,  5789060 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 13286440 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2515 elastic+  20   0 23.327g 9.914g 1.164g S   8.3 42.1 249:09.87 java
 1640 logstash  39  19 6298324 1.269g  20252 S   1.0  5.4  12:13.76 java
 1650 kibana    20   0 1358452 221440  24196 S   0.7  0.9  12:37.23 node
 4295 root      20   0   42956   4112   3356 R   0.7  0.0   0:00.48 top
  437 root      20   0  191380  10816   9464 S   0.3  0.0   0:48.92 vmtoolsd
 3303 root      20   0       0      0      0 I   0.3  0.0   0:17.45 kworker/3:0
[2019-06-12T23:28:11,920][WARN ][o.e.x.m.e.l.LocalExporter] unexpected error while indexing monitoring document
org.elasticsearch.xpack.monitoring.exporter.ExportException: UnavailableShardsException[[.monitoring-kibana-6-2019.06.12][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-kibana-6-2019.06.12][0]] containing [index {[.monitoring-kibana-6-2019.06.12][doc][GkkETmsBEBnx3tHnSM4v], source[{"cluster_uuid":"x12UfFGzRp6f6WuChd53vg","timestamp":"2019-06-12T23:27:11.915Z","interval_ms":10000,"type":"kibana_stats","source_node":{"uuid":"ICJPPKtKRd6yTTdBS00w6Q","host":"127.0.0.1","transport_address":"127.0.0.1:9300","ip":"127.0.0.1","name":"ICJPPKt","timestamp":"2019-06-12T23:27:11.915Z"},"kibana_stats":{"kibana":{"uuid":"65dd3d14-9f77-4fb8-b9d0-dac0d1f648fe","name":"elk","index":".kibana","host":"localhost","transport_address":"localhost:5601","version":"6.4.0","snapshot":false,"status":"green"},"usage":{"xpack":{"reporting":{"available":true,"enabled":true,"browser_type":"phantom","_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{},"lastDay":{"_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{}},"last7Days":{"_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{}}}}}}}]}]]]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:128) ~[?:?]
        at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_181]
        at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) ~[?:1.8.0_181]
        at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:1.8.0_181]
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_181]
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_181]
        at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) ~[?:1.8.0_181]
        at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) ~[?:1.8.0_181]
        at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_181]
        at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) ~[?:1.8.0_181]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:129) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:111) ~[?:?]
        at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) ~[elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43) ~[elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:85) ~[elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:81) ~[elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.action.bulk.TransportBulkAction$BulkRequestModifier.lambda$wrapActionListenerIfNeeded$0(TransportBulkAction.java:570) ~[elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) [elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:379) [elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:374) [elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:91) [elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:896) [elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retry(TransportReplicationAction.java:868) [elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:927) [elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryIfUnavailable(TransportReplicationAction.java:773) [elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:726) [elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:887) [elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:317) [elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:244) [elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:573) [elasticsearch-6.4.0.jar:6.4.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:624) [elasticsearch-6.4.0.jar:6.4.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
Caused by: org.elasticsearch.action.UnavailableShardsException: [.monitoring-kibana-6-2019.06.12][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest
[[.monitoring-kibana-6-2019.06.12][0]] containing [index {[.monitoring-kibana-6-2019.06.12][doc][GkkETmsBEBnx3tHnSM4v], source[{"cluster_uuid":"x12UfFGzRp6f6WuChd53vg","timestamp":"2019-06-12T23:27:11.915Z","interval_ms":10000,"type":"kibana_stats","source_node":{"uuid":"ICJPPKtKRd6yTTdBS00w6Q","host":"127.0.0.1","transport_address":"127.0.0.1:9300","ip":"127.0.0.1","name":"ICJPPKt","timestamp":"2019-06-12T23:27:11.915Z"},"kibana_stats":{"kibana":{"uuid":"65dd3d14-9f77-4fb8-b9d0-dac0d1f648fe","name":"elk","index":".kibana","host":"localhost","transport_address":"localhost:5601","version":"6.4.0","snapshot":false,"status":"green"},"usage":{"xpack":{"reporting":{"available":true,"enabled":true,"browser_type":"phantom","_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{},"lastDay":{"_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{}},"last7Days":{"_all":0,"csv":{"available":true,"total":0},"printable_pdf":{"available":false,"total":0},"status":{}}}}}}}]}]]
        ... 12 more

That's a HUUUUUGE number of shards for a single node and is likely causing you a lot of problems.
You should look to reduce that down to a few hundred using the _shrink or _reindex APIs.
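For reference, a minimal sketch (assuming the single-node setup on localhost:9200 shown above) of how to check how those shards break down before shrinking or reindexing:

# shards per index (primaries, replicas, doc count, size)
curl -s 'localhost:9200/_cat/indices?v&h=index,pri,rep,docs.count,store.size' | sort

# total number of shards in the cluster, assigned or not
curl -s 'localhost:9200/_cat/shards' | wc -l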

Please also format your code/logs/config using the </> button, or markdown-style backticks. It makes things easier to read, which helps us help you :slight_smile:

Thanks for the quick reply,
will try to follow this first:

https://www.elastic.co/guide/en/elasticsearch/reference/master/indices-shrink-index.html

My indices are created daily and named like lcch-cisco-2019.04.01.

Do you have a quick command reference for reducing the shards?

thanks in advance

How big are the indices?

I think about 60 MB a day for 2 years, roughly 42 GB in total.

root@elk:/home/ftp# curl -XGET "http://localhost:9200/_cat/shards?v" | more
index                           shard prirep state      docs   store ip        node
lcch-cisco-2018.03.21           1     p      UNASSIGNED
lcch-cisco-2018.03.21           1     r      UNASSIGNED
lcch-cisco-2018.03.21           3     p      UNASSIGNED
lcch-cisco-2018.03.21           3     r      UNASSIGNED
lcch-cisco-2018.03.21           2     p      UNASSIGNED
lcch-cisco-2018.03.21           2     r      UNASSIGNED
lcch-cisco-2018.03.21           4     p      UNASSIGNED
lcch-cisco-2018.03.21           4     r      UNASSIGNED
lcch-cisco-2018.03.21           0     p      UNASSIGNED
lcch-cisco-2018.03.21           0     r      UNASSIGNED


A lot of unassigned shards.

root@elk:/home/ftp# curl -XGET 'localhost:9200/_cat/allocation?v&pretty'
shards disk.indices disk.used disk.avail disk.total disk.percent host      ip        node
   935        9.4gb    64.4gb    131.3gb    195.8gb           32 127.0.0.1 127.0.0.1 ICJPPKt
  6910                                                                               UNASSIGNED
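As a side note (a sketch, not from the original posts, assuming the same localhost:9200 endpoint), the cluster allocation explain API reports why a given shard remains unassigned:

curl -XGET 'localhost:9200/_cluster/allocation/explain?pretty' -H 'Content-Type: application/json' -d'
{
  "index": "lcch-cisco-2018.03.21",
  "shard": 0,
  "primary": true
}'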

Thanks for using the format button there, made it heaps easier for me to read :smiley:

OK, you're probably best off using the _reindex API to merge some of these into monthly indices with a single shard each. So something like:

PUT lcch-cisco-2018.03

POST _reindex
{
  "source": {
    "remote": {
      "host": "http://localhost:9200"
    },
    "index": "lcch-cisco-2018.03.*"
  },
  "dest": {
    "index": "lcch-cisco-2018.03"
  }
}

Note that this assumes you have a template for the index pattern lcch-cisco-*; otherwise, when you create the lcch-cisco-2018.03 index, apply your own mapping manually.
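One hedged addition to the sketch above: in Elasticsearch 6.x a bare PUT creates the index with the defaults (5 primary shards, 1 replica), so to actually end up with a single shard per monthly index, the destination can be created with explicit settings, for example:

PUT lcch-cisco-2018.03
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 0
  }
}

Zero replicas is reasonable here because this is a single-node cluster, so replica shards could never be assigned anyway.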

This is awesome.

I'm relying on COPY AS CURL on the help website quite a bit. Is there anywhere I can convert your code to curl?

Ahh yeah, that's from the Dev Tools Console (which I would recommend using if you have Kibana).

If you replace the PUT with curl -XPUT IP:9200/, then add the API/index after it, followed by your JSON.
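For example, the console snippet above translates roughly to the following (a sketch assuming everything runs on localhost:9200; the "remote" block can be dropped because source and destination are the same cluster, and the single-shard settings noted earlier can be passed as the body of the PUT):

curl -XPUT 'localhost:9200/lcch-cisco-2018.03'

curl -XPOST 'localhost:9200/_reindex' -H 'Content-Type: application/json' -d'
{
  "source": {
    "index": "lcch-cisco-2018.03.*"
  },
  "dest": {
    "index": "lcch-cisco-2018.03"
  }
}'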

Really sorry to be a pain: Kibana is so broken that when I click on Dev Tools it shows nothing on the right.

I have tried the commands below, based on the page here and your code, but I'm getting errors.

https://www.elastic.co/guide/en/elasticsearch/reference/6.4/docs-reindex.html

curl -X POST "localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
{
  "source": {
    "index": "lcch-cisco-2018.03.*"
  },
  "dest": {
    "index": "lcch-cisco-2018.03",
  }
}
'



curl -XPUT localhost:9200/lcch-cisco-2018.03 -H 'Content-Type: application/json' -d'

POST _reindex
{
  "source": {
    "remote": {
      "host": "http://localhost:9200"
    },
    "index": "lcch-cisco-2018.03.*"
  },
  "dest": {
    "index": "lcch-cisco-2018.03"
  }
}

I think I got it:

curl -X POST "localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
{
  "source": {
    "index": "lcch-cisco-2019.06.*"
  },
  "dest": {
    "index": "lcch-cisco-2019.06"
  }
}
'

But errors:

sts]"},"status":503},{"index":"lcch-cisco-2019.06","type":"lcch-cdr","id":"f3e841c8-6a49-44dc-9dd4-757ee609a7a6","cause":{"type":"unavailable_shards_exception","reason":"[lcch-cisco-2019.06][1] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[lcch-cisco-2019.06][1]] containing [194] requests]"},"status":503},{"index":"lcch-cisco-2019.06","type":"lcch-cdr","id":"02a32fd8-e474-4b8b-bef6-85c41e81e5e6","cause":{"type":"unavailable_shards_exception","reason":"[lcch-cisco-2019.06][2] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[lcch-cisco-2019.06][2]] containing [201] requests]"},"status":503},{"index":"lcch-cisco-2019.06","type":"lcch-cdr","id":"be1cfbe7-a4aa-47c0-be34-7d42fc04d1ed","cause":{"type":"unavailable_shards_exception","reason":"[lcch-cisco-2019.06][2] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[lcch-cisco-2019.06][2]] containing [201] requests]"},"status":503},{"index":"lcch-cisco-2019.06","type":"lcch-cdr","id":"ef0ebe19-8d8c-42b9-8faa-c39de111cd88","cause":{"type":"unavailable_shards_exception","reason":"[lcch-cisco-2019.06][2] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[lcch-cisco-2019.06][2]] containing [201] requests]"},"status":503}]}root@elk:/home/ftp#

What is the state of that new index? It looks like it's unassigned.

Trying to reindex when some of the indices you are migrating from are in a red state will either fail or provide incorrect results. You may need to close and/or delete indices in order to bring the cluster to at least a yellow state before you can reindex. Another option might be to temporarily reduce the number of shards by setting the number of replicas to 0, which might help the cluster recover faster.
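For instance, a minimal sketch of dropping the replicas to 0 across the daily indices (the index pattern is assumed from the names earlier in the thread):

curl -XPUT 'localhost:9200/lcch-cisco-*/_settings' -H 'Content-Type: application/json' -d'
{
  "index": {
    "number_of_replicas": 0
  }
}'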

Once you have got your cluster sorted out I would also recommend upgrading. Newer versions have limits on the number of shards per node which would provide you with a warning before you got into a state like this.

Thank you so much to both Christian and warkolm for the quick replies and good directions.

I managed to get the system back up and running by closing the 2017 and 2018 logs first and then reindexing to monthly indices.

Code below if anyone is having similar issues.

Just wanted to ask: for the upgrade from 6.4 to 7.1, is it simply ##apt upgrade?

curl -X POST "localhost:9200/lcch-cisco-2019.02.*/_close"
curl -X POST "localhost:9200/lcch-cisco-2017*/_close"
curl -X POST "localhost:9200/my_index/_open"


curl -XGET 'localhost:9200/_cluster/health?pretty'




curl -X POST "localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
{
  "source": {
    "index": "lcch-cisco-2019.03.*"
  },
  "dest": {
    "index": "lcch-cisco-2019.03"
  }
}
'

##curl -X DELETE localhost:9200/lcch-cisco-2017.04.*


Make sure you are deleting the old daily indices too :slight_smile:
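A cautious way to do that (a sketch; index names assumed from this thread) is to check that the new monthly index holds the expected documents before removing the dailies:

# compare document counts for the monthly index and its daily sources
curl -s 'localhost:9200/_cat/indices/lcch-cisco-2018.03*?v&h=index,docs.count,store.size'

# then delete the daily indices for that month (the trailing dot keeps the monthly index out of the pattern)
curl -XDELETE 'localhost:9200/lcch-cisco-2018.03.*'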

Nope, have a read of Upgrade Elasticsearch | Elasticsearch Guide [7.1] | Elastic

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.