Primary shard is not active, timeout

Getting this when I run Elasticsearch in my Minikube. Does anyone know a workaround for this? Thanks.

[2018-08-14T21:00:34,338][WARN ][o.e.x.m.MonitoringService] [es-master-5b4dd45bf8-vkjtj] monitoring execution failed
org.elasticsearch.xpack.monitoring.exporter.ExportException: Exception when closing export bulk
    at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$1$1.<init>(ExportBulk.java:107) ~[?:?]
    at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$1.onFailure(ExportBulk.java:105) ~[?:?]
    at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound$1.onResponse(ExportBulk.java:218) ~[?:?]
    at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound$1.onResponse(ExportBulk.java:212) ~[?:?]
    at org.elasticsearch.xpack.core.common.IteratingActionListener.onResponse(IteratingActionListener.java:108) ~[?:?]
    at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.lambda$doFlush$0(ExportBulk.java:176) ~[?:?]
    at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:68) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:147) ~[?:?]
    at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:123) ~[?:?]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:85) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:81) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.action.bulk.TransportBulkAction$BulkRequestModifier.lambda$wrapActionListenerIfNeeded$0(TransportBulkAction.java:571) ~[elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) [elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:380) [elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:375) [elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:91) [elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:909) [elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retry(TransportReplicationAction.java:879) [elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:944) [elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryIfUnavailable(TransportReplicationAction.java:781) [elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:734) [elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:898) [elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:317) [elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:244) [elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:581) [elasticsearch-6.2.4.jar:6.2.4]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:573) [elasticsearch-6.2.4.jar:6.2.4]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Caused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulks
    at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.lambda$doFlush$0(ExportBulk.java:168) ~[?:?]
    ... 26 more
Caused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: bulk [default_local] reports failures when exporting documents
    at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:136) ~[?:?]
    ... 24 more

I ran into this issue myself when using Minikube. I had to modify the vm.max_map_count parameter from inside the Minikube virtual machine.

$ minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ sudo sysctl vm.max_map_count
vm.max_map_count = 65530

$ sudo sysctl -w vm.max_map_count=262144
vm.max_map_count = 262144

$ sudo sysctl vm.max_map_count
vm.max_map_count = 262144

$ exit
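One thing to note: that sysctl change does not survive a restart of the Minikube VM, so it needs to be reapplied after every minikube start (or baked into the node, e.g. via a privileged init container). A non-interactive one-liner should also do it (a sketch, assuming a Minikube version that accepts a command after ssh):

# apply the sysctl inside the VM without opening an interactive shell
$ minikube ssh "sudo sysctl -w vm.max_map_count=262144"
vm.max_map_count = 262144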

Nevertheless, the above is a lot to take in, in that format.

When you start up the Elasticsearch service, do you see any failures in the k8s dashboard?
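If the dashboard is awkward to get to, the same failures show up via kubectl (a sketch; the app=elasticsearch label selector is an assumption, adjust it to match your deployment):

# list the Elasticsearch pods and their current status
$ kubectl get pods -l app=elasticsearch
# inspect one pod's events and container state (pod name taken from the log above)
$ kubectl describe pod es-master-5b4dd45bf8-vkjtj
# recent cluster events, oldest first
$ kubectl get events --sort-by=.metadata.creationTimestamp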

The above are X-Pack monitoring bulk ingestion warnings rather than hard failures. Are you later able to curl your service endpoint?
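For example, something along these lines, port-forwarding first since the service only resolves inside the cluster (a sketch; the service name elasticsearch and port 9200 are assumptions):

# forward the service port to localhost in the background
$ kubectl port-forward svc/elasticsearch 9200:9200 &
# overall cluster status: green/yellow/red plus shard counts
$ curl -s 'http://localhost:9200/_cluster/health?pretty'
# any shards stuck unassigned?
$ curl -s 'http://localhost:9200/_cat/shards?v' | grep UNASSIGNED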

Thanks for the information.

I already have vm.max_map_count set to 262144. I don't see anything under my Kibana Monitoring tab; that's probably due to this. Everything else is working fine.

Okay, so troubleshooting efforts need to go toward your X-Pack monitoring setup.
Can you share more Elasticsearch logging?
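In the meantime, the allocation explain API usually tells you why a primary, here most likely one of the .monitoring-* indices, is not being assigned (a sketch, reusing the port-forward from above):

# ask the cluster why the first unassigned shard is unassigned
$ curl -s 'http://localhost:9200/_cluster/allocation/explain?pretty'
# health of the monitoring indices themselves
$ curl -s 'http://localhost:9200/_cat/indices/.monitoring-*?v'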

All the data, client, and master nodes show the same logs. Where should I look for more logs?
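For anyone hitting this later: on Kubernetes the Elasticsearch nodes log to stdout, so kubectl logs on each pod is the place to look, and the monitoring logger can be turned up dynamically through the cluster settings API (a sketch; the pod name is the one from the log above, and the earlier port-forward is assumed):

# tail the node's stdout log
$ kubectl logs -f es-master-5b4dd45bf8-vkjtj
# raise the monitoring exporter's log level without a restart
$ curl -s -XPUT 'http://localhost:9200/_cluster/settings' \
    -H 'Content-Type: application/json' \
    -d '{"transient": {"logger.org.elasticsearch.xpack.monitoring": "DEBUG"}}'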
