Elasticsearch keeps restarting every few seconds -- showing "Kibana server is not ready yet"

I tried to restart Kibana and Elasticsearch to renew the server certificate, and since then I'm facing the following errors. I'd appreciate it if anyone could provide a solution ASAP.

* Kibana logs:
Caused by:
kibana           |              export_exception: bulk [default_local] reports failures when exporting documents
kibana           |     at KibanaTransport.request (/usr/share/kibana/node_modules/@elastic/transport/lib/Transport.js:479:27)
kibana           |     at runMicrotasks (<anonymous>)
kibana           |     at processTicksAndRejections (node:internal/process/task_queues:96:5)
kibana           |     at KibanaTransport.request (/usr/share/kibana/node_modules/@kbn/core-elasticsearch-client-server-internal/src/create_transport.js:51:16)
kibana           |     at Monitoring.bulk (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/api/api/monitoring.js:53:16)
kibana           |     at sendBulkPayload (/usr/share/kibana/node_modules/@kbn/monitoring-plugin/server/kibana_monitoring/lib/send_bulk_payload.js:19:10)
kibana           |     at BulkUploader._onPayload (/usr/share/kibana/node_modules/@kbn/monitoring-plugin/server/kibana_monitoring/bulk_uploader.js:161:12)
kibana           |     at BulkUploader._fetchAndUpload (/usr/share/kibana/node_modules/@kbn/monitoring-plugin/server/kibana_monitoring/bulk_uploader.js:150:9)
kibana           | [2023-10-06T10:38:47.274+00:00][WARN ][plugins.monitoring.monitoring.kibana-monitoring] Unable to bulk upload the stats payload to the local cluster
kibana           | [2023-10-06T10:38:47.447+00:00][ERROR][plugins.taskManager] Failed to poll for work: TimeoutError: Request timed out
kibana           | [2023-10-06T10:38:57.275+00:00][WARN ][plugins.monitoring.monitoring.kibana-monitoring] ResponseError: export_exception
* Elasticsearch logs:
elasticsearch    | {"@timestamp":"2023-10-08T22:23:25.230Z", "log.level": "WARN", "message":"unexpected error while indexing monitoring document", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[umbu01lx10114b][generic][T#18]","log.logger":"org.elasticsearch.xpack.monitoring.exporter.local.LocalExporter","elasticsearch.cluster.uuid":"-svwCntkRgGJy-bp0aBZjA","elasticsearch.node.id":"wqsRYRJjQHKFSrCRjzD9Ug","elasticsearch.node.name":"umbu01lx10114b","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.xpack.monitoring.exporter.ExportException","error.message":"org.elasticsearch.action.UnavailableShardsException: [.monitoring-es-7-2023.10.08][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-es-7-2023.10.08][0]] containing [119] requests]","error.stack_trace":"org.elasticsearch.xpack.monitoring.exporter.ExportException: org.elasticsearch.action.UnavailableShardsException: [.monitoring-es-7-2023.10.08][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-es-7-2023.10.08][0]] containing [119] requests]\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:128)\n\tat java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)\n\tat java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)\n\tat java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:1006)\n\tat java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)\n\tat java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)\n\tat java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)\n\tat java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)\n\tat 
java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)\n\tat java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:129)\n\tat org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:110)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.ActionListener$2.onResponse(ActionListener.java:169)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:32)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.client.internal.node.NodeClient$SafelyWrappedActionListener.onResponse(NodeClient.java:160)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.tasks.TaskManager$1.onResponse(TaskManager.java:205)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.tasks.TaskManager$1.onResponse(TaskManager.java:199)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:32)\n\tat org.elasticsearch.security@8.9.1/org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$applyInternal$2(SecurityActionFilter.java:165)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.ActionListenerImplementations$DelegatingFailureActionListener.onResponse(ActionListenerImplementations.java:152)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.ActionListenerImplementations$RunBeforeActionListener.onResponse(ActionListenerImplementations.java:235)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:628)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:623)\n\tat 
org.elasticsearch.server@8.9.1/org.elasticsearch.client.internal.node.NodeClient$SafelyWrappedActionListener.onFailure(NodeClient.java:170)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.tasks.TaskManager$1.onFailure(TaskManager.java:217)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.ActionListenerImplementations.safeAcceptException(ActionListenerImplementations.java:60)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.ActionListenerImplementations.safeOnFailure(ActionListenerImplementations.java:72)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.DelegatingActionListener.onFailure(DelegatingActionListener.java:27)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.support.ContextPreservingActionListener.onFailure(ContextPreservingActionListener.java:39)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.ActionListenerImplementations.safeAcceptException(ActionListenerImplementations.java:60)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.ActionListenerImplementations.safeOnFailure(ActionListenerImplementations.java:72)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.DelegatingActionListener.onFailure(DelegatingActionListener.java:27)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:1016)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retry(TransportReplicationAction.java:988)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:1048)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:848)\n\tat 
org.elasticsearch.server@8.9.1/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:1007)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:355)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:293)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:642)\n\tat org.elasticsearch.server@8.9.1/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:916)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1623)\nCaused by: org.elasticsearch.action.UnavailableShardsException: [.monitoring-es-7-2023.10.08][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-es-7-2023.10.08][0]] containing [119] requests]\n\t... 11 more\n"}
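For reference, the `UnavailableShardsException` above means the primary shard of `.monitoring-es-7-2023.10.08` never became active, so every monitoring bulk request times out after one minute. A quick way to see which shards are unassigned is the `_cat/shards` API; the host and credentials below are assumptions for a default docker setup, so adapt them to yours:

```shell
# Against a live cluster (hypothetical host/credentials -- adjust to yours):
#   curl -sk -u elastic:"$ES_PASSWORD" \
#     'https://localhost:9200/_cat/shards?h=index,shard,prirep,state'

# Offline demo of filtering that output: print indices with UNASSIGNED shards.
sample='.monitoring-es-7-2023.10.08 0 p UNASSIGNED
.kibana_8.9.1_001            0 p STARTED'
echo "$sample" | awk '$4 == "UNASSIGNED" {print $1}'
```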

Based on that log line from Elasticsearch, there is something wrong with the monitoring index in your Elasticsearch instance. What does the Elasticsearch health API say? Until that is fixed, Kibana will have trouble connecting to Elasticsearch.
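The health check is a single request against the `_cluster/health` API; a sketch (host and credentials are assumptions for a default docker deployment, adjust to yours):

```shell
# Hypothetical host/credentials -- adjust to your deployment:
#   curl -sk -u elastic:"$ES_PASSWORD" 'https://localhost:9200/_cluster/health?pretty'

# Pull the "status" field out of a health response without needing jq:
health_status() {
  sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([a-z]*\)".*/\1/p'
}

# Offline demo with a sample response; a red status means at least one
# primary shard (here, the monitoring index) is unassigned:
echo '{"cluster_name":"docker-cluster","status":"red","unassigned_shards":1}' | health_status
```

A `yellow` status (only replicas unassigned) would still let Kibana work; `red` matches the primary-shard-not-active errors in the logs above.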

Hi Marius,

Thanks for responding.

Right now httpd is also restarting continuously, so we are unable to check the health of Kibana.

Can you please suggest another way to solve this problem? Please find the logs below:

Elasticsearch logs:
ERROR: Elasticsearch exited unexpectedly
java.nio.file.AccessDeniedException: /usr/share/elasticsearch/config/certs/certs_bkp
        at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
        at java.base/sun.nio.fs.UnixException.asIOException(UnixException.java:115)
        at java.base/sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:477)
        at java.base/java.nio.file.Files.newDirectoryStream(Files.java:481)
        at java.base/java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:301)
        at java.base/java.nio.file.FileTreeWalker.next(FileTreeWalker.java:374)
        at java.base/java.nio.file.Files.walkFileTree(Files.java:2815)
        at org.elasticsearch.server@8.9.1/org.elasticsearch.common.logging.LogConfigurator.configure(LogConfigurator.java:237)
        at org.elasticsearch.server@8.9.1/org.elasticsearch.common.logging.LogConfigurator.configure(LogConfigurator.java:129)
        at org.elasticsearch.server@8.9.1/org.elasticsearch.bootstrap.Elasticsearch.initPhase1(Elasticsearch.java:132)
        at org.elasticsearch.server@8.9.1/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)
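The `AccessDeniedException` above is thrown before Elasticsearch even finishes configuring logging: the process cannot read `/usr/share/elasticsearch/config/certs/certs_bkp`, which was presumably created as root during the certificate renewal. In the official Docker image, Elasticsearch runs as UID 1000, so re-owning or re-moding that directory is the usual fix; the exact paths and `chown` target below are assumptions, and the runnable part uses a scratch directory to demonstrate the diagnosis:

```shell
# On the real host, something like (assumed paths, UID 1000 per official image):
#   sudo chown -R 1000:0 /usr/share/elasticsearch/config/certs
#   sudo chmod -R u+rX   /usr/share/elasticsearch/config/certs

# Scratch-dir demo: simulate the unreadable backup directory, then fix it.
dir=$(mktemp -d)/certs_bkp
mkdir -p "$dir"
chmod 000 "$dir"                                          # simulate the broken state
stat -c '%a' "$dir" 2>/dev/null || stat -f '%Lp' "$dir"   # mode is now 0
chmod 755 "$dir"                                          # readable again
stat -c '%a' "$dir" 2>/dev/null || stat -f '%Lp' "$dir"   # mode is now 755
```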
HTTPD logs:
[Mon Feb 26 04:56:56.986783 2024] [ssl:info] [pid 1:tid 140093887937352] AH01914: Configuring server elk:443 for SSL protocol
[Mon Feb 26 04:56:56.990352 2024] [ssl:emerg] [pid 1:tid 140093887937352] AH02565: Certificate and private key elk:443:0 from /usr/local/apache2/cert.pem and /usr/local/apache2/cert.key do not match
AH00016: Configuration Failed
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.19.0.2. Set the 'ServerName' directive globally to suppress this message
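The httpd error AH02565 is self-describing: the certificate in `cert.pem` was not generated from the key in `cert.key`, which fits a partially completed renewal. A certificate and private key match exactly when they carry the same public key, and openssl can verify that. The sketch below generates a throwaway matching pair so it is self-contained; on the real host you would point the two `openssl` commands at `/usr/local/apache2/cert.pem` and `cert.key` instead:

```shell
# Generate a throwaway key + self-signed cert so the demo is self-contained.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=elk" \
  -keyout "$tmp/cert.key" -out "$tmp/cert.pem" 2>/dev/null

# A certificate and private key match iff their public keys are identical.
cert_pub=$(openssl x509 -in "$tmp/cert.pem" -noout -pubkey)
key_pub=$(openssl pkey -in "$tmp/cert.key" -pubout 2>/dev/null)
if [ "$cert_pub" = "$key_pub" ]; then echo MATCH; else echo MISMATCH; fi
```

If the real files print MISMATCH, re-copy the certificate and key that were generated together, then restart httpd; that should also clear the AH00016 configuration failure.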
Kibana logs:
[2023-10-06T21:30:39.246+00:00][WARN ][plugins.monitoring.monitoring.kibana-monitoring] Unable to bulk upload the stats payload to the local cluster
[2023-10-06T21:30:45.419+00:00][ERROR][plugins.taskManager] Failed to poll for work: TimeoutError: Request timed out
[2023-10-06T21:30:49.240+00:00][WARN ][plugins.monitoring.monitoring.kibana-monitoring] ResponseError: export_exception
        Caused by:
                export_exception: bulk [default_local] reports failures when exporting documents
    at KibanaTransport.request (/usr/share/kibana/node_modules/@elastic/transport/lib/Transport.js:479:27)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at KibanaTransport.request (/usr/share/kibana/node_modules/@kbn/core-elasticsearch-client-server-internal/src/create_transport.js:51:16)
    at Monitoring.bulk (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/api/api/monitoring.js:53:16)
    at sendBulkPayload (/usr/share/kibana/node_modules/@kbn/monitoring-plugin/server/kibana_monitoring/lib/send_bulk_payload.js:19:10)
    at BulkUploader._onPayload (/usr/share/kibana/node_modules/@kbn/monitoring-plugin/server/kibana_monitoring/bulk_uploader.js:161:12)
    at BulkUploader._fetchAndUpload (/usr/share/kibana/node_modules/@kbn/monitoring-plugin/server/kibana_monitoring/bulk_uploader.js:150:9)

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.