Kibana (code=exited, status=78)

Hello,

I have a problem: Kibana is not starting, for reasons that escape me. I'm not sure how best to describe it, so here are the logs and config files for Elasticsearch and Kibana, as well as the error.

The error:

kali ~ » sudo systemctl status kibana.service
× kibana.service - Kibana
     Loaded: loaded (/lib/systemd/system/kibana.service; enabled; preset: disabled)
     Active: failed (Result: exit-code) since Fri 2022-11-25 11:31:50 GMT; 1min 6s ago
   Duration: 12.191s
       Docs: https://www.elastic.co
    Process: 2826 ExecStart=/usr/share/kibana/bin/kibana (code=exited, status=78)
   Main PID: 2826 (code=exited, status=78)
        CPU: 12.818s

Nov 25 11:31:47 kali systemd[1]: kibana.service: Consumed 12.818s CPU time.
Nov 25 11:31:50 kali systemd[1]: kibana.service: Scheduled restart job, restart counter is at 3.
Nov 25 11:31:50 kali systemd[1]: Stopped Kibana.
Nov 25 11:31:50 kali systemd[1]: kibana.service: Consumed 12.818s CPU time.
Nov 25 11:31:50 kali systemd[1]: kibana.service: Start request repeated too quickly.
Nov 25 11:31:50 kali systemd[1]: kibana.service: Failed with result 'exit-code'.
Nov 25 11:31:50 kali systemd[1]: Failed to start Kibana.
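In case it helps, exit status 78 conventionally maps to EX_CONFIG (a configuration error) in sysexits.h, and the status summary above truncates the actual message. These are the commands I would use to pull the full output (paths are the Debian package defaults, so adjust if yours differ):

```shell
# Show the most recent Kibana journal entries, including the lines
# that `systemctl status` truncates:
sudo journalctl -u kibana.service -n 100 --no-pager

# Alternatively, run Kibana in the foreground as the kibana user so
# the startup error prints directly to the terminal:
sudo -u kibana /usr/share/kibana/bin/kibana
```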

The Elasticsearch config file:

/etc/elasticsearch/elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 192.168.0.10
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# --------------------------------- Readiness ----------------------------------
#
# Enable an unauthenticated TCP readiness endpoint on localhost
#
#readiness.port: 9399
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features on 03-11-2022 15:25:10
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: false

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["kali"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

The end of the Elasticsearch log file:

/var/log/elasticsearch/elasticsearch.log
[2022-11-25T10:39:30,815][WARN ][stderr                   ] [node-1] Nov 25, 2022 10:39:30 AM org.apache.lucene.store.MMapDirectory lookupProvider
[2022-11-25T10:39:30,815][WARN ][stderr                   ] [node-1] WARNING: You are running with Java 19. To make full use of MMapDirectory, please pass '--enable-preview' to the Java command line.
[2022-11-25T10:39:30,828][INFO ][o.e.e.NodeEnvironment    ] [node-1] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [19.7gb], net total_space [55.2gb], types [ext4]
[2022-11-25T10:39:30,829][INFO ][o.e.e.NodeEnvironment    ] [node-1] heap size [4gb], compressed ordinary object pointers [true]
[2022-11-25T10:39:30,921][INFO ][o.e.n.Node               ] [node-1] node name [node-1], node ID [13VDdadWQDySXafHuJDx1w], cluster name [elasticsearch], roles [data_cold, data, remote_cluster_client, master, data_warm, data_content, transform, data_hot, ml, data_frozen, ingest]
[2022-11-25T10:39:34,966][INFO ][o.e.x.s.Security         ] [node-1] Security is disabled
[2022-11-25T10:39:35,163][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node-1] [controller/2044] [Main.cc@123] controller (64 bit): Version 8.5.0 (Build 3922fab346e761) Copyright (c) 2022 Elasticsearch BV
[2022-11-25T10:39:36,195][INFO ][o.e.t.n.NettyAllocator   ] [node-1] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2022-11-25T10:39:36,234][INFO ][o.e.i.r.RecoverySettings ] [node-1] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2022-11-25T10:39:36,333][INFO ][o.e.d.DiscoveryModule    ] [node-1] using discovery type [multi-node] and seed hosts providers [settings]
[2022-11-25T10:39:38,316][INFO ][o.e.n.Node               ] [node-1] initialized
[2022-11-25T10:39:38,317][INFO ][o.e.n.Node               ] [node-1] starting ...
[2022-11-25T10:39:38,373][INFO ][o.e.x.s.c.f.PersistentCache] [node-1] persistent cache index loaded
[2022-11-25T10:39:38,375][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [node-1] deprecation component started
[2022-11-25T10:39:38,631][INFO ][o.e.t.TransportService   ] [node-1] publish_address {192.168.0.10:9300}, bound_addresses {192.168.0.10:9300}
[2022-11-25T10:39:39,701][INFO ][o.e.b.BootstrapChecks    ] [node-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2022-11-25T10:39:39,705][WARN ][o.e.c.c.ClusterBootstrapService] [node-1] this node is locked into cluster UUID [GliKLXCHTqODGCSkjADChQ] but [cluster.initial_master_nodes] is set to [kali]; remove this setting to avoid possible data loss caused by subsequent cluster bootstrap attempts
[2022-11-25T10:39:39,895][INFO ][o.e.c.s.MasterService    ] [node-1] elected-as-master ([1] nodes joined)[_FINISH_ELECTION_, {node-1}{13VDdadWQDySXafHuJDx1w}{R2-GsyeMQDOxxW2_uQQkVA}{node-1}{192.168.0.10}{192.168.0.10:9300}{cdfhilmrstw} completing election], term: 10, version: 270, delta: master node changed {previous [], current [{node-1}{13VDdadWQDySXafHuJDx1w}{R2-GsyeMQDOxxW2_uQQkVA}{node-1}{192.168.0.10}{192.168.0.10:9300}{cdfhilmrstw}]}
[2022-11-25T10:39:39,981][INFO ][o.e.c.s.ClusterApplierService] [node-1] master node changed {previous [], current [{node-1}{13VDdadWQDySXafHuJDx1w}{R2-GsyeMQDOxxW2_uQQkVA}{node-1}{192.168.0.10}{192.168.0.10:9300}{cdfhilmrstw}]}, term: 10, version: 270, reason: Publication{term=10, version=270}
[2022-11-25T10:39:40,036][INFO ][o.e.r.s.FileSettingsService] [node-1] starting file settings watcher ...
[2022-11-25T10:39:40,059][INFO ][o.e.r.s.FileSettingsService] [node-1] file settings service up and running [tid=56]
[2022-11-25T10:39:40,063][INFO ][o.e.h.AbstractHttpServerTransport] [node-1] publish_address {192.168.0.10:9200}, bound_addresses {0.0.0.0:9200}
[2022-11-25T10:39:40,064][INFO ][o.e.n.Node               ] [node-1] started {node-1}{13VDdadWQDySXafHuJDx1w}{R2-GsyeMQDOxxW2_uQQkVA}{node-1}{192.168.0.10}{192.168.0.10:9300}{cdfhilmrstw}{ml.machine_memory=15616499712, xpack.installed=true, ml.allocated_processors_double=2.0, ml.max_jvm_size=4294967296, ml.allocated_processors=2}
[2022-11-25T10:39:40,161][WARN ][o.e.x.s.i.SetSecurityUserProcessor] [node-1] Creating processor [set_security_user] (tag [null]) on field [_security] but authentication is not currently enabled on this cluster  - this processor is likely to fail at runtime if it is used
[2022-11-25T10:39:40,730][INFO ][o.e.l.LicenseService     ] [node-1] license [b783f64c-9009-4a4c-8bf8-a152cb5e015b] mode [basic] - valid
[2022-11-25T10:39:40,736][INFO ][o.e.g.GatewayService     ] [node-1] recovered [13] indices into cluster_state
[2022-11-25T10:39:40,786][INFO ][o.e.h.n.s.HealthNodeTaskExecutor] [node-1] Node [{node-1}{13VDdadWQDySXafHuJDx1w}] is selected as the current health node.
[2022-11-25T10:39:40,788][ERROR][o.e.i.g.GeoIpDownloader  ] [node-1] exception during geoip databases update
org.elasticsearch.ElasticsearchException: not all primary shards of [.geoip_databases] index are active
        at org.elasticsearch.ingest.geoip.GeoIpDownloader.updateDatabases(GeoIpDownloader.java:134) ~[?:?]
        at org.elasticsearch.ingest.geoip.GeoIpDownloader.runDownloader(GeoIpDownloader.java:274) ~[?:?]
        at org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:102) ~[?:?]
        at org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:48) ~[?:?]
        at org.elasticsearch.persistent.NodePersistentTasksExecutor$1.doRun(NodePersistentTasksExecutor.java:42) ~[elasticsearch-8.5.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:892) ~[elasticsearch-8.5.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.5.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1589) ~[?:?]
[2022-11-25T10:39:42,171][INFO ][o.e.c.r.a.AllocationService] [node-1] current.health="YELLOW" message="Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana-event-log-8.5.0-000001][0]]])." previous.health="RED" reason="shards started [[.kibana-event-log-8.5.0-000001][0]]"
[2022-11-25T10:39:42,710][INFO ][o.e.i.g.DatabaseNodeService] [node-1] successfully loaded geoip database file [GeoLite2-ASN.mmdb]
[2022-11-25T10:39:42,736][INFO ][o.e.i.g.DatabaseNodeService] [node-1] successfully loaded geoip database file [GeoLite2-Country.mmdb]
[2022-11-25T10:39:44,106][INFO ][o.e.i.g.DatabaseNodeService] [node-1] successfully loaded geoip database file [GeoLite2-City.mmdb]

The Kibana config file:

/etc/kibana/kibana.yml
### >>>>>>> BACKUP START: Kibana interactive setup (2022-11-03T16:12:36.192Z)

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "192.168.0.10"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://192.168.0.10:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "elastic"
#elasticsearch.password: "elastic"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
#elasticsearch.serviceAccountToken: "AAEAAWVsYXN0aWMva2liYW5hL215LXRva2VuOjRJZkVGOVdnUVhLM0l5TXQ2a3ZrY2c"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
#logging:
#  appenders:
#    file:
#      type: file
#      fileName: /var/log/kibana/kibana.log
#      layout:
#        type: json
#  root:
#    appenders:
#      - default
#      - file
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000

### >>>>>>> BACKUP END: Kibana interactive setup (2022-11-03T16:12:36.192Z)

# This section was automatically generated during setup.
server.host: 192.168.0.10
elasticsearch.hosts: ['https://192.168.0.10:9200']
elasticsearch.username: "elastic"
elasticsearch.password: "elastic"
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE2Njc0OTE5NTU1MTk6dE8yb25CZUtUTUdVVDJXNWs1Y0kzQQ
logging.appenders.file.type: file
logging.appenders.file.fileName: /var/log/kibana/kibana.log #/usr/share/kibana/kibanalog.txt
logging.appenders.file.layout.type: json
logging.root.appenders: [default, file]
pid.file: /run/kibana/kibana.pid
elasticsearch.ssl.certificateAuthorities: [/var/lib/kibana/ca_1667491956188.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://192.168.0.10:9200'], ca_trusted_fingerprint: 3bdd16889468fc9a2557aaf7df998d42d22a74e37c7f4020a78f322acf2fe9ed}]
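Two things in this generated section look suspicious to me, though I'm not sure they are the cause. First, `elasticsearch.username`/`elasticsearch.password` and `elasticsearch.serviceAccountToken` are all set at once, and I believe Kibana treats these as mutually exclusive and refuses to start with a configuration error. Second, `elasticsearch.hosts` points at `https://` while `elasticsearch.yml` above has `xpack.security.enabled: false`, which would explain the SSL "wrong version number" error in the Kibana log below. If I understand it correctly, the section would need to look something like this (a sketch only, using values from my own setup):

```yaml
server.host: 192.168.0.10
# Use either the service-account token *or* username/password, not both:
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE2Njc0OTE5NTU1MTk6dE8yb25CZUtUTUdVVDJXNWs1Y0kzQQ
# With xpack.security.enabled: false on the Elasticsearch side, the
# host URL would presumably have to be plain http:
elasticsearch.hosts: ['http://192.168.0.10:9200']
```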

And the last few lines of the Kibana log:

/var/log/kibana/kibana.log
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:25.438+00:00","message":"Setting up [125] plugins: [translations,monitoringCollection,licensing,globalSearch,globalSearchProviders,features,mapsEms,licenseApiGuard,usageCollection,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,share,screenshotMode,banners,newsfeed,guidedOnboarding,fieldFormats,expressions,dataViews,embeddable,uiActionsEnhanced,charts,esUiShared,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,advancedSettings,spaces,security,lists,files,encryptedSavedObjects,cloud,snapshotRestore,screenshotting,telemetry,licenseManagement,eventLog,actions,stackConnectors,console,bfetch,data,watcher,reporting,fileUpload,ingestPipelines,alerting,aiops,unifiedSearch,unifiedFieldList,savedSearch,savedObjects,graph,savedObjectsTagging,savedObjectsManagement,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,controls,eventAnnotation,dataViewFieldEditor,triggersActionsUi,transform,stackAlerts,ruleRegistry,discover,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,cloudSecurityPosture,discoverEnhanced,visualizations,canvas,visTypeXy,visTypeVislib,visTypeVega,visTypeTimeseries,rollup,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypeMetric,visTypeHeatmap,visTypeMarkdown,dashboard,dashboardEnhanced,expressionXY,expressionTagcloud,expressionPartitionVis,visTypePie,expressionMetricVis,expressionLegacyMetricVis,expressionHeatmap,expressionGauge,lens,maps,dataVisualizer,cases,timelines,sessionView,kubernetesSecurity,observability,osquery,ml,synthetics,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,visTypeGauge,dataViewManagement]","log":{"level":"INFO","logger":"plugins-system.standard"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:25.464+00:00","message":"TaskManager is identified by the Kibana UUID: a8192398-2306-4f13-b513-a7a3ad822b17","log":{"level":"INFO","logger":"plugins.taskManager"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:25.563+00:00","message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.","log":{"level":"WARN","logger":"plugins.security.config"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:25.564+00:00","message":"Session cookies will be transmitted over insecure connections. This is not recommended.","log":{"level":"WARN","logger":"plugins.security.config"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:25.599+00:00","message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.","log":{"level":"WARN","logger":"plugins.security.config"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:25.600+00:00","message":"Session cookies will be transmitted over insecure connections. This is not recommended.","log":{"level":"WARN","logger":"plugins.security.config"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:25.618+00:00","message":"Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.","log":{"level":"WARN","logger":"plugins.encryptedSavedObjects"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:25.645+00:00","message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.","log":{"level":"WARN","logger":"plugins.actions"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:25.758+00:00","message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.","log":{"level":"WARN","logger":"plugins.reporting.config"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:25.767+00:00","message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.","log":{"level":"WARN","logger":"plugins.alerting"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:25.830+00:00","message":"Installing common resources shared between all indices","log":{"level":"INFO","logger":"plugins.ruleRegistry"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:25.890+00:00","message":"Registered task successfully [Task: cloud_security_posture-stats_task]","log":{"level":"INFO","logger":"plugins.cloudSecurityPosture"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:26.478+00:00","message":"Chromium sandbox provides an additional layer of protection, but is not supported for Linux Debian 2022.3 OS. Automatically setting 'xpack.screenshotting.browser.chromium.disableSandbox: true'.","log":{"level":"WARN","logger":"plugins.screenshotting.config"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:26.556+00:00","message":"Unable to retrieve version information from Elasticsearch nodes. write EPROTO 139945406744512:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:\n","log":{"level":"ERROR","logger":"elasticsearch-service"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:27.353+00:00","message":"Browser executable: /usr/share/kibana/x-pack/plugins/screenshotting/chromium/headless_shell-linux_x64/headless_shell","log":{"level":"INFO","logger":"plugins.screenshotting.chromium"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:56:25.833+00:00","message":"Timeout: it took more than 1200000ms","error":{"message":"Timeout: it took more than 1200000ms","type":"Error","stack_trace":"Error: Timeout: it took more than 1200000ms\n    at Timeout._onTimeout (/usr/share/kibana/x-pack/plugins/rule_registry/server/rule_data_plugin_service/resource_installer.js:61:20)\n    at listOnTimeout (node:internal/timers:559:17)\n    at processTimers (node:internal/timers:502:7)"},"log":{"level":"ERROR","logger":"plugins.ruleRegistry"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:56:25.835+00:00","message":"Failure installing common resources shared between all indices. Timeout: it took more than 1200000ms","error":{"message":"Failure installing common resources shared between all indices. Timeout: it took more than 1200000ms","type":"Error","stack_trace":"Error: Failure installing common resources shared between all indices. Timeout: it took more than 1200000ms\n    at ResourceInstaller.installWithTimeout (/usr/share/kibana/x-pack/plugins/rule_registry/server/rule_data_plugin_service/resource_installer.js:75:13)\n    at ResourceInstaller.installCommonResources (/usr/share/kibana/x-pack/plugins/rule_registry/server/rule_data_plugin_service/resource_installer.js:89:5)"},"log":{"level":"ERROR","logger":"plugins.ruleRegistry"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T18:13:04.021+00:00","message":"Stopping all plugins.","log":{"level":"INFO","logger":"plugins-system.preboot"},"process":{"pid":6621}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T18:13:06.346+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":6913},"trace":{"id":"560cd537e3d26b824202bb754514733c"},"transaction":{"id":"0a074cd2b84c1498"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T18:13:22.745+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":6945},"trace":{"id":"6bc4e1ffd07c11a8fa60bcd9aa1b03b4"},"transaction":{"id":"ed03a220bef9ea8f"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T18:13:44.198+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":6976},"trace":{"id":"22a4e3fbbfd83bcf1ceba858c53f3f08"},"transaction":{"id":"f3a397bcecb59573"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T18:20:39.680+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":7296},"trace":{"id":"2a9b6d1b2f0e692f69018a357d8d23d2"},"transaction":{"id":"28dec5bdbbdaa93e"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T18:20:55.120+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":7320},"trace":{"id":"a27ef5abb51551480f55cb05af8ea455"},"transaction":{"id":"2e945c432123e9ee"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T18:21:10.369+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":7340},"trace":{"id":"6d0e59ce1924ddd41a514da7f40806d3"},"transaction":{"id":"9c7ce4be96d6a1f4"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-25T10:36:29.949+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":852},"trace":{"id":"46bb3e9fe5a4b11d957fdd5afdceb14f"},"transaction":{"id":"cc4b350bc2d86a15"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-25T10:37:58.559+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":1826},"trace":{"id":"89a9d60135a112d9c6377c885a89b291"},"transaction":{"id":"54da3178861787fe"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-25T10:38:14.443+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":1852},"trace":{"id":"bf02491bc0e36c2845dc3cd2610de1a2"},"transaction":{"id":"76ef7180515a0627"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-25T10:38:30.466+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":1886},"trace":{"id":"491b044079cd1c680628261b3c2b1278"},"transaction":{"id":"c2e161217b5ebd94"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-25T10:41:27.552+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":2189},"trace":{"id":"c60bd70c330cf6406fc9fdb43e0816e4"},"transaction":{"id":"2986d484eef40047"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-25T10:41:44.457+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":2216},"trace":{"id":"111e4254e33a73d5904510f38b4bf02d"},"transaction":{"id":"9e4464e98cfe8446"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-25T10:42:02.106+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":2287},"trace":{"id":"a959aceb079aaa6931f93caf2197e1d5"},"transaction":{"id":"c1f168550ef19eca"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-25T11:05:32.251+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":2426},"trace":{"id":"3058bedf73dae9f430163ada20f059fa"},"transaction":{"id":"0a7e55e8b7efa173"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-25T11:05:48.558+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":2457},"trace":{"id":"419ce14036f73752261b252dde456ad8"},"transaction":{"id":"cc4f658a68874961"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-25T11:06:04.372+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":2503},"trace":{"id":"69cd1451bb1a6fb5cfc0791039c1ae9d"},"transaction":{"id":"6c265aeeda9a020c"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-25T11:33:05.922+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":2860},"trace":{"id":"2af71c24b2cff229b73603d23c244618"},"transaction":{"id":"e033999812e901ac"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-25T11:33:21.834+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":2886},"trace":{"id":"080fe3a57ad849df2a2d953f9ca61197"},"transaction":{"id":"e59ad4a865b6cebd"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-25T11:33:37.874+00:00","message":"Kibana process configured with roles: [background_tasks, ui]","log":{"level":"INFO","logger":"node"},"process":{"pid":2904},"trace":{"id":"acc6c5a92c4ded5159383197edfa93b5"},"transaction":{"id":"885b7c4c5a087a99"}}

I tried to solve the issue by allocating more memory to the JVM for both Elasticsearch and Kibana.

for Elasticsearch

/etc/elasticsearch/jvm.options
################################################################
##
## JVM configuration
##
################################################################
##
## WARNING: DO NOT EDIT THIS FILE. If you want to override the
## JVM options in this file, or set any additional options, you
## should create one or more files in the jvm.options.d
## directory containing your adjustments.
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/8.5/jvm-options.html
## for more information.
##
################################################################



################################################################
## IMPORTANT: JVM heap size
################################################################
##
## The heap size is automatically configured by Elasticsearch
## based on the available memory in your system and the roles
## each node is configured to fulfill. If specifying heap is
## required, it should be done through a file in jvm.options.d,
## which should be named with .options suffix, and the min and
## max should be set to the same value. For example, to set the
## heap to 4 GB, create a new file in the jvm.options.d
## directory containing these lines:
##
-Xms4g
-Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/8.5/heap-size.html
## for more information
##
################################################################


################################################################
## Expert settings
################################################################
##
## All settings below here are considered expert settings. Do
## not adjust them unless you understand what you are doing. Do
## not edit them in this file; instead, create a new file in the
## jvm.options.d directory containing your adjustments.
##
################################################################

-XX:+UseG1GC

## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}

## heap dumps

# generate a heap dump when an allocation from the Java heap fails; heap dumps
# are created in the working directory of the JVM unless an alternative path is
# specified
-XX:+HeapDumpOnOutOfMemoryError

# exit right after heap dump on out of memory error
-XX:+ExitOnOutOfMemoryError

# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=/var/lib/elasticsearch

# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log

## GC logging
-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m
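As a side note, the header of that file says overrides should not go into jvm.options itself but into a file under jvm.options.d. A minimal sketch of that (the filename heap.options is my own choice; any name ending in .options works, and on a package install the directory would be /etc/elasticsearch/jvm.options.d — a temp dir is used here only so the sketch runs without root):

```shell
# Heap overrides belong in jvm.options.d, not in jvm.options itself.
# ES_JVM_DIR would be /etc/elasticsearch/jvm.options.d on a package install;
# falling back to a temp dir here so the sketch runs without root.
ES_JVM_DIR="${ES_JVM_DIR:-$(mktemp -d)}"
mkdir -p "$ES_JVM_DIR"

# Min and max heap must be set to the same value (4 GB here, matching the
# -Xms4g/-Xmx4g lines quoted above).
tee "$ES_JVM_DIR/heap.options" >/dev/null <<'EOF'
-Xms4g
-Xmx4g
EOF

cat "$ES_JVM_DIR/heap.options"
```

After placing the file, Elasticsearch needs a restart (e.g. sudo systemctl restart elasticsearch) for the new heap settings to take effect.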

and for Kibana

/etc/kibana/node.options
## Node command line options
## See `node --help` and `node --v8-options` for available options
## Please note you should specify one option per line

## max size of old space in megabytes
--max-old-space-size=4096

## do not terminate process on unhandled promise rejection
 --unhandled-rejections=warn

I also saw this solved thread for a similar issue (i.e. status=78),

so I followed what was suggested there and created a new, separate log file, but it did not help, so I reverted to the original log file.

I am quite lost; I hope the community will be able to provide some guidance.
Kind regards, and thanks in advance ^^.

Hi @barnabe,

Welcome to the forum :slight_smile:

It seems you are globally disabling security in Elasticsearch, but at the same time trying to enable SSL encryption:

 xpack.security.enabled: false
 xpack.security.http.ssl:
  enabled: true
 xpack.security.transport.ssl:
  enabled: true

Effectively, xpack.security.enabled: false disables all security, as the Elasticsearch log confirms:

[2022-11-25T10:39:34,966][INFO ][o.e.x.s.Security ] [node-1] Security is disabled

While in Kibana an HTTPS endpoint is configured:

 elasticsearch.hosts: ['https://192.168.0.10:9200']

and the connection to Elasticsearch fails with an SSL error:

{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-11-24T17:36:26.556+00:00","message":"Unable to retrieve version information from Elasticsearch nodes. write EPROTO 139945406744512:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:\n","log":{"level":"ERROR","logger":"elasticsearch-service"},"process":{"pid":6621},"trace":{"id":"764f4efbe7c6af2b26901fbef46a0f04"},"transaction":{"id":"75f8e4b7c1eb11c7"}}
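In other words, the two sides need to agree. As a sketch (not from the original thread; hostnames and ports mirror the configs quoted above), either run everything over plain HTTP while security is disabled, or enable security end to end and keep HTTPS:

```yaml
# Option A: security fully disabled -> Kibana must use plain HTTP
# elasticsearch.yml
xpack.security.enabled: false
# kibana.yml
elasticsearch.hosts: ['http://192.168.0.10:9200']

# Option B: security enabled -> the HTTPS endpoint in Kibana is correct
# elasticsearch.yml
xpack.security.enabled: true
xpack.security.http.ssl:
  enabled: true
# kibana.yml
elasticsearch.hosts: ['https://192.168.0.10:9200']
```

Option B would additionally need certificates and credentials configured on both sides.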

I'm not sure whether that was configured as intended during your testing. You can easily check whether a simple API call works against the Elasticsearch endpoint over HTTP or HTTPS, for example with curl and the cat health API | Elasticsearch Guide [8.5] | Elastic, adding -u username:password and --cacert [file] as needed.
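That check might look like this (a sketch only: the host comes from the config quoted above, the credentials are placeholders, and the CA path is the default for an 8.x package install — with security currently disabled, it is the plain-HTTP variant that should answer):

```shell
# With security disabled, plain HTTP should respond with cluster health:
curl http://192.168.0.10:9200/_cat/health

# If security/TLS were enabled instead, HTTPS plus credentials and the
# CA certificate generated by Elasticsearch would be needed:
curl --cacert /etc/elasticsearch/certs/http_ca.crt \
     -u elastic:your_password \
     https://192.168.0.10:9200/_cat/health
```

If the HTTP call succeeds while Kibana is pointed at https://, that confirms the protocol mismatch behind the "wrong version number" SSL error above.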

Kind regards, and a nice weekend!