Getting an error on starting Elasticsearch: ingest.geoip.downloader.enabled: false

Hi, I have installed the ELK stack 8.5.1 with authentication but without HTTPS on Elasticsearch (xpack.security.http.ssl: enabled: false, keystore.path: certs/http.p12).
When I start Elasticsearch and Kibana they show as active, but when I log in to Kibana and try to access the Discover page I get an error: the server hangs and the front end then returns a "service unavailable" error. In the backend I am getting:

[ERROR][o.e.i.g.GeoIpDownloader  ] [ip-10-0-9-223.ap-south-1.compute.internal] exception during geoip databases update
org.elasticsearch.ElasticsearchException: not all primary shards of [.geoip_databases] index are active

It is a single-node setup. Do I need to change any configuration? Elasticsearch keeps failing every now and then.

Please share your entire elasticsearch.yml.

Looks like perhaps you have configured some other settings.

Also, please provide the entire startup logs, not just one line.

The geoip downloader sometimes shows some errors before it settles.
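
If it helps, something like this usually captures what we need (assuming a systemd package install and the default log path from your elasticsearch.yml; adjust the URL and credentials to your setup):

# full startup log for the service
journalctl -u elasticsearch --since today --no-pager > es-startup.log

# or the log file Elasticsearch writes itself
less /var/log/elasticsearch/elasticsearch.log

# cluster health, plus the state of the .geoip_databases shards from your error
curl -u elastic "http://localhost:9200/_cluster/health?pretty"
curl -u elastic "http://localhost:9200/_cat/shards/.geoip_databases?v"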

This is my elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: []
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# --------------------------------- Readiness ----------------------------------
#
# Enable an unauthenticated TCP readiness endpoint on localhost
#
#readiness.port: 9399
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false
ingest.geoip.downloader.enabled: false
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 17-11-2022 09:55:02
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
#cluster.initial_master_nodes: ["ip-10-0-9-223.ap-south-1.compute.internal"]
discovery.type: single-node
# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0



This is kibana.yml

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "elastic"
#elasticsearch.password: "minutus"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000


# This section was automatically generated during setup.
elasticsearch.hosts: ['http://10.0.9.223:9200']
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE2Njg2ODAyNzY1NjA6QU9sZjU3UlBTNTJRcFpmZnZ4TG9tQQ
elasticsearch.ssl.certificateAuthorities: [/var/lib/kibana/ca_1668680277725.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['http://10.0.9.223:9200'], ca_trusted_fingerprint: 44ff87aa97797b9aff6cad49f71a4f04a99743f8b46922fee8e84babd8223efa}]

This is the log

 tail -f /var/log/elasticsearch/elasticsearch.log
[2022-11-21T06:28:32,385][INFO ][o.e.c.s.ClusterApplierService] [ip-10-0-9-223.ap-south-1.compute.internal] master node changed {previous [], current [{ip-10-0-9-223.ap-south-1.compute.internal}{7npnQ-SzRfWswbi2HuCbmw}{mxXO8y3JSve3h9ESYIs7hg}{ip-10-0-9-223.ap-south-1.compute.internal}{10.0.9.223}{10.0.9.223:9300}{cdfhilmrstw}]}, term: 29, version: 892, reason: Publication{term=29, version=892}
[2022-11-21T06:28:32,465][INFO ][o.e.r.s.FileSettingsService] [ip-10-0-9-223.ap-south-1.compute.internal] starting file settings watcher ...
[2022-11-21T06:28:32,507][INFO ][o.e.h.AbstractHttpServerTransport] [ip-10-0-9-223.ap-south-1.compute.internal] publish_address {10.0.9.223:9200}, bound_addresses {[::]:9200}
[2022-11-21T06:28:32,508][INFO ][o.e.n.Node               ] [ip-10-0-9-223.ap-south-1.compute.internal] started {ip-10-0-9-223.ap-south-1.compute.internal}{7npnQ-SzRfWswbi2HuCbmw}{mxXO8y3JSve3h9ESYIs7hg}{ip-10-0-9-223.ap-south-1.compute.internal}{10.0.9.223}{10.0.9.223:9300}{cdfhilmrstw}{ml.max_jvm_size=4102029312, ml.allocated_processors=2, ml.machine_memory=8199434240, xpack.installed=true, ml.allocated_processors_double=2.0}
[2022-11-21T06:28:32,497][INFO ][o.e.r.s.FileSettingsService] [ip-10-0-9-223.ap-south-1.compute.internal] file settings service up and running [tid=51]
[2022-11-21T06:28:32,866][INFO ][o.e.l.LicenseService     ] [ip-10-0-9-223.ap-south-1.compute.internal] license [2131957f-667a-421e-95ab-585fc0a1dd2e] mode [basic] - valid
[2022-11-21T06:28:32,881][INFO ][o.e.x.s.a.Realms         ] [ip-10-0-9-223.ap-south-1.compute.internal] license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
[2022-11-21T06:28:32,891][INFO ][o.e.g.GatewayService     ] [ip-10-0-9-223.ap-south-1.compute.internal] recovered [21] indices into cluster_state
[2022-11-21T06:28:32,964][INFO ][o.e.h.n.s.HealthNodeTaskExecutor] [ip-10-0-9-223.ap-south-1.compute.internal] Node [{ip-10-0-9-223.ap-south-1.compute.internal}{7npnQ-SzRfWswbi2HuCbmw}] is selected as the current health node.
[2022-11-21T06:28:35,312][INFO ][o.e.c.r.a.AllocationService] [ip-10-0-9-223.ap-south-1.compute.internal] current.health="YELLOW" message="Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.ds-metricbeat-8.4.2-2022.11.17-000001][0]]])." previous.health="RED" reason="shards started [[.ds-metricbeat-8.4.2-2022.11.17-000001][0]]"


[2022-11-21T06:29:49,573][INFO ][o.e.c.m.MetadataDeleteIndexService] [ip-10-0-9-223.ap-south-1.compute.internal] [kibana_syslog2022.11.18/j28bnE6LTSODkOLd7QkrPQ] deleting index
[2022-11-21T06:29:49,574][INFO ][o.e.c.m.MetadataDeleteIndexService] [ip-10-0-9-223.ap-south-1.compute.internal] [elasticsearch_syslog2022.11.18/GkEheWhlSYC60fTfPPz_PA] deleting index
[2022-11-21T06:29:49,574][INFO ][o.e.c.m.MetadataDeleteIndexService] [ip-10-0-9-223.ap-south-1.compute.internal] [elasticsearch_syslog2022.11.21/puqt2R1ZQraE7pKJ2E8p3w] deleting index
[2022-11-21T06:29:49,575][INFO ][o.e.c.m.MetadataDeleteIndexService] [ip-10-0-9-223.ap-south-1.compute.internal] [jenkins_syslog2022.11.18/B3F1KpHtTkiawf4tnfyjZA] deleting index
[2022-11-21T06:29:49,575][INFO ][o.e.c.m.MetadataDeleteIndexService] [ip-10-0-9-223.ap-south-1.compute.internal] [logstash_syslog2022.11.21/9rmjjba1RVKl0KRf6BjzPw] deleting index
[2022-11-21T06:29:49,575][INFO ][o.e.c.m.MetadataDeleteIndexService] [ip-10-0-9-223.ap-south-1.compute.internal] [logstash_syslog2022.11.18/vy_jPuIBSgu3hUHo82D7Vw] deleting index
[2022-11-21T06:29:49,575][INFO ][o.e.c.m.MetadataDeleteIndexService] [ip-10-0-9-223.ap-south-1.compute.internal] [kibana_syslog2022.11.21/XRMk_LW3Q7yypjAZGXOJ7g] deleting index
[2022-11-21T06:29:50,412][INFO ][o.e.c.m.MetadataCreateIndexService] [ip-10-0-9-223.ap-south-1.compute.internal] [logstash_syslog2022.11.21] creating index, cause [auto(bulk api)], templates [], shards [1]/[1]
[2022-11-21T06:29:50,694][INFO ][o.e.c.m.MetadataMappingService] [ip-10-0-9-223.ap-south-1.compute.internal] [logstash_syslog2022.11.21/VrCJA3khRKWFy7y5Lx4O7Q] create_mapping
[2022-11-21T06:29:54,901][INFO ][o.e.c.m.MetadataCreateIndexService] [ip-10-0-9-223.ap-south-1.compute.internal] [elasticsearch_syslog2022.11.21] creating index, cause [auto(bulk api)], templates [], shards [1]/[1]
[2022-11-21T06:29:55,053][INFO ][o.e.c.m.MetadataMappingService] [ip-10-0-9-223.ap-south-1.compute.internal] [elasticsearch_syslog2022.11.21/HW7znSaCRHmX7rcbLjqxig] create_mapping
[2022-11-21T06:30:23,086][INFO ][o.e.c.m.MetadataMappingService] [ip-10-0-9-223.ap-south-1.compute.internal] [logstash_syslog2022.11.21/VrCJA3khRKWFy7y5Lx4O7Q] update_mapping [_doc]


[2022-11-21T06:31:17,279][WARN ][o.e.t.ThreadPool         ] [ip-10-0-9-223.ap-south-1.compute.internal] execution of [ReschedulingRunnable{runnable=org.elasticsearch.watcher.ResourceWatcherService$ResourceMonitor@78ef6630, interval=5s}] took [43412ms] which is above the warn threshold of [5000ms]
[2022-11-21T06:31:36,261][WARN ][o.e.t.ThreadPool         ] [ip-10-0-9-223.ap-south-1.compute.internal] execution of [ReschedulingRunnable{runnable=org.elasticsearch.monitor.jvm.JvmGcMonitorService$1@58abcf40, interval=1s}] took [15397ms] which is above the warn threshold of [5000ms]
[2022-11-21T06:31:57,318][WARN ][o.e.t.ThreadPool         ] [ip-10-0-9-223.ap-south-1.compute.internal] execution of [ReschedulingRunnable{runnable=org.elasticsearch.indices.IndexingMemoryController$ShardsIndicesStatusChecker@7638b22e, interval=5s}] took [19015ms] which is above the warn threshold of [5000ms]
[2022-11-21T06:32:12,134][WARN ][o.e.h.AbstractHttpServerTransport] [ip-10-0-9-223.ap-south-1.compute.internal] handling request [unknownId][POST][/.kibana_task_manager/_update_by_query?ignore_unavailable=true&refresh=true][Netty4HttpChannel{localAddress=/10.0.9.223:9200, remoteAddress=/10.0.9.223:54476}] took [34685ms] which is above the warn threshold of [5000ms]
[2022-11-21T06:32:35,251][WARN ][o.e.t.ThreadPool         ] [ip-10-0-9-223.ap-south-1.compute.internal] execution of [ReschedulingRunnable{runnable=org.elasticsearch.monitor.jvm.JvmGcMonitorService$1@58abcf40, interval=1s}] took [6654ms] which is above the warn threshold of [5000ms]
[2022-11-21T06:32:35,386][WARN ][o.e.t.ThreadPool         ] [ip-10-0-9-223.ap-south-1.compute.internal] timer thread slept for [6.6s/6625ms] on absolute clock which is above the warn threshold of [5000ms]
[2022-11-21T06:32:38,710][WARN ][o.e.t.ThreadPool         ] [ip-10-0-9-223.ap-south-1.compute.internal] timer thread slept for [6.6s/6654209857ns] on relative clock which is above the warn threshold of [5000ms]
[2022-11-21T06:32:52,172][WARN ][o.e.t.ThreadPool         ] [ip-10-0-9-223.ap-south-1.compute.internal] execution of [ReschedulingRunnable{runnable=org.elasticsearch.monitor.jvm.JvmGcMonitorService$1@58abcf40, interval=1s}] took [10072ms] which is above the warn threshold of [5000ms]


When I start Elasticsearch and Kibana, the services are active and I am able to log in to Kibana, but after a couple of minutes the server hangs and the Elasticsearch service fails. Nothing shows up in systemctl status elasticsearch -l.

How much RAM is on the server... What else are you running on the server... Kinda looks like the server may be busy with other processes?

8 GB RAM, 50 GB disk. Nothing else on the server, only ELK + Filebeat.

I have the Logstash plugin on Jenkins to send logs to ELK. I disabled SSL on Elasticsearch because I was facing some issues connecting from Jenkins to ELK. You can find the changes in elasticsearch.yml and kibana.yml. Does this have something to do with it?

Meaning Elasticsearch, Kibana, Logstash and Filebeat all on the same server... Yes, that's a lot.

Elasticsearch will attempt to claim half the RAM just for the JVM. If it's not there it's going to have problems.

That's going to compete with Logstash, which also wants RAM.
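
You can check what the Elasticsearch node actually claimed via the cat API (adjust host and credentials to your setup; heap.max is the ceiling the JVM grabbed):

curl -u elastic "http://localhost:9200/_cat/nodes?v&h=name,heap.max,heap.current,ram.max"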

You should probably look at setting the JVM options to maybe 2 gigabytes each or something.

Long story short, I think you have resource competition.

At the very least, start Elasticsearch first... Then everything else... But you should really set the JVM options.
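
A minimal sketch, assuming package installs at the default paths. On 8.x the supported way for Elasticsearch is a drop-in file under jvm.options.d rather than editing jvm.options itself; Logstash has no such directory, so you change the existing Xms/Xmx lines in its jvm.options:

# /etc/elasticsearch/jvm.options.d/heap.options  (new file)
-Xms2g
-Xmx2g

# /etc/logstash/jvm.options  (edit the existing lines)
-Xms2g
-Xmx2g

Restart both services afterwards so the new heap sizes take effect.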

You mean the jvm.options in /etc/elasticsearch? This is /etc/elasticsearch/jvm.options:

-Xms2g
-Xmx2g

And this is /etc/logstash/jvm.options:
## JVM configuration

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms1g
-Xmx1g

Is this correct?

elasticsearch.log

[2022-11-21T07:40:31,101][INFO ][o.e.p.PluginsService     ] [ip-10-0-9-223.ap-south-1.compute.internal] no plugins loaded
[2022-11-21T07:40:37,214][WARN ][i.n.u.i.PlatformDependent] [ip-10-0-9-223.ap-south-1.compute.internal] Failed to get the temporary directory; falling back to: /tmp
[2022-11-21T07:40:47,277][WARN ][stderr                   ] [ip-10-0-9-223.ap-south-1.compute.internal] Nov 21, 2022 7:40:47 AM org.apache.lucene.store.MMapDirectory lookupProvider
[2022-11-21T07:40:47,278][WARN ][stderr                   ] [ip-10-0-9-223.ap-south-1.compute.internal] WARNING: You are running with Java 19. To make full use of MMapDirectory, please pass '--enable-preview' to the Java command line.
[2022-11-21T07:40:47,308][INFO ][o.e.e.NodeEnvironment    ] [ip-10-0-9-223.ap-south-1.compute.internal] using [1] data paths, mounts [[/ (/dev/nvme0n1p1)]], net usable_space [41.1gb], net total_space [49.9gb], types [xfs]
[2022-11-21T07:40:47,309][INFO ][o.e.e.NodeEnvironment    ] [ip-10-0-9-223.ap-south-1.compute.internal] heap size [3.8gb], compressed ordinary object pointers [true]
[2022-11-21T07:40:49,792][INFO ][o.e.n.Node               ] [ip-10-0-9-223.ap-south-1.compute.internal] node name [ip-10-0-9-223.ap-south-1.compute.internal], node ID [7npnQ-SzRfWswbi2HuCbmw], cluster name [elasticsearch], roles [ingest, data_cold, data, remote_cluster_client, master, data_warm, data_content, transform, data_hot, ml, data_frozen]
[2022-11-21T07:41:00,536][WARN ][i.n.u.i.PlatformDependent] [ip-10-0-9-223.ap-south-1.compute.internal] Failed to get the temporary directory; falling back to: /tmp
[2022-11-21T07:41:06,748][ERROR][o.e.b.Elasticsearch      ] [ip-10-0-9-223.ap-south-1.compute.internal] fatal exception while booting Elasticsearch
java.security.AccessControlException: access denied ("java.io.FilePermission" "/tmp" "read")
        at java.security.AccessControlContext.checkPermission(AccessControlContext.java:485) ~[?:?]
        at java.security.AccessController.checkPermission(AccessController.java:1068) ~[?:?]
        at java.lang.SecurityManager.checkPermission(SecurityManager.java:411) ~[?:?]
        at java.lang.SecurityManager.checkRead(SecurityManager.java:751) ~[?:?]
        at sun.nio.fs.UnixPath.checkRead(UnixPath.java:780) ~[?:?]
        at sun.nio.fs.UnixFileSystemProvider.checkAccess(UnixFileSystemProvider.java:294) ~[?:?]
        at java.nio.file.Files.createDirectories(Files.java:772) ~[?:?]
        at org.elasticsearch.ingest.geoip.DatabaseNodeService.initialize(DatabaseNodeService.java:150) ~[?:?]
        at org.elasticsearch.ingest.geoip.IngestGeoIpPlugin.createComponents(IngestGeoIpPlugin.java:123) ~[?:?]
        at org.elasticsearch.node.Node.lambda$new$16(Node.java:709) ~[elasticsearch-8.5.1.jar:?]
        at org.elasticsearch.plugins.PluginsService.lambda$flatMap$0(PluginsService.java:252) ~[elasticsearch-8.5.1.jar:?]
        at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273) ~[?:?]
        at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) ~[?:?]
        at java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:722) ~[?:?]
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
        at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575) ~[?:?]
        at java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260) ~[?:?]
        at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616) ~[?:?]
        at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:622) ~[?:?]
        at java.util.stream.ReferencePipeline.toList(ReferencePipeline.java:627) ~[?:?]
        at org.elasticsearch.node.Node.<init>(Node.java:724) ~[elasticsearch-8.5.1.jar:?]
        at org.elasticsearch.node.Node.<init>(Node.java:318) ~[elasticsearch-8.5.1.jar:?]
        at org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:214) ~[elasticsearch-8.5.1.jar:?]
        at org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:214) ~[elasticsearch-8.5.1.jar:?]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67) ~[elasticsearch-8.5.1.jar:?]

free -m
              total        used        free      shared  buff/cache   available
Mem:           7819        1653        5171           0         994        5986
Swap:             0           0           0

Hey! This worked like a charm, thank you so much. What are the ideal system requirements for a single-node ELK machine that monitors the syslogs of 4 to 5 systems like Jenkins, Nexus and an Apache server? Currently my machine has 8 GB RAM and a 50 GB disk. How high can I set the Xms and Xmx parameters if I face the issue again?

The service failed again today, in the same way: it is active when started and fails after some time. The JVM options are 2g for Elasticsearch and 2g for Logstash, on an 8 GB RAM machine. I am getting system logs from Jenkins, Zabbix, ELK, Nexus, Apache and many more servers. What should the ideal RAM size be, and what is the ideal JVM options config in this case?

Here is the log

Nov 23 10:52:39 ip-10-0-9-223 logstash: [2022-11-23T10:52:39,494][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elastic:xxxxxx@localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused"}
Nov 23 10:52:44 ip-10-0-9-223 kibana: [2022-11-23T10:52:44.494+00:00][WARN ][plugins.licensing] License information could not be obtained from Elasticsearch due to ConnectionError: connect ECONNREFUSED 10.0.9.223:9200 error
Nov 23 10:52:44 ip-10-0-9-223 logstash: [2022-11-23T10:52:44,497][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused>}
Nov 23 10:52:44 ip-10-0-9-223 logstash: [2022-11-23T10:52:44,497][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elastic:xxxxxx@localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused"}
Nov 23 10:52:49 ip-10-0-9-223 logstash: [2022-11-23T10:52:49,501][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused>}
Nov 23 10:52:49 ip-10-0-9-223 logstash: [2022-11-23T10:52:49,501][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elastic:xxxxxx@localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused"}
Nov 23 10:52:49 ip-10-0-9-223 filebeat: {"log.level":"info","@timestamp":"2022-11-23T10:52:49.935Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":186},"message":"Non-zero metrics in the last 30s","service.name":"filebeat","monitoring":{"metrics":{"beat":{"cpu":{"system":{"ticks":38500},"total":{"ticks":939280,"time":{"ms":10},"value":939280},"user":{"ticks":900780,"time":{"ms":10}}},"handles":{"limit":{"hard":65535,"soft":65535},"open":17},"info":{"ephemeral_id":"75357dfd-dce0-4b57-853c-7a7c5458e0c1","uptime":{"ms":20130639},"version":"8.5.1"},"memstats":{"gc_next":1867247752,"memory_alloc":934228032,"memory_total":121491323904,"rss":1154785280},"runtime":{"goroutines":92}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":2}},"output":{"events":{"active":600},"read":{"bytes":36}},"pipeline":{"clients":7,"events":{"active":4101}}},"registrar":{"states":{"current":17}},"system":{"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}},"ecs.version":"1.6.0"}}
Nov 23 10:52:50 ip-10-0-9-223 kibana: [2022-11-23T10:52:50.603+00:00][WARN ][plugins.licensing] License information could not be obtained from Elasticsearch due to ConnectionError: connect ECONNREFUSED 10.0.9.223:9200 error
Nov 23 10:52:54 ip-10-0-9-223 logstash: [2022-11-23T10:52:54,504][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused>}
Nov 23 10:52:54 ip-10-0-9-223 logstash: [2022-11-23T10:52:54,505][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elastic:xxxxxx@localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused"}
Nov 23 10:52:58 ip-10-0-9-223 logstash: [2022-11-23T10:52:58,931][ERROR][logstash.outputs.elasticsearch][main][7f4ba291cce5f998e906f3936f25e9ca4c329e0dbc0b7c39be6038e0bcae90d8] Attempted to send a bulk request but there are no living connections in the pool (perhaps Elasticsearch is unreachable or down?) {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError, :will_retry_in_seconds=>64}
Nov 23 10:52:59 ip-10-0-9-223 logstash: [2022-11-23T10:52:59,508][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused>}
Nov 23 10:52:59 ip-10-0-9-223 logstash: [2022-11-23T10:52:59,508][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elastic:xxxxxx@localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/127.0.0.1] failed: Connection refused"}

If I restart the service I can curl port 9200, but the service fails again.

Without knowing how much data you are ingesting it is hard to say....

Typically you don't run everything on a single host... It seems like you don't have enough resources.
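
If it is resources, the kernel OOM killer is the usual suspect when a JVM dies without a stack trace, and it leaves a trace in the system log. Worth checking after the next failure (assumes a systemd host):

# did the kernel kill the Elasticsearch JVM?
dmesg -T | grep -i -B1 -A5 "out of memory"

# last lines the service logged before it died
journalctl -u elasticsearch -n 200 --no-pager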

My first suggestion would actually be to put Logstash and Elasticsearch on two different VMs.

If you want to run on a single host, I might try to make it 16 GB and then give Elasticsearch and Logstash 4 GB each. But really, putting them on separate hosts is probably your best idea if you want stability.
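
If you do split them, the only change on the Logstash VM is pointing its elasticsearch output at the remote Elasticsearch host instead of localhost. A sketch, reusing the IP from your kibana.yml; the password placeholder is yours to fill in:

output {
  elasticsearch {
    hosts    => ["http://10.0.9.223:9200"]
    user     => "elastic"
    password => "<your-password>"
  }
}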

I'm not doing any formatting on the logs from the machines, so what if I send logs directly from Filebeat to Elasticsearch and skip Logstash? Will that help?

Yes, probably!
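
A minimal sketch of the filebeat.yml change, assuming your Filebeat currently ships to Logstash on the default port; comment that output out and enable the Elasticsearch one (only one output may be active at a time):

# filebeat.yml
#output.logstash:
#  hosts: ["localhost:5044"]

output.elasticsearch:
  hosts: ["http://10.0.9.223:9200"]
  username: "elastic"
  password: "<your-password>"

Then restart Filebeat, and run filebeat setup once so the index templates get loaded directly into Elasticsearch.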

Ok, will try that! So what should I set the Elasticsearch JVM options to in that case?

I am getting this error when I update the JVM options to 4 GB for Elasticsearch:

 Unit elasticsearch.service entered failed state.
Nov 24 05:14:14 ip-10-0-9-223.ap-south-1.compute.internal systemd[1]: elasticsearch.service failed.
Nov 24 05:14:14 ip-10-0-9-223.ap-south-1.compute.internal kibana[1096]: [2022-11-24T05:14:14.970+00:00][ERROR][plugins.security.authentication] License is not availabl
Nov 24 05:14:14 ip-10-0-9-223.ap-south-1.compute.internal kibana[1096]: [2022-11-24T05:14:14.977+00:00][WARN ][plugins.licensing] License information could not be obta
Nov 24 05:14:18 ip-10-0-9-223.ap-south-1.compute.internal filebeat[1094]: {"log.level":"info","@timestamp":"2022-11-24T05:14:18.557Z","log.logger":"input.harvester","l
Nov 24 05:14:19 ip-10-0-9-223.ap-south-1.compute.internal logstash[1709]: [2022-11-24T05:14:19,012][INFO ][logstash.outputs.elasticsearch][main] Failed to perform requ
Nov 24 05:14:19 ip-10-0-9-223.ap-south-1.compute.internal logstash[1709]: [2022-11-24T05:14:19,013][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect
Nov 24 05:14:24 ip-10-0-9-223.ap-south-1.compute.internal logstash[1709]: [2022-11-24T05:14:24,019][INFO ][logstash.outputs.elasticsearch][main] Failed to perform requ
Nov 24 05:14:24 ip-10-0-9-223.ap-south-1.compute.internal logstash[1709]: [2022-11-24T05:14:24,019][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect
Nov 24 05:14:29 ip-10-0-9-223.ap-south-1.compute.internal logstash[1709]: [2022-11-24T05:14:29,025][INFO ][logstash.outputs.elasticsearch][main] Failed to perform requ
Nov 24 05:14:29 ip-10-0-9-223.ap-south-1.compute.internal logstash[1709]: [2022-11-24T05:14:29,025][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect
Nov 24 05:14:34 ip-10-0-9-223.ap-south-1.compute.internal logstash[1709]: [2022-11-24T05:14:34,033][INFO ][logstash.outputs.elasticsearch][main] Failed to perform requ
Nov 24 05:14:34 ip-10-0-9-223.ap-south-1.compute.internal logstash[1709]: [2022-11-24T05:14:34,033][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect
Nov 24 05:14:38 ip-10-0-9-223.ap-south-1.compute.internal filebeat[1094]: {"log.level":"info","@timestamp":"2022-11-24T05:14:38.367Z","log.logger":"monitoring","log.or
Nov 24 05:14:38 ip-10-0-9-223.ap-south-1.compute.internal kibana[1096]: [2022-11-24T05:14:38.940+00:00][WARN ][plugins.licensing] License information could not be obta
Nov 24 05:14:38 ip-10-0-9-223.ap-south-1.compute.internal kibana[1096]: [2022-11-24T05:14:38.958+00:00][WARN ][plugins.licensing] License information could not be obta
Nov 24 05:14:39 ip-10-0-9-223.ap-south-1.compute.internal logstash[1709]: [2022-11-24T05:14:39,039][INFO ][logstash.outputs.elasticsearch][main] Failed to perform requ
Nov 24 05:14:39 ip-10-0-9-223.ap-south-1.compute.internal logstash[1709]: [2022-11-24T05:14:39,040][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect
Nov 24 05:14:39 ip-10-0-9-223.ap-south-1.compute.internal dhclient[900]: XMT: Solicit on eth0, interval 123550ms.