Failed to download geoip databases

Hi,

I had been getting geo coordinates and IP countries from my Nginx logs via Elastic Agent for a long time. Now this part is missing from newly ingested logs (from the same services), and instead of the geo details the log has this:

tags	[nginx-access, _geoip_database_unavailable_GeoLite2-City.mmdb, _geoip_database_unavailable_GeoLite2-ASN.mmdb]

Checking further, as the docs suggest:

I've run GET _ingest/geoip/stats and the result showed the databases as expired:

{
  "stats": {
    "successful_downloads": 0,
    "failed_downloads": 0,
    "total_download_time": 0,
    "databases_count": 0,
    "skipped_updates": 0,
    "expired_databases": 3
  },
  "nodes": {}
}

After running:

PUT _cluster/settings
{
  "transient": {
    "ingest.geoip.downloader.enabled": false
  }
}

and then:

PUT _cluster/settings
{
  "transient": {
    "ingest.geoip.downloader.enabled": true
  }
}

Results changed to this:

{
  "stats": {
    "successful_downloads": 0,
    "failed_downloads": 3,
    "total_download_time": 0,
    "databases_count": 0,
    "skipped_updates": 0,
    "expired_databases": 0
  },
  "nodes": {}
}

I'm able to ping geoip.elastic.co, and my ELK host is connected to the internet.

elk@elk:~$ ping geoip.elastic.co
PING geoip.elastic.co (34.72.239.183) 56(84) bytes of data.
64 bytes from 183.239.72.34.bc.googleusercontent.com (34.72.239.183): icmp_seq=1 ttl=53 time=163 ms
64 bytes from 183.239.72.34.bc.googleusercontent.com (34.72.239.183): icmp_seq=2 ttl=53 time=178 ms
64 bytes from 183.239.72.34.bc.googleusercontent.com (34.72.239.183): icmp_seq=3 ttl=53 time=175 ms
64 bytes from 183.239.72.34.bc.googleusercontent.com (34.72.239.183): icmp_seq=4 ttl=53 time=194 ms
^C
--- geoip.elastic.co ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 162.894/177.339/193.801/11.032 ms

I've updated the whole stack to the latest version, 8.17.0, including the agents and the Nginx integration.

I've added ingest.geoip.downloader.eager.download: true to elasticsearch.yml (I don't know if this is required) and then restarted Elasticsearch, with no luck so far.

Can anybody help me solve this?

Thanks in advance.

Hi @ethical20

Did you look in the Elasticsearch logs while doing your steps?

One idea: Elasticsearch uses the /tmp directory to download and prep the GeoIP files. Perhaps your /tmp directory is full or is not writable.
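
A quick way to rule that out, assuming a standard package install where the service runs as the elasticsearch user:

# check free space and permissions on /tmp
df -h /tmp
ls -ld /tmp

# verify the service user can actually write there
sudo -u elasticsearch touch /tmp/es-write-test && echo writable
sudo -u elasticsearch rm -f /tmp/es-write-test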

The second thing, quoting the docs:

By default, the processor uses the GeoLite2 City, GeoLite2 Country, and GeoLite2 ASN IP geolocation databases from MaxMind, shared under the CC BY-SA 4.0 license. It automatically downloads these databases if your nodes can connect to the storage.googleapis.com domain and either: …

So you also need to test access to that.
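
Ping only proves ICMP works; the downloader talks HTTPS on port 443, so a curl check is a better test. Something like this should do it (the geoip.elastic.co URL below is the downloader's default endpoint; any HTTP response, even a 4xx, proves the connection and TLS work):

# headers only is enough to prove connectivity
curl -sSI https://storage.googleapis.com

# the default geoip downloader endpoint; should return a JSON database listing
curl -sS "https://geoip.elastic.co/v1/database?elastic_geoip_service_tos=agree" | head -c 300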

Check those things and check your logs...

I had exactly the same issue two days ago, and just as I was getting annoyed it fixed itself after a restart and a couple of rounds of flipping ingest.geoip.downloader.enabled to false then back to true. Latest Elasticsearch version, 8.17.0, free version.

I got as far as checking via Wireshark whether it was even trying to resolve storage.googleapis.com / geoip.elastic.co, and of course by the time I was doing this I could see the DNS resolution calls on the wire.

o.e.i.g.DatabaseNodeService
o.e.i.g.DatabaseReaderLazyLoader
o.e.i.g.GeoIpDownloader

are the log tags you need to check in elasticsearch.log.
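
Something like this should pull out just those lines (log path as used later in this thread):

grep -E 'DatabaseNodeService|DatabaseReaderLazyLoader|GeoIpDownloader' /var/log/elasticsearch/elasticsearch.log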

Thanks all for getting back. Here are the Elasticsearch logs; I can see the related entries below:

[2024-12-20T13:31:17,352][ERROR][o.e.x.s.a.e.ReservedRealm] [elk-1] failed to retrieve password hash for reserved user [kibana_system]
[2024-12-20T13:31:17,520][ERROR][o.e.x.s.a.e.ReservedRealm] [elk-1] failed to retrieve password hash for reserved user [kibana_system]
[2024-12-20T13:31:20,430][ERROR][o.e.i.g.GeoIpDownloader  ] [elk-1] error downloading geoip database [GeoLite2-ASN.mmdb]
[2024-12-20T13:31:21,646][ERROR][o.e.i.g.GeoIpDownloader  ] [elk-1] error downloading geoip database [GeoLite2-City.mmdb]
[2024-12-20T13:31:22,828][ERROR][o.e.i.g.GeoIpDownloader  ] [elk-1] error downloading geoip database [GeoLite2-Country.mmdb]

Also, yes, I can ping storage.googleapis.com:

elk@elk:/$ ping storage.googleapis.com
PING storage.googleapis.com (216.58.211.219) 56(84) bytes of data.
64 bytes from mad01s25-in-f219.1e100.net (216.58.211.219): icmp_seq=1 ttl=113 time=68.4 ms
64 bytes from mad01s25-in-f219.1e100.net (216.58.211.219): icmp_seq=2 ttl=113 time=75.5 ms
64 bytes from mad01s25-in-f219.1e100.net (216.58.211.219): icmp_seq=3 ttl=113 time=65.8 ms
^C
--- storage.googleapis.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 65.800/69.893/75.452/4.074 ms

Any ideas?

You got as far as I got.

I was going to simply snoop the traffic to all the IPs for storage.googleapis.com / geoip.elastic.co, but the problem solved itself first. Even if I'd captured the network data, it likely would not have helped me much since it's all encrypted, but at least I could have seen if/when it tried. Maybe there was just some kind of rate limiting going on; the logging detail for "error downloading geoip database" is not great.

@ethical20

Turn up the logging... and you will see much more detail.
The last person I saw in this state had bad perms on /tmp.

# turn on geoip logging 
PUT _cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.ingest.geoip" : "TRACE"
  }
}

# turn off geoip logging 
PUT _cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.ingest.geoip" : null
  }
}

For example, where it is putting the /tmp files, etc.:

{"@timestamp":"2024-12-20T17:13:56.677Z", "log.level":"DEBUG", "message":"starting reload of changed geoip database file [/tmp/elasticsearch-12191989252083075067/geoip-databases/3ujooodLTIK1nCbkPrseQQ/GeoLite2-City.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[32429bf173cc][generic][T#3]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","trace.id":"8ad2a8116e0faa22d949a75b073f003a","elasticsearch.cluster.uuid":"O2IC1_Q9RVqDXXcai2NePg","elasticsearch.node.id":"3ujooodLTIK1nCbkPrseQQ","elasticsearch.node.name":"32429bf173cc","elasticsearch.cluster.name":"docker-cluster"}

@stephenb This is what the TRACE logging shows:

[2024-12-21T09:40:33,649][TRACE][o.e.i.g.DatabaseNodeService] [elk-1] Not checking databases because geoip databases index does not exist
[2024-12-21T11:23:47,136][TRACE][o.e.i.g.DatabaseNodeService] [elk-1] Not checking databases because geoip databases index does not exist
[2024-12-21T11:35:44,663][TRACE][o.e.i.g.DatabaseNodeService] [elk-1] Not checking databases because geoip databases index does not exist
[2024-12-21T11:38:16,760][TRACE][o.e.i.g.DatabaseNodeService] [elk-1] Not checking databases because geoip databases index does not exist
[2024-12-21T11:38:18,018][TRACE][o.e.i.g.DatabaseNodeService] [elk-1] Not checking databases because geoip databases index does not exist
[2024-12-21T11:38:18,116][TRACE][o.e.i.g.DatabaseNodeService] [elk-1] Not checking databases because geoip databases index does not exist
[2024-12-21T11:38:18,188][TRACE][o.e.i.g.DatabaseNodeService] [elk-1] Not checking databases because geoip databases index does not exist

I don't think it is /tmp permissions, as this is what I'm getting:

elk@elk:~$ ls -ld /tmp
drwxrwxrwt 15 root root 4096 Dec 21 11:27 /tmp
elk@elk:~$ ls -la /tmp/
total 60
drwxrwxrwt 15 root root 4096 Dec 21 11:27 .
drwxr-xr-x 19 root root 4096 May  2  2023 ..
drwxrwxrwt  2 root root 4096 Dec 20 11:55 .font-unix
drwxrwxrwt  2 root root 4096 Dec 20 11:55 .ICE-unix
drwx------  3 root root 4096 Dec 20 11:55 snap-private-tmp
drwx------  3 root root 4096 Dec 21 11:27 systemd-private-e974d400af0541b4b9ca1035baf91433-elasticsearch.service-gpVH19
drwx------  3 root root 4096 Dec 20 11:55 systemd-private-e974d400af0541b4b9ca1035baf91433-ModemManager.service-Ktrrt2
drwx------  3 root root 4096 Dec 20 11:55 systemd-private-e974d400af0541b4b9ca1035baf91433-systemd-logind.service-grMn5q
drwx------  3 root root 4096 Dec 20 11:55 systemd-private-e974d400af0541b4b9ca1035baf91433-systemd-resolved.service-VkWdkt
drwx------  3 root root 4096 Dec 20 11:55 systemd-private-e974d400af0541b4b9ca1035baf91433-systemd-timesyncd.service-Ff4fxt
drwx------  3 root root 4096 Dec 21 03:52 systemd-private-e974d400af0541b4b9ca1035baf91433-upower.service-Z8VGMc
drwxrwxrwt  2 root root 4096 Dec 20 11:55 .Test-unix
drwx------  2 elk  elk  4096 Dec 20 12:18 tmux-1000
drwxrwxrwt  2 root root 4096 Dec 20 11:55 .X11-unix
drwxrwxrwt  2 root root 4096 Dec 20 11:55 .XIM-unix

Any ideas?

Apologies, I did not mean to show just the TRACE lines. There should have been many DEBUG messages as well, near where the ERROR was; I need those too.

If possible, provide the entire startup log. Picking and choosing log lines just makes it harder to figure out.

Looks like /tmp should be okay, but the trace also shows that nothing is getting downloaded.
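
Given the TRACE says the geoip databases index does not exist, it may also be worth checking whether the downloader's system index is there at all. A quick check (assuming the default .geoip_databases index name the downloader writes to):

GET _cat/indices/.geoip_databases?v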

Also add this:

PUT _cluster/settings
{
  "transient": {
    "ingest.geoip.downloader.eager.download": true
  }
}

Then disable and re-enable the downloader like in your first post.

And share the logs.

@stephenb, again, thanks for the follow-up.

Here is the output of sudo tail -n 500 -f /var/log/elasticsearch/elasticsearch.log:

[2024-12-23T12:13:33,701][WARN ][r.suppressed             ] [elk-1] path: /auditbeat-*%2Cwinlogbeat-*%2Clogs-endpoint.events.*%2Clogs-windows.sysmon_operational-*/_eql/search, params: {allow_no_indices=true, index=auditbeat-*,winlogbeat-*,logs-endpoint.events.*,logs-windows.sysmon_operational-*}, status: 503
org.elasticsearch.action.search.SearchPhaseExecutionException: start
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.onPhaseFailure(CanMatchPreFilterSearchPhase.java:422) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase$1.onFailure(CanMatchPreFilterSearchPhase.java:411) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:29) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:34) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]
Caused by: org.elasticsearch.action.search.SearchPhaseExecutionException: Search rejected due to missing shards [[.ds-logs-windows.sysmon_operational-default-2023.05.30-000001][0], [.ds-logs-windows.sysmon_operational-default-2023.06.29-000002][0]]. Consider using `allow_partial_search_results` setting to bypass this error.
        at org.elasticsearch.action.search.SearchPhase.doCheckNoMissingShards(SearchPhase.java:69) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.checkNoMissingShards(CanMatchPreFilterSearchPhase.java:202) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.runCoordinatorRewritePhase(CanMatchPreFilterSearchPhase.java:189) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.run(CanMatchPreFilterSearchPhase.java:144) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase$1.doRun(CanMatchPreFilterSearchPhase.java:416) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        ... 6 more
[2024-12-23T12:13:33,719][WARN ][o.e.x.e.p.RestEqlSearchAction] [elk-1] Request failed with status [SERVICE_UNAVAILABLE]:
org.elasticsearch.action.search.SearchPhaseExecutionException: start
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.onPhaseFailure(CanMatchPreFilterSearchPhase.java:422) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase$1.onFailure(CanMatchPreFilterSearchPhase.java:411) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:29) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:34) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]
Caused by: org.elasticsearch.action.search.SearchPhaseExecutionException: Search rejected due to missing shards [[.ds-logs-windows.sysmon_operational-default-2023.05.30-000001][0], [.ds-logs-windows.sysmon_operational-default-2023.06.29-000002][0]]. Consider using `allow_partial_search_results` setting to bypass this error.
        at org.elasticsearch.action.search.SearchPhase.doCheckNoMissingShards(SearchPhase.java:69) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.checkNoMissingShards(CanMatchPreFilterSearchPhase.java:202) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.runCoordinatorRewritePhase(CanMatchPreFilterSearchPhase.java:189) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.run(CanMatchPreFilterSearchPhase.java:144) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase$1.doRun(CanMatchPreFilterSearchPhase.java:416) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        ... 6 more
[2024-12-23T12:13:33,720][WARN ][r.suppressed             ] [elk-1] path: /auditbeat-*%2Cwinlogbeat-*%2Clogs-endpoint.events.*%2Clogs-windows.sysmon_operational-*/_eql/search, params: {allow_no_indices=true, index=auditbeat-*,winlogbeat-*,logs-endpoint.events.*,logs-windows.sysmon_operational-*}, status: 503
org.elasticsearch.action.search.SearchPhaseExecutionException: start
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.onPhaseFailure(CanMatchPreFilterSearchPhase.java:422) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase$1.onFailure(CanMatchPreFilterSearchPhase.java:411) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:29) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:34) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]
Caused by: org.elasticsearch.action.search.SearchPhaseExecutionException: Search rejected due to missing shards [[.ds-logs-windows.sysmon_operational-default-2023.05.30-000001][0], [.ds-logs-windows.sysmon_operational-default-2023.06.29-000002][0]]. Consider using `allow_partial_search_results` setting to bypass this error.
        at org.elasticsearch.action.search.SearchPhase.doCheckNoMissingShards(SearchPhase.java:69) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.checkNoMissingShards(CanMatchPreFilterSearchPhase.java:202) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.runCoordinatorRewritePhase(CanMatchPreFilterSearchPhase.java:189) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.run(CanMatchPreFilterSearchPhase.java:144) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase$1.doRun(CanMatchPreFilterSearchPhase.java:416) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        ... 6 more
[2024-12-23T12:13:33,744][WARN ][o.e.x.e.p.RestEqlSearchAction] [elk-1] Request failed with status [SERVICE_UNAVAILABLE]:
org.elasticsearch.action.search.SearchPhaseExecutionException: start
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.onPhaseFailure(CanMatchPreFilterSearchPhase.java:422) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase$1.onFailure(CanMatchPreFilterSearchPhase.java:411) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:29) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:34) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]
Caused by: org.elasticsearch.action.search.SearchPhaseExecutionException: Search rejected due to missing shards [[.ds-logs-windows.sysmon_operational-default-2023.05.30-000001][0], [.ds-logs-windows.sysmon_operational-default-2023.06.29-000002][0]]. Consider using `allow_partial_search_results` setting to bypass this error.
        at org.elasticsearch.action.search.SearchPhase.doCheckNoMissingShards(SearchPhase.java:69) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.checkNoMissingShards(CanMatchPreFilterSearchPhase.java:202) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.runCoordinatorRewritePhase(CanMatchPreFilterSearchPhase.java:189) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.run(CanMatchPreFilterSearchPhase.java:144) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase$1.doRun(CanMatchPreFilterSearchPhase.java:416) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        ... 6 more
[2024-12-23T12:13:33,746][WARN ][r.suppressed             ] [elk-1] path: /auditbeat-*%2Cwinlogbeat-*%2Clogs-endpoint.events.*%2Clogs-windows.sysmon_operational-*/_eql/search, params: {allow_no_indices=true, index=auditbeat-*,winlogbeat-*,logs-endpoint.events.*,logs-windows.sysmon_operational-*}, status: 503
org.elasticsearch.action.search.SearchPhaseExecutionException: start
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.onPhaseFailure(CanMatchPreFilterSearchPhase.java:422) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase$1.onFailure(CanMatchPreFilterSearchPhase.java:411) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:29) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:34) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]
Caused by: org.elasticsearch.action.search.SearchPhaseExecutionException: Search rejected due to missing shards [[.ds-logs-windows.sysmon_operational-default-2023.05.30-000001][0], [.ds-logs-windows.sysmon_operational-default-2023.06.29-000002][0]]. Consider using `allow_partial_search_results` setting to bypass this error.
        at org.elasticsearch.action.search.SearchPhase.doCheckNoMissingShards(SearchPhase.java:69) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.checkNoMissingShards(CanMatchPreFilterSearchPhase.java:202) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.runCoordinatorRewritePhase(CanMatchPreFilterSearchPhase.java:189) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.run(CanMatchPreFilterSearchPhase.java:144) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase$1.doRun(CanMatchPreFilterSearchPhase.java:416) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        ... 6 more
[2024-12-23T12:13:51,028][INFO ][o.e.c.r.a.AllocationService] [elk-1] current.health="YELLOW" message="Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.transform-notifications-000002][0], [.lists-default-000001][0]]])." previous.health="RED" reason="shards started [[.transform-notifications-000002][0], [.lists-default-000001][0]]"
[2024-12-23T12:17:54,602][ERROR][o.e.d.l.DataStreamLifecycleService] [elk-1] Data stream lifecycle encountered an error trying to roll over data stream [ilm-history-7]
org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [1] shards, but this cluster currently has [1000]/[1000] maximum normal shards open; for more information, see https://www.elastic.co/guide/en/elasticsearch/reference/8.17/size-your-shards.html#troubleshooting-max-shards-open;
        at org.elasticsearch.indices.ShardLimitValidator.validateShardLimit(ShardLimitValidator.java:117) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.aggregateIndexSettings(MetadataCreateIndexService.java:1127) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequestWithV2Template(MetadataCreateIndexService.java:678) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:421) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.MetadataRolloverService.rolloverDataStream(MetadataRolloverService.java:405) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.MetadataRolloverService.rolloverClusterState(MetadataRolloverService.java:164) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction$RolloverExecutor.executeTask(TransportRolloverAction.java:542) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction$RolloverExecutor.execute(TransportRolloverAction.java:462) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1075) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1038) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:245) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1691) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1688) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1283) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1262) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]
[2024-12-23T12:22:54,605][INFO ][o.e.x.i.IndexLifecycleRunner] [elk-1] policy [.fleet-actions-results-ilm-policy] for index [.ds-.fleet-actions-results-2024.05.29-000012] on an error step due to a transient error, moving back to the failed step [attempt-rollover] for execution. retry attempt [557]
[2024-12-23T12:22:54,610][INFO ][o.e.x.i.IndexLifecycleRunner] [elk-1] policy [metrics] for index [.ds-metrics-elastic_agent.filebeat_input-default-2024.09.17-000011] on an error step due to a transient error, moving back to the failed step [attempt-rollover] for execution. retry attempt [558]
[2024-12-23T12:22:54,927][ERROR][o.e.x.i.IndexLifecycleRunner] [elk-1] policy [.fleet-actions-results-ilm-policy] for index [.ds-.fleet-actions-results-2024.05.29-000012] failed on step [{"phase":"hot","action":"rollover","name":"attempt-rollover"}]. Moving to ERROR step
org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [1000]/[1000] maximum normal shards open; for more information, see https://www.elastic.co/guide/en/elasticsearch/reference/8.17/size-your-shards.html#troubleshooting-max-shards-open;
        at org.elasticsearch.indices.ShardLimitValidator.validateShardLimit(ShardLimitValidator.java:117) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.aggregateIndexSettings(MetadataCreateIndexService.java:1127) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequestForSystemDataStream(MetadataCreateIndexService.java:785) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:394) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.MetadataRolloverService.rolloverDataStream(MetadataRolloverService.java:405) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.MetadataRolloverService.rolloverClusterState(MetadataRolloverService.java:164) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction$RolloverExecutor.executeTask(TransportRolloverAction.java:542) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction$RolloverExecutor.execute(TransportRolloverAction.java:462) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1075) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1038) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:245) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1691) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1688) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1283) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1262) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]
[2024-12-23T12:22:54,990][ERROR][o.e.x.i.IndexLifecycleRunner] [elk-1] policy [metrics] for index [.ds-metrics-elastic_agent.filebeat_input-default-2024.09.17-000011] failed on step [{"phase":"hot","action":"rollover","name":"attempt-rollover"}]. Moving to ERROR step
org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [1000]/[1000] maximum normal shards open; for more information, see https://www.elastic.co/guide/en/elasticsearch/reference/8.17/size-your-shards.html#troubleshooting-max-shards-open;
        at org.elasticsearch.indices.ShardLimitValidator.validateShardLimit(ShardLimitValidator.java:117) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.aggregateIndexSettings(MetadataCreateIndexService.java:1127) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequestWithV2Template(MetadataCreateIndexService.java:678) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:421) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.MetadataRolloverService.rolloverDataStream(MetadataRolloverService.java:405) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.MetadataRolloverService.rolloverClusterState(MetadataRolloverService.java:164) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction$RolloverExecutor.executeTask(TransportRolloverAction.java:542) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction$RolloverExecutor.execute(TransportRolloverAction.java:462) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1075) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1038) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:245) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1691) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1688) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1283) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1262) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]
[2024-12-23T12:32:54,604][INFO ][o.e.x.i.IndexLifecycleRunner] [elk-1] policy [.fleet-actions-results-ilm-policy] for index [.ds-.fleet-actions-results-2024.05.29-000012] on an error step due to a transient error, moving back to the failed step [attempt-rollover] for execution. retry attempt [558]
[2024-12-23T12:32:54,607][INFO ][o.e.x.i.IndexLifecycleRunner] [elk-1] policy [metrics] for index [.ds-metrics-elastic_agent.filebeat_input-default-2024.09.17-000011] on an error step due to a transient error, moving back to the failed step [attempt-rollover] for execution. retry attempt [559]
[2024-12-23T12:32:54,710][ERROR][o.e.x.i.IndexLifecycleRunner] [elk-1] policy [.fleet-actions-results-ilm-policy] for index [.ds-.fleet-actions-results-2024.05.29-000012] failed on step [{"phase":"hot","action":"rollover","name":"attempt-rollover"}]. Moving to ERROR step
org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [1000]/[1000] maximum normal shards open; for more information, see https://www.elastic.co/guide/en/elasticsearch/reference/8.17/size-your-shards.html#troubleshooting-max-shards-open;
        at org.elasticsearch.indices.ShardLimitValidator.validateShardLimit(ShardLimitValidator.java:117) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.aggregateIndexSettings(MetadataCreateIndexService.java:1127) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequestForSystemDataStream(MetadataCreateIndexService.java:785) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:394) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.MetadataRolloverService.rolloverDataStream(MetadataRolloverService.java:405) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.MetadataRolloverService.rolloverClusterState(MetadataRolloverService.java:164) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction$RolloverExecutor.executeTask(TransportRolloverAction.java:542) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction$RolloverExecutor.execute(TransportRolloverAction.java:462) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1075) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1038) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:245) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1691) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1688) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1283) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1262) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]
[2024-12-23T12:32:54,714][ERROR][o.e.x.i.IndexLifecycleRunner] [elk-1] policy [metrics] for index [.ds-metrics-elastic_agent.filebeat_input-default-2024.09.17-000011] failed on step [{"phase":"hot","action":"rollover","name":"attempt-rollover"}]. Moving to ERROR step
org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [1000]/[1000] maximum normal shards open; for more information, see https://www.elastic.co/guide/en/elasticsearch/reference/8.17/size-your-shards.html#troubleshooting-max-shards-open;
        at org.elasticsearch.indices.ShardLimitValidator.validateShardLimit(ShardLimitValidator.java:117) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.aggregateIndexSettings(MetadataCreateIndexService.java:1127) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequestWithV2Template(MetadataCreateIndexService.java:678) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:421) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.MetadataRolloverService.rolloverDataStream(MetadataRolloverService.java:405) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.MetadataRolloverService.rolloverClusterState(MetadataRolloverService.java:164) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction$RolloverExecutor.executeTask(TransportRolloverAction.java:542) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction$RolloverExecutor.execute(TransportRolloverAction.java:462) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1075) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1038) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:245) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1691) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1688) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1283) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1262) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]
[2024-12-23T12:34:08,409][INFO ][o.e.c.s.ClusterSettings  ] [elk-1] updating [ingest.geoip.downloader.eager.download] from [false] to [true]
[2024-12-23T12:34:10,333][ERROR][o.e.i.g.GeoIpDownloader  ] [elk-1] error downloading geoip database [GeoLite2-ASN.mmdb]
org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [1000]/[1000] maximum normal shards open; for more information, see https://www.elastic.co/guide/en/elasticsearch/reference/8.17/size-your-shards.html#troubleshooting-max-shards-open;
        at org.elasticsearch.indices.ShardLimitValidator.validateShardLimit(ShardLimitValidator.java:117) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.aggregateIndexSettings(MetadataCreateIndexService.java:1127) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequestForSystemIndex(MetadataCreateIndexService.java:726) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:401) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:466) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.create.AutoCreateAction$TransportAction$CreateIndexTask.execute(AutoCreateAction.java:339) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.create.AutoCreateAction$TransportAction.lambda$new$0(AutoCreateAction.java:121) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1075) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1038) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:245) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1691) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1688) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1283) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1262) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]
[2024-12-23T12:34:11,611][ERROR][o.e.i.g.GeoIpDownloader  ] [elk-1] error downloading geoip database [GeoLite2-City.mmdb]
org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [1000]/[1000] maximum normal shards open; for more information, see https://www.elastic.co/guide/en/elasticsearch/reference/8.17/size-your-shards.html#troubleshooting-max-shards-open;
        at org.elasticsearch.indices.ShardLimitValidator.validateShardLimit(ShardLimitValidator.java:117) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.aggregateIndexSettings(MetadataCreateIndexService.java:1127) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequestForSystemIndex(MetadataCreateIndexService.java:726) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:401) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:466) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.create.AutoCreateAction$TransportAction$CreateIndexTask.execute(AutoCreateAction.java:339) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.create.AutoCreateAction$TransportAction.lambda$new$0(AutoCreateAction.java:121) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1075) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1038) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:245) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1691) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1688) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1283) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1262) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]
[2024-12-23T12:34:12,794][ERROR][o.e.i.g.GeoIpDownloader  ] [elk-1] error downloading geoip database [GeoLite2-Country.mmdb]
org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [1000]/[1000] maximum normal shards open; for more information, see https://www.elastic.co/guide/en/elasticsearch/reference/8.17/size-your-shards.html#troubleshooting-max-shards-open;
        at org.elasticsearch.indices.ShardLimitValidator.validateShardLimit(ShardLimitValidator.java:117) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.aggregateIndexSettings(MetadataCreateIndexService.java:1127) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequestForSystemIndex(MetadataCreateIndexService.java:726) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:401) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:466) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.create.AutoCreateAction$TransportAction$CreateIndexTask.execute(AutoCreateAction.java:339) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.create.AutoCreateAction$TransportAction.lambda$new$0(AutoCreateAction.java:121) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1075) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1038) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:245) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1691) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1688) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1283) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1262) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]
[2024-12-23T12:36:35,672][INFO ][o.e.c.s.ClusterSettings  ] [elk-1] updating [ingest.geoip.downloader.enabled] from [true] to [false]
[2024-12-23T12:36:41,909][INFO ][o.e.c.s.ClusterSettings  ] [elk-1] updating [ingest.geoip.downloader.enabled] from [false] to [true]
[2024-12-23T12:36:43,845][ERROR][o.e.i.g.GeoIpDownloader  ] [elk-1] error downloading geoip database [GeoLite2-ASN.mmdb]
org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [1000]/[1000] maximum normal shards open; for more information, see https://www.elastic.co/guide/en/elasticsearch/reference/8.17/size-your-shards.html#troubleshooting-max-shards-open;
        at org.elasticsearch.indices.ShardLimitValidator.validateShardLimit(ShardLimitValidator.java:117) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.aggregateIndexSettings(MetadataCreateIndexService.java:1127) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequestForSystemIndex(MetadataCreateIndexService.java:726) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:401) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:466) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.create.AutoCreateAction$TransportAction$CreateIndexTask.execute(AutoCreateAction.java:339) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.create.AutoCreateAction$TransportAction.lambda$new$0(AutoCreateAction.java:121) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1075) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1038) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:245) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1691) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1688) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1283) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1262) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]
[2024-12-23T12:36:45,096][ERROR][o.e.i.g.GeoIpDownloader  ] [elk-1] error downloading geoip database [GeoLite2-City.mmdb]
org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [1000]/[1000] maximum normal shards open; for more information, see https://www.elastic.co/guide/en/elasticsearch/reference/8.17/size-your-shards.html#troubleshooting-max-shards-open;
        at org.elasticsearch.indices.ShardLimitValidator.validateShardLimit(ShardLimitValidator.java:117) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.aggregateIndexSettings(MetadataCreateIndexService.java:1127) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequestForSystemIndex(MetadataCreateIndexService.java:726) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:401) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:466) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.create.AutoCreateAction$TransportAction$CreateIndexTask.execute(AutoCreateAction.java:339) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.create.AutoCreateAction$TransportAction.lambda$new$0(AutoCreateAction.java:121) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1075) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1038) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:245) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1691) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1688) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1283) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1262) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]
[2024-12-23T12:36:46,363][ERROR][o.e.i.g.GeoIpDownloader  ] [elk-1] error downloading geoip database [GeoLite2-Country.mmdb]
org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [1000]/[1000] maximum normal shards open; for more information, see https://www.elastic.co/guide/en/elasticsearch/reference/8.17/size-your-shards.html#troubleshooting-max-shards-open;
        at org.elasticsearch.indices.ShardLimitValidator.validateShardLimit(ShardLimitValidator.java:117) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.aggregateIndexSettings(MetadataCreateIndexService.java:1127) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequestForSystemIndex(MetadataCreateIndexService.java:726) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:401) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:466) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.create.AutoCreateAction$TransportAction$CreateIndexTask.execute(AutoCreateAction.java:339) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.admin.indices.create.AutoCreateAction$TransportAction.lambda$new$0(AutoCreateAction.java:121) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1075) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1038) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:245) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1691) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1688) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1283) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.action.ActionListener.run(ActionListener.java:452) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1262) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:1023) ~[elasticsearch-8.17.0.jar:?]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:27) ~[elasticsearch-8.17.0.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.lang.Thread.run(Thread.java:1575) ~[?:?]

Regarding the number of shards, I can see this:

GET _cluster/stats?filter_path=indices.shards.total

{
  "indices": {
    "shards": {
      "total": 545
    }
  }
}

So it still didn't reach 1000. Any hints from the logs?

It would be easier/better to just

egrep 'DEBUG|TRACE' elasticsearch.log

and similarly for the previous days' logs.
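
If you want to zero in on the geoip components specifically, a simple grep over the three loggers mentioned earlier should do it (the path here assumes a default deb/rpm install, adjust if yours differs):

grep -E 'DatabaseNodeService|DatabaseReaderLazyLoader|GeoIpDownloader' /var/log/elasticsearch/elasticsearch.log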

btw, you will be able to see the index, once it's created, via the _cat API, e.g.

$ curl -s -k -u "${EUSER}":"${EPASS}"  "https://${EHOST}:${EPORT}/_cat/indices/.g*?bytes=b"
green open .geoip_databases 94lOG7YvRCmNp50xApyDLg 1 0 36 0 35841617 35841617 35841617

It did not show up in the Index Management section in Kibana, even with "hidden" indices set to be shown.
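
If I recall correctly, that's because .geoip_databases is a system index, and Kibana's Index Management filters those out even when hidden indices are shown; querying _cat directly should still list it:

GET _cat/indices/.geoip*?v&expand_wildcards=all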

But in your log:

"Search rejected due to missing shards"

does not look great.

Also from the log,

this action would add [2] shards, but this cluster currently has [1000]/[1000] maximum normal shards open; for more information

is, er, possibly important 🙂

Hi @ethical20

You have more issues than just the geoip...

You are missing shards on other indices...

Search rejected due to missing shards [[.ds-logs-windows.sysmon_operational-default-2023.05.30-000001][0], ....

You need to diagnose and fix those first ...

Read this and follow the suggestions

And yes... This is an issue...

[2024-12-23T12:34:12,794][ERROR][o.e.i.g.GeoIpDownloader  ] [elk-1] error downloading geoip database [GeoLite2-Country.mmdb]
org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [1000]/[1000] maximum normal shards open; for more information, see https://www.elastic.co/guide/en/elasticsearch/reference/8.17/size-your-shards.html#troubleshooting-max-shards-open;

Those missing shards may be pushing you over the 1000-shard limit.

Run this and show the results...

GET _cluster/health

Read the docs above
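
If you want to tally every shard yourself (primaries, replicas, and unassigned all count toward the limit), something like this should work, reusing the curl variables from earlier in the thread:

curl -s -k -u "${EUSER}":"${EPASS}" "https://${EHOST}:${EPORT}/_cat/shards?h=state" | sort | uniq -c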

Thanks @stephenb, here are the results of GET _cluster/health:

{
  "cluster_name": "elasticsearch",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 544,
  "active_shards": 544,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 455,
  "unassigned_primary_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 54.454454454454456
}

As you can see, the number of shards is 544/1000, so why is Elasticsearch giving errors like this:

org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [999]/[1000] maximum normal shards open;

Also, I can confirm that disk space is still OK after running:
GET _cat/allocation?v=true&h=node,shards,disk.*

node       shards disk.indices.forecast disk.indices disk.used disk.avail disk.total disk.percent
elk-1         544               148.5gb       94.4gb   189.2gb      1.7tb      1.9tb            9
UNASSIGNED    455      

I might solve the problem by raising the shard limit to 1200 as listed here, but that still doesn't explain the logical reason behind this:

PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node": 1200
  }
}
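
(If I do raise it, I assume I can revert to the default later by nulling the setting, like this:)

PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": null
  }
}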

Any ideas?

UPDATE:
For some reason I can now see that each shard is duplicated, and yes, it reached 1000 because of these duplicates.

Example:

.ds-metrics-elastic_agent.filebeat_input-default-2024.05.23-000008 0     r      UNASSIGNED       CLUSTER_RECOVERED
.ds-logs-system.auth-default-2024.09.26-000014                     0     r      UNASSIGNED       CLUSTER_RECOVERED
.ds-metrics-elastic_agent.elastic_agent-default-2023.05.23-000001  0     r      UNASSIGNED       CLUSTER_RECOVERED

and

.ds-metrics-elastic_agent.filebeat_input-default-2024.05.23-000008 0     p      STARTED    elk-1 
.ds-logs-system.auth-default-2024.09.26-000014                     0     p      STARTED    elk-1 
.ds-metrics-elastic_agent.elastic_agent-default-2023.05.23-000001  0     p      STARTED    elk-1 
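
(For reference, the output above came from the _cat shards API, something along the lines of:)

GET _cat/shards?h=index,shard,prirep,state,node,unassigned.reason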

I think I should DELETE all the shards with state UNASSIGNED, right?

I'd be slightly worried that you don't seem to have understood why you have UNASSIGNED shards.

Anyway, I suggest you look at the number_of_replicas setting, per index.

  "number_of_nodes": 1,
  "number_of_data_nodes": 1,

says you have a one-node cluster (sic), which is fine for trying things out, but then you don't have another node to host any replicas. The "r" in some of the output above marks these replica shards, which have nowhere to go. If you are using an index template, you should update that too.

PUT /your_index/_settings
{
  "settings": {
    "number_of_replicas": 0
  }
}

(you can use wildcards to match multiple indices in /your_index)
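
For example, something like this (the patterns here are illustrative, match them to your own backing indices):

PUT /.ds-logs-*,.ds-metrics-*/_settings?expand_wildcards=all
{
  "settings": {
    "number_of_replicas": 0
  }
}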

And you may wish to read

Thanks @RainTown. I know generally about shards and the concept of replicas, which are used for redundancy and speed, but as I'm using a single node I've never set up or configured my ELK stack to use replicas.

Is it something forced by a certain version update, or what? Because I'm sure I never set this!

Replicas have been there forever, and 1 replica was/is the default too, at least to my recollection. Disabling this comes up every now and again, e.g.

What maybe catches you out is that this means your cluster state is yellow, which is not an issue for testing. But it fools you a little here: your cluster is going to be yellow because of the unassigned shards, but your actual/original problem was that you likely had no shards left, as the unassigned ones counted towards the 1000, which you (and frankly I) didn't initially realize. This likely prevented the cluster from creating the .geoip* index, as it needs a shard for that too.
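
You can sanity-check this against your own numbers: 544 active + 455 unassigned = 999 of the 1000 allowed, which matches the [999]/[1000] in the later error. Something like this pulls out just those counters:

GET _cluster/health?filter_path=active_shards,unassigned_shards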

Hi @ethical20

And @RainTown thank you for the great responses

@ethical20

  1. Replicas have been a key concept since the first release of Elasticsearch; they are part of the core design for its distributed nature.

  2. Yes, the default for number of replicas is 1; it has always been that way. Leaving your single-node cluster with missing replicas is OK, but they count in your shard total, as @RainTown pointed out. This has always been the case; you probably just did not notice until you ran into this.

  3. Yes, you can set your replicas to 0 this way... this will only change existing indices, not new ones; you would need to update the templates for that (see the sketch after this list).

PUT /_all/_settings
{
  "settings": {
    "number_of_replicas": 0
  }
}
  4. You are running a single node, which is fine for testing/debugging, but it is subject to data loss if there is data corruption or the host/VM has issues. If this data is important to you, you should be backing it up with snapshot and restore, or adding more nodes for HA/resilience.
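
As mentioned in point 3, here is a minimal template sketch (the names are made up, and Fleet-managed data streams have their own template layering, so adjust accordingly); new indices matching the pattern would then be created with no replicas:

PUT _index_template/my-logs-template
{
  "index_patterns": ["my-logs-*"],
  "template": {
    "settings": {
      "number_of_replicas": 0
    }
  }
}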

There is some old documentation that might be of interest; the APIs/examples are out of date, but the concepts still hold true...
