Older version of Elasticsearch on VMs, self-managed: Kibana 8.17.1, Logstash 8.17.1, Elasticsearch 8.17.1. These run on plain old Ubuntu 20.04: 1 Kibana node, 1 Logstash node and 3 Elasticsearch (ES) nodes. All I did is what I normally do: apt-get update, then apt-get upgrade, i.e. applying the typical update patches. Since the most recent round of patches, I get the error "To use the full set of free features in this distribution of Kibana, please update Elasticsearch to the default distribution." As far as I know, during the last 3 years of "security updates" I've never seen this, and I've always been on the default (free) distribution. This sounds like a bug, where it maybe inadvertently tried to update ES to the paid, licensed version? Any help greatly appreciated - thanks in advance! Note this is not the containerized version, and I did use the standard "deb" install method, so doing the patches typically leaves it in a stable state; with a tar/zip install you're on your own as far as keeping all nodes on the same/similar releases.
Hi @tnjeff
Exactly what older version? Exactly what distribution?
It seems like perhaps you are / were on a very old OSS version and then tried to upgrade.
Elastic does not work that way now... it is the same distribution whether free or paid; it is just a matter of applying the license. Long ago there was an OSS version.
Perhaps reload the systemd services:
sudo /bin/systemctl daemon-reload
And make sure the old processes have actually stopped.
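For example (a minimal sketch, assuming the standard deb service names; run the relevant commands on each node after the daemon-reload above):
sudo systemctl restart elasticsearch      # on the ES nodes
sudo systemctl restart kibana             # on the Kibana node
sudo systemctl status kibana --no-pager   # confirm it shows active (running)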
Did you see this?
Yes, thanks. I stated above exactly what version I'm on and exactly what steps I took (it seems to be v8.17.1 for all 3 pieces). The other person in a similar topic around 2023-2024 said they got the same error and then thought maybe some pieces were on different versions. I have seen that, where Kibana will lag behind ES or vice versa, and then you get an error similar to "You must be running the same version of Kibana and ES..." That other person also said that it eventually 'started working again' after a reboot. I never upgraded nor attempted to upgrade anything; the last time I tried to upgrade, everything broke, so I've stayed away from that. I definitely appreciate your insights. The (free) license and so forth have been in place for almost 4 years. Again, ONLY upon doing the recent general Linux security updates did it start giving this message. When we built the new setup, we did everything from scratch and went solely with the "deb/rpm" method, and it's been clean and stable ever since (2+ years).
Below is some cleansed info - keys removed and some details redacted.
(from systemctl status kibana)
kibana.service - Kibana
Loaded: loaded (/lib/systemd/system/kibana.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2025-02-10 13:22:48 CST; 1h 11min ago
Docs: https://www.elastic.co
Main PID: 936 (node)
Tasks: 11 (limit: 4685)
Memory: 884.3M
CGroup: /system.slice/kibana.service
└─936 /usr/share/kibana/bin/../node/glibc-217/bin/node /usr/share/kibana/bin/../src/cli/dist
Feb 10 13:23:42 Mybox-Elk-Kibana kibana[936]: [2025-02-10T13:23:42.609-06:00][INFO ][plugins.reporting.config] Hashed 'xpack.reporting.encryptionKey' for this instance: [REDACTED]
Feb 10 13:23:44 Mybox-Elk-Kibana kibana[936]: [2025-02-10T13:23:44.477-06:00][INFO ][plugins.cloudSecurityPosture] Registered task successfully [Task: cloud_security_posture-stats_task]
Feb 10 13:23:46 Mybox-Elk-Kibana kibana[936]: [2025-02-10T13:23:46.097-06:00][INFO ][plugins.securitySolution.endpoint:user-artifact-packager:1.0.0] Registering endpoint:user-artifact-packager task with timeout of [20m], interval of [60s] >
Feb 10 13:23:46 Mybox-Elk-Kibana kibana[936]: [2025-02-10T13:23:46.098-06:00][INFO ][plugins.securitySolution.endpoint:complete-external-response-actions] Registering task [endpoint:complete-external-response-actions] with timeout of [5m] >
Feb 10 13:23:49 Mybox-Elk-Kibana kibana[936]: [2025-02-10T13:23:49.192-06:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. security_exception
Feb 10 13:23:49 Mybox-Elk-Kibana kibana[936]: Caused by:
Feb 10 13:23:49 Mybox-Elk-Kibana kibana[936]: cluster_block_exception: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
Feb 10 13:23:49 Mybox-Elk-Kibana kibana[936]: Root causes:
Feb 10 13:23:49 Mybox-Elk-Kibana kibana[936]: cluster_block_exception: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
Feb 10 13:23:51 Mybox-Elk-Kibana kibana[936]: [2025-02-10T13:23:51.076-06:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/node_modules/@kbn/screenshotting-plugin/chromium/headless_shell-linux_x64/headless_
When I do "--version" against kibana, it shows some interesting stuff: like apm-node "ecs.version 8.10.0", but near the bottom, shows 8.17.1, so is that significant?
root@[mybox]:/home/myadmin# /usr/share/kibana/bin/kibana --version --allow-root
{"log.level":"info","@timestamp":"2025-02-10T20:40:11.042Z","log.logger":"elastic-apm-node","ecs.version":"8.10.0","agentVersion":"4.10.0","env":{"pid":30668,"proctitle":"/usr/share/kibana/bin/../node/glibc-217/bin/node","os":"linux 5.15.0-1079-azure","arch":"x64","host":"mybox-kibana","timezone":"UTC-0600","runtime":"Node.js v20.15.1"},"config":{"active":{"source":"start","value":true},"breakdownMetrics":{"source":"start","value":false},"captureBody":{"source":"start","value":"off","commonName":"capture_body"},"captureHeaders":{"source":"start","value":false},"centralConfig":{"source":"start","value":false},"contextPropagationOnly":{"source":"start","value":true},"environment":{"source":"start","value":"production"},"globalLabels":{"source":"start","value":[["kibana_uuid","f0a1b780-fa13-46c7-9c9b-6679d55045d9"],["git_rev","9b07116468368c418abf167729c8417c181f8700"]],"sourceValue":{"kibana_uuid":"f0a1b780-fa13-46c7-9c9b-6679d55045d9","git_rev":"9b07116468368c418abf167729c8417c181f8700"}},"logLevel":{"source":"default","value":"info","commonName":"log_level"},"metricsInterval":{"source":"start","value":120,"sourceValue":"120s"},"serverUrl":{"source":"start","value":"https://kibana-cloud-apm.apm.us-east-1.aws.found.io/","commonName":"server_url"},"transactionSampleRate":{"source":"start","value":0.1,"commonName":"transaction_sample_rate"},"captureSpanStackTraces":{"source":"start","sourceValue":false},"secretToken":{"source":"start","value":"[REDACTED]","commonName":"secret_token"},"serviceName":{"source":"start","value":"kibana","commonName":"service_name"},"serviceVersion":{"source":"start","value":"8.17.1","commonName":"service_version"}},"activationMethod":"require","message":"Elastic APM Node.js Agent v4.10.0"}
8.17.1
OK... what did you mean by this?
The latest is 8.17.1, so what was older? That was my question: what version did you come from?
It looks like perhaps Elasticsearch is not running properly.
How did you verify that Elasticsearch is running properly and that all nodes are upgraded and accounted for?
Perhaps take a look at these Elasticsearch endpoints:
curl https://localhost:9200/
curl https://localhost:9200/_cluster/health
curl https://localhost:9200/_cluster/state?pretty
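Since your Kibana log shows a security_exception, the cluster almost certainly has security enabled, so those calls will need credentials and a trusted CA (or -k to skip TLS verification). A minimal sketch, assuming the elastic superuser and the auto-generated HTTP CA path from a deb install (adjust to your actual cert location and user):
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200/_cluster/health?pretty
# or, accepting the risk of skipping certificate verification:
curl -k -u elastic https://localhost:9200/_cluster/health?pretty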
Oh sorry, good point - I was confused. I meant that PRIOR to this we were on an older version for quite some time - 8.6.1, I think? Sorry for the confusion.
And by that I mean I never initiated any recent upgrades of the specific versions, but, as in the past, I've seen that normal Linux updates can indeed cause an "upgrade" to a newer ES and related component versions. It could be that it only just upgraded last Thursday, or whenever I ran the updates. At one point I thought I had done the thing where you tell it to "hold at the current version," but it looks like I never did that.
So it may have been on version 8.10.0 for some time? I suppose some of the logs and/or downloads may tell the full story of which version got applied at which time. Would there be a decent way to tell from looking at logs and downloads over the past 3-4 months? That part is more of a curiosity. I'll check the specific ES node versions like you said - though I did run "--version" against each executable (Kibana, Logstash and ES) and they all appeared to indicate 8.17.1. Maybe the ES nodes didn't fully upgrade? I'll look and see what I get back. Thank you!
Er, what repositories do you have defined? Please check and share which files you have under /etc/apt/.
AFAIK Ubuntu 20.04 did not ship with elasticsearch / kibana / logstash out of the box, so somehow you, or a predecessor, will have added them. Maybe you made/kept some documentation?
Overall, you seem a little lost. Software will update itself if you tell it to do so, or sometimes it can even be default behavior that you would typically disable. If you are running the commands above, you are instructing it to update software; it's not an accident. If you have not understood that, then that's on you really.
Please avoid expressions like "normal Linux updates"; be more specific if you can, as that could mean 101 things.
I've seen that normal Linux updates can indeed cause an "upgrade" to a newer ES and related component versions.
Er, yes, if you told it so.
8.10.0 / 8.6.1 / ... ? Perhaps you can search your log files here; when you upgrade packages via the apt tools, that version upgrade is typically logged. I don't recall exactly where in 20.04; on 24.04 I used this:
# PACKAGE="google-chrome" && { zcat /var/log/apt/term.log.1.gz ; cat /var/log/apt/term.log ; } | egrep "Log started|Unpacking ${PACKAGE}" | fgrep -C1 "${PACKAGE}"
Log started: 2025-01-03 20:57:44
Unpacking google-chrome-stable (131.0.6778.204-1) ...
Log started: 2025-01-03 20:58:22
--
Log started: 2025-01-17 17:45:02
Unpacking google-chrome-stable (132.0.6834.83-1) over (131.0.6778.204-1) ...
Log started: 2025-01-17 17:46:25
--
Log started: 2025-01-24 20:53:17
Unpacking google-chrome-stable (132.0.6834.110-1) over (132.0.6834.83-1) ...
Log started: 2025-01-26 00:00:56
--
Log started: 2025-01-30 16:06:12
Unpacking google-chrome-stable (132.0.6834.159-1) over (132.0.6834.110-1) ...
Log started: 2025-01-30 18:00:13
--
Log started: 2025-02-06 23:45:27
Unpacking google-chrome-stable (133.0.6943.53-1) over (132.0.6834.159-1) ...
Log started: 2025-02-07 23:42:20
showing (e.g.) when the google-chrome package was updated on my system. But typically those log files cycle very quickly.
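A rough sketch of the same idea pointed at the Elastic packages instead, assuming the apt term logs on your boxes have not yet rotated away (adjust the file names if you have more rotations):
for PACKAGE in elasticsearch kibana logstash ; do
  { zcat /var/log/apt/term.log.*.gz 2>/dev/null ; cat /var/log/apt/term.log ; } | egrep "Log started|Unpacking ${PACKAGE}" | fgrep -C1 "${PACKAGE}"
done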
Anyway, that's all a bit irrelevant; the point is to get you back to a working system, right? The output of
apt list | egrep '^(elasticsearch|kibana|logstash)'
on your various systems would maybe help. And we won't be able to help much until you post the responses to the curl requests @stephenb asked for.
Thanks again. I ran out of time; I'll respond when I'm back at my desk. Correct, none of this came out of the box.
FYI, when I say regular updates, it's just apt-get update followed by apt-get upgrade. And I just now verified that I did indeed exclude ES, Logstash and Kibana from being upgraded, but sadly my exclusions were reverted, so the components did get upgraded - which is not necessarily a bad thing.
Aha! Right, to clarify on 'regular updates': I did indeed have all ELK components excluded from being updated because a developer requested we stay on 8.6.1 - I think that was the version. But the exclusion did get reverted, so when I ran the updates we indeed got upgraded to 8.10.0, and that worked fine. As for validating, I always run "systemctl status kibana" (and logstash and elasticsearch, respectively) to make sure they show as active/running. I also log in from the Kibana UI, which will definitely tell me if something is wrong. Additionally, I often look further with "journalctl," etc.
Best way IMO to stop apt updating things: remove the repo from /etc/apt/... (or don't put it there in the first place). But this is personal taste.
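If you'd rather keep the repo in place but pin the stack, apt-mark hold is another common approach; a minimal sketch, assuming the standard deb package names (run on each node for whichever of the packages is installed there):
sudo apt-mark hold elasticsearch kibana logstash    # pin at the currently installed version
apt-mark showhold                                   # verify the holds are in place
sudo apt-mark unhold elasticsearch kibana logstash  # later, when you upgrade deliberately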
That is excellent advice. This was a quickly put-together setup, so you are on point with those comments! Best to properly remove the repo if we want to go by our developers' needs and stay on a certain version. At some point we have to make the choice and move forward to newer versions anyway, so the fact that the upgrade(s) happened is actually a good thing!
Thanks Kevin, you and Stephen Brown basically gave me the answer! Yours a bit more clearly allowed me to fix the issue: as I listed out the versions with the "egrep" command you mentioned, I saw that both Kibana and Logstash were on 8.17.2 but the ES nodes were only on 8.17.1, so, BOOM, as soon as I finished running the latest 8.17.2 update on the 3 ES nodes, everything came back to normal! I really owe you guys a debt of gratitude. Thank you!
Glad we could help.
Every day is a school day.
You are so right! If we ever stop learning, we stagnate. Also, I spoke a bit too soon: the ELK cluster is still having issues, so I'll start another thread. It's acting as if it can no longer find the cert when I try to log in to Kibana. I did log in to Kibana briefly and all looked okay, except that for 8.17 it apparently wants to "migrate default Kibana monitors," so I clicked on that, and it gave a recommended "POST" command to shut down the monitors before migrating them. I didn't fully follow through, since I wasn't yet able to issue that POST command on all the nodes, but that's the only thing I saw that was different from logging in to Kibana on versions before 8.17. I can open another thread and post the results of the egrep and such.
It turned out that something in the 8.16 upgrade caused the "certs" folder to revert to being partly owned by root, so I had to run:
sudo chown -R elasticsearch:elasticsearch /etc/elasticsearch/certs
sudo chmod 0400 /etc/elasticsearch/certs/*
I went ahead & did that on Kibana (kibana:kibana for the above) and on the 3 ES nodes.
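For anyone following along, here is a quick way to confirm the ownership and modes ended up as intended; the Kibana path below is just an assumption, so adjust it to wherever your certs actually live:
ls -l /etc/elasticsearch/certs/    # expect elasticsearch:elasticsearch and -r-------- (0400)
ls -l /etc/kibana/certs/           # hypothetical path; expect kibana:kibana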
Now I simply get login errors, which seem to be related to some of the version differences among 8.6.1, 8.10.1 and 8.16.1. I'll see if I can work through that. One related error was "can't bulk-upload stats," meaning the cluster's own internal stats. Cheers!
The login errors were due to the Elasticsearch nodes running out of memory. After a couple of reboots, they came back to a normal 2GB memory usage out of about 4GB. Total capacity is 4GB per node, and I have the heap set at 50% of that in jvm.options. Thanks again, guys - thanks to your guidance, I was able to get the cluster back to a normal state.
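For reference, a minimal sketch of how that 50% heap setting can be expressed as a jvm.options override on a deb install (the drop-in directory is the standard location; the file name is just an example):
# /etc/elasticsearch/jvm.options.d/heap.options
-Xms2g
-Xmx2g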