I have found myself as an administrator in an environment where the ELK stack is used for log aggregation. We need to pull in syslog data from our network devices and virtualization cluster, plus logs from our Windows and Linux machines via Elastic Agents.
A previous administrator set up the ELK stack on a Windows Server 2019 machine. Below is some of the only documentation left behind, covering how to upgrade the ELK components in an air-gapped scenario.
Elastic Upgrade

- Download Elasticsearch, Kibana, and Logstash from their website
- Place downloads in S:\Elastic\
- Log into Server-ELS01
- Unzip each ZIP file to C:\ElasticStack (they will have folder names like Elasticsearch-x.x.x)
- Stop all Elastic services
- Rename the Elastic folders to .old:
  - Elasticsearch → Elasticsearch.old
  - Kibana → Kibana.old
  - Logstash → Logstash.old
- Rename the new folders to the original folder names:
  - Elasticsearch-x.x.x → Elasticsearch
  - Kibana-x.x.x → Kibana
  - Logstash-x.x.x → Logstash
- Reinstall the Elasticsearch service with these commands (if I don't do this I get Java errors on startup):
  - cd \ElasticStack\elasticsearch\bin
  - elasticsearch-service.bat remove
  - elasticsearch-service.bat install
- Set the service to automatic startup
- Start the Elasticsearch service
- Watch the logs for errors in C:\ProgramData\Elastic\Elasticsearch\logs
- Once it's running, start the Kibana service
  - This will take a few minutes
  - Watch the logs for errors in C:\ProgramData\Elastic\kibana\logs
  - You should see an entry in the log: "[INFO ][status] Kibana is now available"
  - You can validate by logging into the web interface
- Once that's running, start the Logstash service
  - Watch the logs for errors in C:\ProgramData\Elastic\logstash\logs
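For reference, here is how I read the folder-swap portion of those steps, sketched as a script. This is only my illustration, not the previous admin's actual process: the paths and service names come from the doc above, while the DRYRUN flag and the run helper are my own additions so every command prints for review instead of executing.

```shell
# Dry-run sketch of the upgrade steps above (hypothetical; review before use).
# DRYRUN=1 prints each command instead of running it.
DRYRUN=1
run() {
  if [ "$DRYRUN" = "1" ]; then
    echo "WOULD RUN: $*"    # show the command for review
  else
    "$@"                    # actually execute it
  fi
}

for app in elasticsearch kibana logstash; do
  run net stop "$app"                                        # stop the Windows service
  run mv "C:/ElasticStack/$app" "C:/ElasticStack/$app.old"   # keep the old install for rollback
  run mv "C:/ElasticStack/$app-x.x.x" "C:/ElasticStack/$app" # promote the freshly unzipped folder
done

# Re-register the Elasticsearch service against the new install
# (the doc notes Java errors on startup if this is skipped)
run C:/ElasticStack/elasticsearch/bin/elasticsearch-service.bat remove
run C:/ElasticStack/elasticsearch/bin/elasticsearch-service.bat install
```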
Java isn't even installed on the server. I am confused because there are Elastic binaries living under %programdata%, yet the upgrade instructions say to replace the binaries in C:\ElasticStack, where those files also exist. I need some general guidance on how you would do this basic single-node deployment on Windows Server 2019 so that I can piece this together. It seems like things are configured to write Elasticsearch data to the D: drive and then back up snapshots onto our network storage, but it also seems like Logstash is not properly moving data out of its queue.
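From what I can tell so far, the data and snapshot locations come from the config files rather than the binaries. Here's a made-up sample of the settings I believe are involved (the D:\ and network-share paths are placeholders, not our real values): `path.data` and `path.repo` in elasticsearch.yml, and `queue.type` / `path.queue` in logstash.yml, plus the grep I'm using to pull them out of the real files on the server.

```shell
# Hypothetical samples of the relevant settings (placeholder paths, not ours)
cat > sample-elasticsearch.yml <<'EOF'
path.data: D:\ElasticData
path.repo: ["\\\\storage01\\es-snapshots"]
EOF
cat > sample-logstash.yml <<'EOF'
queue.type: persisted
path.queue: D:\LogstashQueue
EOF

# Pull the relevant lines out (run this against the real config/ files)
grep -E '^(path\.(data|repo)|queue\.type|path\.queue)' sample-elasticsearch.yml sample-logstash.yml
```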
Has anyone with experience deploying this basic architecture who can explain generally how things should work? Any resources you would point me to?