My logs after starting Logstash:
elasticsearch.log
logstash-plain.log
gc.log
[2023-01-17T11:12:34,472][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/etc/logstash/conf.d/*.conf"}
You don't have any file there. Save the sample from above as "/etc/logstash/conf.d/test.conf" and restart Logstash.
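For reference, a minimal sketch of what such a pipeline file could look like, reconstructed from the beats input and elasticsearch output that show up in the logs later in this thread; the password and the CA path are placeholders, not values from your setup:
# write a minimal beats -> elasticsearch pipeline (placeholder credentials/paths)
sudo tee /etc/logstash/conf.d/test.conf > /dev/null <<'EOF'
input {
  beats { port => 5044 }
}
output {
  elasticsearch {
    hosts => ["https://127.0.0.1:9200"]
    user => "elastic"
    password => "CHANGEME"
    # path to the CA that signed the Elasticsearch HTTP certificate (placeholder)
    cacert => "/etc/logstash/http_ca.crt"
  }
}
EOF
sudo systemctl restart logstash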
I did that, but the result is the same. I re-uploaded the logs; the links above are the same.
Can you run this and share the output:
ls -l /etc/logstash/conf.d/
You need to increase the memory of your server, or set the memory for both Logstash and Elasticsearch in their respective jvm.options files.
For Logstash you need to edit the file /etc/logstash/jvm.options, find the part where you have the Xms and Xmx settings, and set the memory there.
You may use:
-Xms1g
-Xmx1g
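If you want to make that edit from the command line, here is a minimal sketch, assuming the stock jvm.options layout where the heap settings are the only lines starting with -Xms/-Xmx (adjust by hand if your file differs):
# set both heap bounds for Logstash to 1g, verify, and restart
sudo sed -i -E 's/^[# ]*-Xms.*/-Xms1g/; s/^[# ]*-Xmx.*/-Xmx1g/' /etc/logstash/jvm.options
grep -E '^-Xm[sx]' /etc/logstash/jvm.options    # expect -Xms1g and -Xmx1g
sudo systemctl restart logstash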
For Elasticsearch you need to add a new file in the path /etc/elasticsearch/jvm.options.d/ with the following content:
-Xms1g
-Xmx1g
This will limit both Logstash and Elasticsearch to a 1 GB heap.
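A minimal sketch of the Elasticsearch step, assuming a hypothetical file name heap.options (Elasticsearch only picks up files under jvm.options.d/ whose names end in .options):
# create a heap-only JVM options file for Elasticsearch and restart it
sudo tee /etc/elasticsearch/jvm.options.d/heap.options > /dev/null <<'EOF'
-Xms1g
-Xmx1g
EOF
sudo systemctl restart elasticsearch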
total 4
-rw-r--r-- 1 root root 299 Jan 17 12:30 config.conf
The problem is that your server does not have enough memory to run the whole stack unless you limit the memory of both Elasticsearch and Logstash.
You do not have any config error; Logstash can load your config, as you can see in the following lines:
[2023-01-17T13:21:02,883][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/config.conf"], :thread=>"#<Thread:0xa3d9824 run>"}
[2023-01-17T13:21:03,485][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.6}
[2023-01-17T13:21:03,523][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2023-01-17T13:21:03,543][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
But it cannot connect to Elasticsearch, because Elasticsearch is not running:
[2023-01-17T13:21:07,822][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@127.0.0.1:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://127.0.0.1:9200/][Manticore::SocketException] Connect to 127.0.0.1:9200 [/127.0.0.1] failed: Connection refused"}
[2023-01-17T13:21:12,827][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to 127.0.0.1:9200 [/127.0.0.1] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to 127.0.0.1:9200 [/127.0.0.1] failed: Connection refused>}
Your system is killing your Elasticsearch process, as you can see in the log below:
Jan 17 11:04:54 log systemd[1]: elasticsearch.service: A process of this unit has been killed by the OOM killer.
Jan 17 11:04:54 log systemd[1]: elasticsearch.service: Failed with result 'oom-kill'.
Try what I explained in the previous answer: limit the memory for Elasticsearch and Logstash, then check whether both services stay up.
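A quick way to check, assuming the same local https://127.0.0.1:9200 endpoint and elastic user that appear in your Logstash output (-k skips certificate verification, which is only acceptable for a quick local test):
# confirm both services stay up after the restart
sudo systemctl status elasticsearch logstash --no-pager
# confirm Elasticsearch answers and show the heap limit per node
curl -k -u elastic "https://127.0.0.1:9200/_cat/nodes?h=name,heap.max"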
Thanks.
I copied /etc/elasticsearch/jvm.options to /etc/elasticsearch/jvm.options.d/ and added
-Xms1g
-Xmx1g
to that file.
Elasticsearch no longer stops, but when I run systemctl status elasticsearch.service I see:
log systemd-entrypoint[971]: [0.002s][warning][logging] Output options for existing outputs are ignored.