Hey Everyone,
I'm having some trouble with my Logstash config for exporting Elasticsearch data to CSV after upgrading my ELK stack from 7 to 8.6.
When I run my exporter.conf file, it never flushes to disk. It fills up the reserved memory and, after some time, exits with an out-of-heap-memory error. Here is the config with the private parts removed:
input {
  elasticsearch {
    hosts => ["elastic1", "elastic2", "elastic3"]
    index => "example_index*"
    query => '
    {
      "query": {
        "bool": {
          "must": [
            {
              "range": {
                "@timestamp": {
                  "gte": "2023-01-01T00:00:00",
                  "lte": "2023-01-31T23:59:59"
                }
              }
            }
          ],
          "must_not": [
            {
              "match_phrase": {
                "Origin.keyword": "something"
              }
            },
            ...
            {
              "match_phrase": {
                "Origin.keyword": "something_else"
              }
            }
          ]
        }
      }
    }
    '
    ca_file => "/etc/logstash/elasticsearch-ca.pem"
    user => "user"
    password => "Secure_Password_Trust_Me"
    ssl => true
    scroll => "60m"
    size => 10000
    slices => 4
  }
}
output {
  csv {
    fields => [
      "@timestamp",
      ...
      "Log Type"
    ]
    path => '/root/logstash/exporter/export_2023-01.csv'
  }
}
I've tried specifying flush_interval, setting it to 2 (the default) as well as 0, which, per the documentation, should flush every message to disk. So far, nothing has worked. Bear in mind, it's not a configuration issue: the config hasn't changed since the upgrade, and the documentation for both the Elasticsearch input plugin and the CSV output plugin is unchanged.
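For completeness, this is roughly how I set it in the csv output (flush_interval is the only change from the config above; the fields list is the same one shown earlier):

```
output {
  csv {
    fields => [
      "@timestamp",
      ...
      "Log Type"
    ]
    path => '/root/logstash/exporter/export_2023-01.csv'
    # 0 should flush on every message; 2 (seconds) is the default
    flush_interval => 0
  }
}
```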
Here is the command I use to run the exporter:
nohup /usr/share/logstash/bin/logstash -f /root/logstash/exporter/exporter.conf > f.out 2> f.err < /dev/null &
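In case it's relevant: I haven't touched the heap settings, so Logstash is presumably running with the defaults from jvm.options. If bumping the heap is worth trying, I assume it would look something like this (paths are from my install; the 4g values are just a guess, not something I've verified fixes anything):

```shell
# Check the current heap limits in the default Logstash config location:
grep -E '^-Xm[sx]' /usr/share/logstash/config/jvm.options

# One-off override without editing jvm.options (LS_JAVA_OPTS is appended
# after the file's options, so these -Xms/-Xmx values take precedence):
LS_JAVA_OPTS="-Xms4g -Xmx4g" /usr/share/logstash/bin/logstash \
  -f /root/logstash/exporter/exporter.conf
```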
f.out file output:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid11507.hprof ...
Heap dump file created [3001199901 bytes in 12.943 secs]
[FATAL] 2023-02-01 12:04:16.239 [[main]|input|elasticsearch|slice_0] Logstash - uncaught error (in thread [main]|input|elasticsearch|slice_0)
java.lang.OutOfMemoryError: Java heap space
I've truncated the f.out output as the Java error is quite large. If the full trace would help, let me know and I'll edit the post.
f.err file output:
/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/manticore-0.9.1-java/lib/manticore/client.rb:527: warning: already initialized constant Manticore::Client::HttpEntityEnclosingRequestBase
/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/manticore-0.9.1-java/lib/manticore/client.rb:536: warning: already initialized constant Manticore::Client::StringEntity
/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/manticore-0.9.1-java/lib/manticore/client.rb:536: warning: already initialized constant Manticore::Client::StringEntity
/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/manticore-0.9.1-java/lib/manticore/client.rb:536: warning: already initialized constant Manticore::Client::StringEntity
Interestingly, if I send a SIGTERM to the background process, it terminates without issues and writes everything it has processed to the file, which means it isn't a problem with writing to the destination file. It's also not SELinux related, since I've tested it in permissive mode.
[WARN ] 2023-02-01 11:50:43.375 [SIGTERM handler] runner - SIGTERM received. Shutting down.
[INFO ] 2023-02-01 11:50:43.898 [[main]>worker3] csv - Opening file {:path=>"/root/logstash/exporter/export_2023-01.csv"}
[ERROR] 2023-02-01 11:50:48.568 [Converge PipelineAction::StopAndDelete<main>] ShutdownWatcherExt - The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[INFO ] 2023-02-01 11:50:48.568 [Converge PipelineAction::StopAndDelete<main>] ShutdownWatcherExt - The queue for pipeline main is draining before shutdown.
[INFO ] 2023-02-01 11:50:57.480 [[main]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>"main"}
[INFO ] 2023-02-01 11:50:57.623 [Converge PipelineAction::StopAndDelete<main>] pipelinesregistry - Removed pipeline from registry successfully {:pipeline_id=>:main}
[INFO ] 2023-02-01 11:50:57.646 [LogStash::Runner] runner - Logstash shut down.
Anyway, long story short, I'm completely stumped as to what's happening, so any ideas or help are greatly appreciated.
Thanks in advance!
Cheers,
Luka