Logstash persistent queue empty but all events coming in fine

I have configured Logstash to use a persistent queue.
As per documentation,
"When an input has events ready to process, it writes them to the queue."

I was expecting to see some files in the folder '/Data/logstash/data/queue', but there were none during the whole run.

There is nothing wrong functionally. All the expected events did come through. So I am wondering how the implementation is designed.

Is it that files are written to the queue folder only when the current processing capacity is overwhelmed? Or, by design, does every event have to be written to the queue before it is picked up by filters+outputs?
If so, I should have seen some files. Or is my configuration wrong?
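
For the record, I was checking with something along these lines while the pipeline was running:

ls -lR /Data/logstash/data/queue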

Details
Version: 7.7.0
16 GB machine with 8GB allocated to Logstash via /etc/logstash/jvm.options
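
The heap lines in jvm.options are along these lines (quoting from memory, not my exact file):

-Xms8g
-Xmx8g
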
logstash.yml contents:

# ------------ Data path ------------------

path.data: /Data/logstash/data

# ------------ Queuing Settings --------------

queue.type: persisted
# path.queue:
# queue.page_capacity: 64mb
# queue.max_events: 0
queue.max_bytes: 300gb
# queue.checkpoint.writes: 1024
# queue.checkpoint.interval: 1000

Guys, any ideas? Should I reduce the RAM allocated to Logstash just to be able to see the files getting created?

Hi @pk.241011 .... We're not all guys... :wink:

Couple thoughts...

You could probably just start Logstash with bad creds to Elasticsearch and let it run a few minutes; I would think that would work...

You could create a Logstash user (which you really should anyway) and let it run... Change the password in Elasticsearch and reload the Logstash config (break it to simulate loss of connectivity), then change it back... simulating a reconnect.
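
E.g. something like this in the pipeline's output block; the host and creds here are just placeholders to force the failure, not a real setup:

elasticsearch {
  hosts => ["http://elasticinstance:9111"]
  user => "devops"
  password => "deliberately-wrong"
}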

Hi,

Well, here was my first attempt at creating a negative test condition. I put wrong creds in the Elasticsearch output of Logstash.

The output from Logstash was:

[WARN ] 2021-03-30 11:19:30.805 [Ruby-0-Thread-5: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://devops:xxxxxx@elasticinstance:9111/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticinstance:9111/'"}

And still nothing in the /Data/logstash/data/queue folder. And this time the data was lost.

I see a 401 error code. As per the documentation, this should have been retried.

Hi @pk.241011

How are you starting Logstash? Perhaps you have a permission issue with the data dir. It should get initialized at startup. BTW, that is a pretty big queue you have set up, 300gb... perhaps just try the defaults first?
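
For example, to check ownership of the data dir (paths per your logstash.yml) against the user Logstash actually runs as:

ls -ld /Data/logstash/data /Data/logstash/data/queue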

I changed 1 line:

# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
queue.type: persisted

I removed the Logstash data directory to make sure I started clean.

I started a local Elasticsearch and ran this conf, which reads the LICENSE file.

##################################
# Read License file
##################################
input {
  file {
    path => "/Users/sbrown/workspace/elastic-install/7.11.2/logstash-7.11.2/LICENSE.txt"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

output {
  # pump to stdout for debug
  stdout {codec => rubydebug}

  elasticsearch {
    hosts => ["localhost:9200"]
    index => "test-persistent-queue"
  }
}

I used this command; the -r means I can reload the conf and it will re-read the data / LICENSE file:

sudo ./bin/logstash -r -f ./simple-file.conf

Logstash ran as expected and loaded 223 documents / lines.
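
To double-check the count, something like this against the index from the conf works:

curl -s 'localhost:9200/test-persistent-queue/_count?pretty'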

The data/queue directory is initialized... if you are not seeing that, that is a problem. Are there any errors in your Logstash startup logs? How are you starting Elasticsearch?

ceres:logstash-7.11.2 sbrown$ ls -lR data/queue/
total 0
drwxr-xr-x  6 root  staff  192 Mar 29 19:35 main/

data/queue//main:
total 128
-rw-r--r--  1 root  staff        34 Mar 29 19:35 checkpoint.head
-rw-r--r--  1 root  staff  67108864 Mar 29 19:35 page.0
ceres:logstash-7.11.2 sbrown$ 

BTW, I then stopped Elasticsearch, then re-saved the conf file, which caused the data to be re-read, and Logstash could not send...

[2021-03-29T19:40:17,785][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

Then I re-started Elasticsearch, did not touch Logstash, and once Logstash reconnected, the data was sent; I had 446 documents.

[2021-03-29T19:41:47,145][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}

Yup, you fixed it for me. :grinning:
It was a permission issue. I was testing on the command line as the root user. I should have been using the logstash user.
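
In case anyone else hits this, a sketch of the cleanup, assuming a package install where the service runs as the logstash user:

sudo chown -R logstash:logstash /Data/logstash/data
sudo systemctl start logstash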

Thanks for the help.

