Logstash Event Limit

Hi Experts,

I'd like to inquire about Logstash limits. I have an event coming into Logstash with about 6,700 records, but Logstash doesn't process any of them and ignores the event.

Is there any configuration I can adjust in order for this to be processed?

Regards,
Peter

Hello,

Just to add -
Here are the additional configuration changes we made while troubleshooting the issue (following the basic instructions from https://www.elastic.co/guide/en/logstash/current/performance-troubleshooting.html):

We tried adjusting our JVM heap to 9 GB because we noticed that heap usage was constantly reaching the allocated limit (as seen in the attached image)
JVM_heap

We also increased our Logstash pipeline workers to 20 (in logstash.yml; see the sketch after this list), even with our current resources:
CPU(s): 16
Core(s) per socket: 1
Socket(s): 16
MemTotal: 16249840 kB
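
For reference, here is roughly what we changed, assuming the default jvm.options and logstash.yml files (just a sketch of our own settings, not a recommendation):

    # jvm.options - heap raised to 9 GB
    -Xms9g
    -Xmx9g

    # logstash.yml - pipeline workers raised to 20
    pipeline.workers: 20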

But we still cannot see the data coming through Logstash. Hoping for your advice.

Thanks,
Keeshia

What kind of data is it? What is the average size of a document? What does your config look like? Are there any errors in the Logstash or Elasticsearch logs (assuming you are sending data there)?

Hi Christian,

Thanks for the quick response. Please see below details:

  • We're getting transmission records from our database that aren't moving or are in error status. A sample of our data is as follows:
29340694 N https://dsctmsr:443 ERROR DEFAULT SECONDARY MXPLAN 14-OCT-18
29340695 N https://dsctmsr2:443 ERROR DEFAULT SECONDARY MXPLAN 14-OCT-18
29340699 N https://dsctmsr2:443 ERROR DEFAULT SECONDARY MXPLAN 14-OCT-18
  • Also note that we're getting data with the same structure in other environments; however, this particular production environment doesn't process data with a large number of records.

  • The average document size is about 1.39 KB. We got this number from Index Management in Kibana by dividing the storage size by the total number of documents (1.2 MB / 865)

  • The Logstash configuration is pretty long and can't be posted here. Is there any way I can send it to you?

  • And yes, we are sending data to ES. No errors in the Logstash or ES logs.

Hope you can assist.

Regards,
Peter

You can post it as a gist or use some other similar service and just link to it here.

Hi Christian,

Here you go. logstash-config.conf

Regards,
Peter

Hi Christian,

The event message went through with 1,600 records. So there really does seem to be a limit on the volume of data Logstash can process? Is that something we can configure? Appreciate your response.

Thanks,
Keeshia

You are setting the index name and document id based on fields. Are these always present in the data? Could there be any issue with them resulting in invalid index names or document ids? I would recommend increasing the logging level in Logstash to DEBUG to see if anything shows up.
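
If it helps, a quick sketch of how to turn that on, either in logstash.yml or on the command line (the pipeline path is just a placeholder):

    # logstash.yml
    log.level: debug

    # or when starting Logstash manually
    bin/logstash --log.level=debug -f /path/to/your/pipeline.conf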

Hi Christian,

Yes. These are always present as the fields are primary keys in the database.

Also, we've had the logs set to DEBUG since the start, but we can't find anything in them indicating that processing failed for our transmission index. logstash-plain.log

Any ideas what might be causing this? It seems odd that Logstash doesn't process 2,000 to 6,000 records but does process around 1,600. Here's our yaml file as well: logstash.yml

Regards,
Peter

I had another quick look at your config and suspect you might have a few issues with it. As far as I can see you are dropping the message field early on for events where [check][name] is keepalive, but I do not see this explicitly caught in the section where document ids are created (although I do not know what the data looks like, so it could hit one of the other fields). If these events fall through to the default fingerprint id generation, all such documents will get the same ID, as the message field no longer exists for these records.
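
To illustrate the pattern I am describing (this is a hypothetical reconstruction, not your actual config, and the index/field names are assumptions):

    filter {
      # the message field is dropped early for keepalive events
      if [check][name] == "keepalive" {
        mutate { remove_field => ["message"] }
      }

      # if those events later reach a fingerprint based on the message field,
      # the source is missing, so they can all end up with the same id and
      # silently overwrite each other
      fingerprint {
        source => "message"
        target => "[@metadata][fingerprint]"
        method => "SHA256"
      }
    }

    output {
      elasticsearch {
        hosts       => ["localhost:9200"]
        index       => "%{[target_index]}"              # placeholder for your field-based index name
        document_id => "%{[@metadata][fingerprint]}"
      }
    }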

An easy way to check this would be to look at the index statistics for deletes, as that can be an indication of updates being performed. You could also disable setting your own ID in the elasticsearch output and see if this makes a difference.
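
For example, something along these lines would show whether the deleted-docs count is climbing, which usually indicates documents being overwritten (the index name is a placeholder):

    curl -s 'http://localhost:9200/_cat/indices/your-transmission-index?v&h=index,docs.count,docs.deleted'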

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.