Logstash 7.2 takes at least 5 minutes to update ES 7.2

Hi,
I have implemented an ELK setup on version 7.2.
Below is my Logstash config:

    input {
      file {
        path => "/etc/logstash/conf.d/mytest.txt"
        start_position => "beginning"
        sincedb_path => "/dev/null"
      }
    }
    output {
      elasticsearch {
        hosts => ["<esIP>:9200"]
        index => "text"
        doc_as_upsert => true
      }
      stdout { codec => rubydebug }
    }
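
For reference, the same config can also be run in the foreground instead of as a service, which makes the rubydebug output visible on the console as events are flushed. A minimal sketch, assuming default RPM install paths (the config filename here is illustrative):

    # Run the pipeline in the foreground; paths are illustrative for an RPM install.
    /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/mytest.conf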

The contents of the text file are just a few lines for testing:

this is a test text
for logstash updation
adding another line
this is to check updates
should have 2 more doc counts

My Logstash is running as a service, and it takes 5 minutes to update/push this data to the ES server (they are separate servers).
Why is it not reflected immediately for such a small amount of data?
How do I improve this?

Hi guys,
Some additional information:
There seem to be no issues visible in the ES/Logstash logs.
In fact, no logs are generated while the data update is happening; logs are only generated when I restart the service.

Kindly help me out if you have any idea about the above!

Thanks in advance :slight_smile:

The file will be read and processed when Logstash starts up. Unless you add to it or change it, it will not be reprocessed. If you are updating the file, how do you do this? It would be helpful if you described the exact steps you are taking and what you are seeing.

@Christian_Dahlqvist,
Thanks for helping me out.
Here's what I do:

  1. Installed Logstash 7.2 as a service through yum/RPM.
  2. pipelines.yml maps the execution path to all /conf.d/*.conf files.
  3. Create a test.conf in the /conf.d location whose input reads from a text file and whose output indexes it into Elasticsearch.
  4. Start the logstash service.
  5. Go to the destination Elasticsearch and run curl -XGET http://elasticIP:9200/_cat/indices/?v
     At step 5, an index is created.
  6. Go to the text file that Logstash is reading and make an update while the logstash service is still running.
  7. Go to Elasticsearch and curl again (see the sketch after this list); only after 5 minutes is the index updated.
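
In case it's useful, this is roughly what the curl checks at steps 5 and 7 look like as a sketch (elasticIP is a placeholder, and the _count call is just an extra way to watch the document count for the "text" index):

    # List indices, their health and document counts
    curl -XGET "http://elasticIP:9200/_cat/indices/?v"

    # Count documents in the "text" index directly
    curl -XGET "http://elasticIP:9200/text/_count?pretty"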

How are you updating the file? Logstash keeps track of how far it has read and keeps tailing the file. If you make changes to data already read, it will not trigger a reprocessing unless you use an editor that in effect saves it as a new file with the same name.
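
As an illustration of that tracking: the file input normally persists how far it has read in a sincedb file, while the config above points sincedb_path at /dev/null, so the position is only kept in memory for the running process. A minimal sketch with an explicit sincedb location (the path is illustrative):

    input {
      file {
        path => "/etc/logstash/conf.d/mytest.txt"
        start_position => "beginning"
        # Illustrative path: Logstash records the read offset here, so only lines
        # appended after that offset are picked up as new events.
        sincedb_path => "/var/lib/logstash/sincedb_mytest"
      }
    }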

I am using vi mytest.txt to open that file and add a few lines, that's it. I understand it wouldn't just upsert, but would read the entire file again and process it. This is just a test and I don't have a primary key, so I'm okay with that. What I do want to avoid is the 5-minute delay, and I'm not sure of the reason for it. @Christian_Dahlqvist

Have you specified any non-standard refresh interval for the index, e.g. through an index template? Can you show us the index settings?
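
For example, something along these lines should show whether a non-default refresh_interval is set (index name taken from the config above, host is a placeholder):

    # Show the settings of the "text" index, including any refresh_interval override
    curl -XGET "http://elasticIP:9200/text/_settings?pretty"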

Hi @Christian_Dahlqvist, apologies for the delayed response.
I see the following
# pipeline:
# batch:
# size: 125
# delay: 5
#
# Or as flat keys:
#
# pipeline.batch.size: 125
# pipeline.batch.delay: 5

in the logstash.yml file. It is commented out, but could this be the default setting? Does batch.delay: 5 mean 5 minutes?
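
For reference, the Logstash documentation describes pipeline.batch.delay in milliseconds, not minutes, so the commented-out 5 above would be a 5 ms wait before dispatching an undersized batch. A sketch of the flat-key form in logstash.yml (values are only illustrative, not recommendations):

    # logstash.yml, flat-key form (illustrative values)
    # pipeline.batch.delay: how long, in milliseconds, to wait for more events
    # before flushing an undersized batch to the pipeline workers.
    pipeline.batch.size: 125
    pipeline.batch.delay: 5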

@Christian_Dahlqvist, I tried changing the above value to 0 and updated the data again; it still takes 5 minutes to reflect. Please help me out.

Hi, kindly help me out with this. I'd hate to be a disturbance, but a solution here would really help me!
Thanks in advance!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.