I have configured Logstash to feed Elasticsearch indices.
We have integrated with a PostgreSQL database and read data using a timestamp field.
Multiple (10-12) pipelines have been configured to read the latest data from various tables using simple SELECT statements,
write it into the corresponding Elasticsearch indices,
and save the last collected timestamp into metadata files.
The pipelines are scheduled to run every hour to collect the latest data.
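For reference, each pipeline looks roughly like the sketch below. Connection strings, table names, column names, paths, and index names here are placeholders, not the real values:

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:postgresql://db-host:5432/mydb"
    jdbc_user => "logstash"
    jdbc_driver_class => "org.postgresql.Driver"
    # Run once every hour; :sql_last_value is filled in from the metadata file
    schedule => "0 * * * *"
    statement => "SELECT * FROM my_table WHERE updated_at > :sql_last_value"
    # Track the timestamp column and persist it between runs
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
    last_run_metadata_path => "/usr/share/logstash/metadata/my_table_last_run"
  }
}
output {
  elasticsearch {
    hosts => ["https://my-es-endpoint:443"]
    index => "my_table_index"
  }
}
```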
All goes well for a while (anywhere from 5-6 hours to 1-2 days), and then the problem starts:
data collection still runs on the expected schedule (the log shows the pipeline SQL with the new time value),
there are no errors in the Logstash INFO log,
and the Logstash metadata files are being updated properly.
But the data is not in the Elasticsearch indices. It looks like Elasticsearch never receives it.
I attached a file output plugin to check the data, and that file is not being updated either.
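The debug output I added looks like this, alongside the existing elasticsearch output (the path is a placeholder):

```
output {
  # Write a copy of every event to disk so we can see whether
  # events reach the output stage at all
  file {
    path => "/usr/share/logstash/debug/my_table_debug.log"
    codec => json_lines
  }
}
```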
We are using Kibana and Elasticsearch (7.1.1) as an AWS managed service, so we can't check what's going on at the Elasticsearch end.
Logstash is at 6.8, as recommended by AWS support for pushing data.
I am a little confused by this behavior. Data is somehow getting lost because the output plugins are malfunctioning or not working, without leaving us any clue.
Can anyone suggest any Logstash-level configuration to overcome this?
What I can understand so far is this: if the metadata files are being updated, then the SQL queries are fetching the latest data. But because the output plugins are malfunctioning, that data is somehow lost and is written neither to Elasticsearch nor to the output files.
This looks like a dead thread, as no one is updating it.
I don't want to assume it is a Logstash bug before anyone confirms it; has this kind of behavior already been seen in other deployments?