Logstash output plugins not working, without any error

I have configured Logstash to feed Elasticsearch indexes.
It is integrated with a PostgreSQL database and reads data using a timestamp field.
Multiple pipelines (10-12) have been configured to read the latest data from various tables with a simple SELECT statement,
write it into the corresponding Elasticsearch indexes,
and save the timestamp of the last collected data into metadata files.
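
For reference, each pipeline's metadata file is a one-line YAML document holding the last tracked timestamp. On our systems it looks roughly like this (the value shown here is made up):

--- 2020-03-10 09:01:02.000000000 +00:00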

The pipelines are scheduled to run every hour to collect the latest data.

All goes well for some time (5-6 hours to 1-2 days), and then:

Data collection still happens on schedule (the pipeline SQL, with the new time value, appears in the log).
There are no errors in the Logstash INFO log.
The Logstash metadata files are being updated properly.

But... the data is not in the Elasticsearch indexes. It looks like Elasticsearch never receives it.
I attached a file output plugin to check the data, and that output file is not being updated either.

We are using Kibana and Elasticsearch (7.1.1) as an AWS service, so I can't check what's going on at the Elasticsearch end.
Logstash is 6.8, as recommended by AWS support for pushing the data.

I am a little confused by this behavior. Data is getting lost somehow, because the output plugins are malfunctioning or not working, without leaving us any clue.

Can anyone suggest any Logstash-level configuration to overcome this?

What I understand so far: if the metadata files are being updated, then the SQL statements are fetching the latest data. But because the output plugins are malfunctioning, those records are somehow lost and never make it into Elasticsearch, nor into the output files.
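
One thing I am considering, as a sketch (these are standard logstash.yml settings in 6.x; the path is just an example), is raising the log level and enabling the dead letter queue, so that any event the elasticsearch output cannot deliver at least leaves a trace on disk:

# logstash.yml (sketch, not our exact production file)
log.level: debug                   # temporarily, to trace output-plugin activity
dead_letter_queue.enable: true     # keep events the elasticsearch output rejects
path.dead_letter_queue: "/logstash/logstash-6.8.7/data/dlq"   # example path

Would that be the right direction, or is there something better?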

One sample pipeline is shown below (10-12 records expected per execution).

input {
    jdbc {
        # Postgres jdbc connection string to our database, testingdb
        jdbc_connection_string => "jdbc:postgresql://testserver1:1769/aggtestDB"
        # The user we wish to execute our statement as
        jdbc_user => "aggtest_owner"
        jdbc_password => "aggtest123"
        # The path to our downloaded jdbc driver
        jdbc_driver_library => "/logstash/logstash-6.8.7/externalJars/postgresql-42.2.9.jar"
        # The name of the driver class for Postgresql
        jdbc_driver_class => "org.postgresql.Driver"
        # our query
        statement => "select starttime,hlr,region,party,usage,user1,user2 from aggtestDB.rm_resource1_detail
                      where starttime > :sql_last_value order by starttime asc"
        use_column_value => true
        tracking_column => "starttime"
        tracking_column_type => "timestamp"
        last_run_metadata_path => "/logstash/logstash-6.8.7/metadata/.postgre_pipe_prod1_node1_rm_resource1_detail"
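        # rufus-scheduler cron syntax: run at minute 1 of every hour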
        schedule => "1 */1 * * *"
    }
}

filter {
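        # tag every event with the source system and the time Logstash collected it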
        mutate {
            add_field => { "systemIdentifier" => "prod1_node1" }
            add_field => { "CollectionTime"   => "%{@timestamp}"}
        }
}

output {
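    # console copy of every event, for debugging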
    stdout {
        codec => rubydebug
    }

    file {
        path => "/logstash/logstash-6.8.7/file_date/rm_resource1_detail-debug-%{+YYYY-MM-dd}"
    }

    elasticsearch {
        hosts => "sample-vpc-prod-southeast-1.es.amazonaws.com:443"
        index => "rm_resource1_detail-%{+YYYY.MM.dd}"
        ssl => true
    }
}
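
While the pipeline is in the bad state, I can also pull the per-plugin event counters from the Logstash monitoring API (assuming the default API port 9600):

curl -s 'http://localhost:9600/_node/stats/pipelines?pretty'

If the output plugins' out counters stop increasing while the input keeps producing events, that should at least show where the events are being dropped.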

This looks like a dead thread, as no one is updating it.
I don't want to assume it is a Logstash bug; can anyone confirm whether this kind of behavior has already been seen in other deployments?

Or shall I still expect some suggestions, please?
