Logstash is terminated after successfully running for 3-4 hours without completing its job

We are running a pipeline that fetches data from an Oracle DB table and sends it to Elasticsearch, using the jdbc input plugin. As it is a one-time activity, we are running it as follows:

nohup logstash-6.8.2/bin/logstash -w 1 -b 1000 -f pipelines/pipeline-oracle_db.conf &

(We are using the aggregate filter, so the worker count is 1.)

As it is a one-time job, the expectation is that Logstash will shut down once the process is complete. But after running successfully for a couple of hours (2 - 3 hours), the pipeline suddenly terminated without completing the job, while Logstash kept running in the background. No output is going from Logstash to Elasticsearch.

[2019-12-04T15:13:43,318][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x14184562 run>"

input {
  jdbc {
    jdbc_connection_string => "{{ db_connection_string }}"
    jdbc_user => "{{ db_user }}"
    jdbc_password => "{{ db_password }}"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_driver_library => "{{ data_volume }}/logstash-repo/lib/ojdbc6-11.2.0.3.jar"
    statement => "SELECT * from table"
    jdbc_fetch_size => 100000  
  }
}

filter {
    ruby {
        # Drop any field whose value is nil so it is not indexed
        code => "
            event.to_hash.each do |k, v|
                event.remove(k) if v.nil?
            end
        "
    }
            
    aggregate {
        task_id => "%{id}"
        code => ""
        push_previous_map_as_event => true
        inactivity_timeout => 600
    }
    
    ruby {
        path => "{{ data_volume }}/logstash-repo/pipelines/filter/SubscriptionTypeFilter.rb"
    }
}

output {
  elasticsearch {
    document_id => "%{id}"
    document_type => "subscription"
    index => "index_name"
    hosts => ["https://{{ es_node }}:{{ es_http_port }}"]
    user => "{{ es_user }}"
    password => "{{ es_password }}"
    template => "{{ data_volume }}/logstash-repo/templates/template.json"
    template_name => "template"
    template_overwrite => true
  }
  stdout {
    codec => rubydebug
  }
}
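The nil-stripping ruby filter in the config above amounts to the following standalone Ruby logic (a sketch with made-up field values, run outside Logstash):

```ruby
# An event's fields as a plain hash, as event.to_hash would return them
hash = { "id" => 1, "name" => nil, "plan" => "basic" }

# Drop every key whose value is nil, mirroring the event.remove(k) calls
hash.reject! { |_k, v| v.nil? }

puts hash.inspect
```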

This is not a pipeline as such; you are running a single conf file from the command line.

That means Logstash is going to stop once all the data has been pulled from the Oracle DB.
You might also want to check what impact inactivity_timeout has on this.
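If the 600-second inactivity_timeout expires between batches of rows that share the same id, the aggregate filter will flush its maps early. One thing to try (illustrative values, not something tested in this thread) is raising it, and optionally adding an absolute per-task timeout:

```
aggregate {
    task_id => "%{id}"
    code => ""
    push_previous_map_as_event => true
    inactivity_timeout => 3600   # flush a map only after an hour with no new events for its id
    timeout => 7200              # optional absolute cap per task
}
```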

The inactivity timeout is for the aggregate filter. I am running with a batch size of 125, so events will persist longer. Is there any issue with the memory size? While Logstash is running, memory usage is very high. Is it possible that, with a higher batch size, the pipeline is being terminated due to lack of RAM?
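Two memory-related settings worth checking for a run like this (example values, not taken from the thread): the JVM heap in Logstash's config/jvm.options, and the jdbc input's paging options, so the driver streams the table in pages instead of holding one huge result set:

```
# config/jvm.options -- raise the heap if the host has RAM to spare (values are examples)
-Xms4g
-Xmx4g
```

```
# jdbc input: fetch the table in pages (Logstash wraps the statement in paging SQL)
jdbc {
    ...
    jdbc_paging_enabled => true
    jdbc_page_size => 100000
}
```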

My Logstash pipeline is getting terminated after 6 hours of running. Is there any default timeout set in a Logstash property that terminates its pipeline?

[2020-01-05T08:20:59,946][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x560d89ca run>"

This tells me that somewhere in your SSH settings, or somewhere in your network switch settings, there is a six-hour timeout.
Check with your network team to find out whether they have such a thing.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.