Constant restarts of Logstash with high CPU usage recorded

Morning All,

I've installed Logstash and I am using a pipeline to pull data from Postgres via JDBC into Elasticsearch. When running it via systemctl, e.g.:

sudo systemctl start logstash.service

I can see that Logstash starts and runs my job; however, the service is then stopped and restarted over and over, as per the logs below:

May 22 05:56:13 DE-3628-SDN logstash[23538]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
May 22 05:56:19 DE-3628-SDN systemd[1]: logstash.service: Service hold-off time over, scheduling restart.
May 22 05:56:19 DE-3628-SDN systemd[1]: Stopped logstash.
May 22 05:56:19 DE-3628-SDN systemd[1]: Started logstash.
May 22 05:56:33 DE-3628-SDN logstash[23674]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
May 22 05:56:39 DE-3628-SDN systemd[1]: logstash.service: Service hold-off time over, scheduling restart.
May 22 05:56:39 DE-3628-SDN systemd[1]: Stopped logstash.
May 22 05:56:39 DE-3628-SDN systemd[1]: Started logstash.
May 22 05:56:53 DE-3628-SDN logstash[23756]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
May 22 05:56:59 DE-3628-SDN systemd[1]: logstash.service: Service hold-off time over, scheduling restart.
May 22 05:56:59 DE-3628-SDN systemd[1]: Stopped logstash.

When checking the Logstash log files I see the following:

[2017-05-22T06:04:38,033][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-05-22T06:04:38,242][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x1ecd22b URL:http://localhost:9200/>}
[2017-05-22T06:04:38,245][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-05-22T06:04:38,325][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_$
[2017-05-22T06:04:38,330][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x186681 URL://localhost:9200$
[2017-05-22T06:04:38,337][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inf$
[2017-05-22T06:04:39,103][INFO ][logstash.pipeline        ] Pipeline main started
[2017-05-22T06:04:39,193][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-05-22T06:04:39,204][INFO ][logstash.inputs.jdbc     ] (0.097000s) select planet.id, planet.x || ':' || planet.y || ':' || planet.z coords, planet.x, planet.y, planet.z ,planetname,ru$
[2017-05-22T06:04:42,126][WARN ][logstash.agent           ] stopping pipeline {:id=>"main"}
[2017-05-22T06:04:59,460][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2017-05-22T06:04:59,469][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-05-22T06:04:59,625][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x100fd52 URL:http://localhost:9200/>}
[2017-05-22T06:04:59,627][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-05-22T06:04:59,684][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_$
[2017-05-22T06:04:59,689][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x1957d77 URL://localhost:920$
[2017-05-22T06:04:59,696][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inf$
[2017-05-22T06:05:00,519][INFO ][logstash.pipeline        ] Pipeline main started
[2017-05-22T06:05:00,610][INFO ][logstash.inputs.jdbc     ] (0.083000s) select planet.id, planet.x || ':' || planet.y || ':' || planet.z coords, planet.x, planet.y, planet.z ,planetname,ru$
[2017-05-22T06:05:00,616][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2017-05-22T06:05:01,912][WARN ][logstash.runner          ] SIGTERM received. Shutting down the agent.
[2017-05-22T06:05:01,917][WARN ][logstash.agent           ] stopping pipeline {:id=>"main"}

The indexes are being created perfectly fine, and I wouldn't mind the restarts on their own, but the CPU usage is through the roof, as per the following screenshot (in fact, a number that high must be being reported incorrectly):

My Logstash pipeline file is configured with a schedule to run at the fifth minute of every hour. I would have expected Logstash to idle while waiting for that time, but it appears to be running every few seconds, as the logs above show.

My pipeline config file is:

input {
    jdbc {
        # Postgres JDBC connection string to our database, mydb
        jdbc_connection_string => "jdbc:postgresql://localhost:xxxx/xxxx"
        # The user we wish to execute our statement as
        jdbc_user => "xxxx"
        jdbc_password => "xxx"
        # The path to our downloaded JDBC driver
        jdbc_driver_library => "/usr/share/java/postgresql-jdbc4.jar"
        # The name of the driver class for PostgreSQL
        jdbc_driver_class => "org.postgresql.Driver"
        # Schedule for input
        #schedule => "5 * * * *"
        # Our query
        statement => "select planet.id, planet.x || ':' || planet.y || ':' || planet.z coords, planet.x, planet.y, planet.z ,planetname,rulername,race,planet.size,planet.score,planet.value$
    }
}
output {
    elasticsearch {
        hosts => [ "localhost:9200" ]
        index => "universe"
        document_type => "planet"
        document_id => "%{id}"
    }
}

JVM config options are set to:

-Xms256m
-Xmx1g

This is a fresh install using the Debian package on Ubuntu 16.

Any ideas why it is restarting so much and why the schedule is not being adhered to?

Your schedule is commented out. Without a schedule, the jdbc input runs the statement exactly once and Logstash then shuts down; systemd sees the service exit and restarts it, which is why the query runs every few seconds. It also seems that you are importing all records in the table every time. Is this intended? How large is the table?
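For what it's worth, the fix should just be uncommenting the schedule line you already have. The jdbc input's schedule option uses cron syntax, so "5 * * * *" means minute 5 of every hour, and the pipeline will stay up and idle between runs:

    input {
        jdbc {
            ...
            # Run the query at minute 5 of every hour (cron syntax)
            schedule => "5 * * * *"
            ...
        }
    }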

Silly me! I was messing around with settings and forgot to change that one back. I'll give it a go and see if it alters things.

My database takes a batch update on the hour, and only around 800 records are returned from the query, so I'm not too worried about a full import with so few rows. Essentially, I wait for these to be added to Postgres and then pull them into Elasticsearch 5 minutes later.
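Should the table ever grow beyond a few hundred rows, the jdbc input can also do incremental imports by remembering the last value of a tracking column between runs. A sketch, assuming planet.id is monotonically increasing (use_column_value, tracking_column, and the :sql_last_value placeholder are standard jdbc input options; the query here is abbreviated):

    jdbc {
        ...
        schedule => "5 * * * *"
        # Persist the highest id seen so far and substitute it as :sql_last_value
        use_column_value => true
        tracking_column => "id"
        statement => "select ... from planet where planet.id > :sql_last_value"
    }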
