Logstash 6.4.0 stuck on: Successfully started Logstash API endpoint {:port=>9600}

Hi,

I use Jenkins to update two different indices on one server, using two different Logstash pipeline configurations. A custom Python script SSHes into the server, starts Logstash with the appropriate config file, and then exits. It does this twice each day, once for each index.
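For context, the launcher boils down to something like this (this is only a sketch, not the real script; the ssh target and file names are placeholders):

# Hypothetical sketch of the Jenkins-triggered launcher, not the real script.
# The ssh target, config file name, and data path are placeholders.
import subprocess

def run_logstash(config_file, data_path):
    remote_cmd = (
        "/home/ubuntu/logstash-6.4.0/bin/logstash"
        " -f /home/ubuntu/logstash-6.4.0/config/" + config_file +
        " --path.data " + data_path
    )
    # Kick off Logstash on the remote host and return without waiting for it.
    subprocess.Popen(["ssh", "ubuntu@my-logstash-server", remote_cmd])

run_logstash("logstash.conf", "differentpath-data")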

This was working great for months until, a few days ago, it suddenly stopped. Jenkins just sits on the line "Successfully started Logstash API endpoint {:port=>9600}". I have not changed any settings; this came out of nowhere.

Here's the stuck command:

/home/ubuntu/logstash-6.4.0/bin/./logstash -f /home/ubuntu/logstash-6.4.0/config/logstash.conf --path.data differentpath-data

Here's my config file:

input {
    jdbc {
        jdbc_connection_string => "jdbc:sqlserver://asdfasdf.asdfasdf.rds.amazonaws.com:1433;databaseName=asdfasdf"
        jdbc_user => "asdfasdf-elastic-asdfasdf"
        jdbc_password => "asdfasdf"
        jdbc_driver_library => "/home/ubuntu/logstash-6.4.0/config/sqljdbc42.jar"
        jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
        statement => "SELECT blah QUERY"
    }
}

output {
    elasticsearch {
        hosts => ["http://localhost:9200/"]
        index => "asdfasdf-2"
        action => "index"
    }
}

Any idea why this would just stop working? I was getting a path.data error for a bit, but deleting the differentpath-data directory and restarting took care of that...

Thanks

Usually when that happens it's because the pipeline is waiting to receive data from the input. What happens if you enable debug logging?
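In the meantime, since the API endpoint on 9600 is up, you can also ask it whether any events have made it through the pipeline while it sits there. Something like this should work (it's the standard node stats API; I'm assuming Logstash is local on the default port):

# Poll the Logstash node stats API (default port 9600) to see whether any
# events have passed through the pipeline yet.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:9600/_node/stats/pipelines") as resp:
    stats = json.load(resp)

events = stats["pipelines"]["main"]["events"]
print("in:", events["in"], "filtered:", events["filtered"], "out:", events["out"])
# If "in" never moves off 0, the input hasn't produced a single event,
# i.e. Logstash is still waiting on it.

A plain curl against the same URL works just as well if you don't want to bother with a script.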

ubuntu@ip-10-33-2-54:~$ telnet my0-server.rds.amazonaws.com 1433
Trying 10.33.2.216...
Connected to ec2-434343-3434343.compute-1.amazonaws.com.
Escape character is '^]'.

It can connect.

--debug just shows this over and over:

[2019-04-20T21:45:44,456][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x5d062070 sleep>"}
[2019-04-20T21:45:46,088][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-04-20T21:45:46,088][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-04-20T21:45:49,458][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x5d062070 sleep>"}
[2019-04-20T21:45:51,094][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-04-20T21:45:51,094][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-04-20T21:45:54,458][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x5d062070 sleep>"}
[2019-04-20T21:45:56,100][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-04-20T21:45:56,100][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-04-20T21:45:59,459][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x5d062070 sleep>"}
[2019-04-20T21:46:01,105][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-04-20T21:46:01,106][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-04-20T21:46:04,459][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x5d062070 sleep>"}
[2019-04-20T21:46:06,111][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-04-20T21:46:06,112][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
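Those lines are just the periodic pipeline flush and the JVM pollers, so the jdbc input is most likely still waiting for the query to come back. One way to take Logstash out of the picture is to run the same statement against the database directly, for example with a small script like this (pyodbc, the ODBC driver name, and the connection details are placeholders for whatever client you have handy):

# Hypothetical check, not part of the original setup: run the same SELECT
# outside Logstash to see whether the database itself answers.
# The ODBC driver name and connection details below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=asdfasdf.asdfasdf.rds.amazonaws.com,1433;"
    "DATABASE=asdfasdf;UID=asdfasdf-elastic-asdfasdf;PWD=asdfasdf",
    timeout=10,
)
cursor = conn.cursor()
cursor.execute("SELECT blah QUERY")   # the same statement as in the Logstash config
print(cursor.fetchmany(5))            # if this hangs too, the problem is on the database side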

I have the same problem.

From /usr/share/logstash I execute:

./bin/logstash -f /home/elastic/Downloads/test.conf

This is the conf file:

input {
  file {
    codec => "json"
    path => "/home/elastic/Downloads/test.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

output {
  elasticsearch {
    index => "os_test"
    document_id => "id"
    manage_template => false
    action => "create"
    failure_type_logging_whitelist => ["version_conflict_engine_exception"]
  }
}

And this is the JSON file:

[
	{
		"id": "8fc56623-6e84-46c0-9d26-c52cef38ecb9",
		"instant": "2019-03-30T03:14:14.007Z",
		"session_id": "",
		"user_id": 0,
		"espace_id": 0,
		"espace_name": "",
		"message": "Scheduler Service: Error fetching emails ",
		"stack": "[1] Execution Timeout Expired.\r\n",
		"module_name": "Scheduler",
		"server": "XPTO01",
		"cycle": 4,
		"environmentinformation": "eSpaceVer: 0 (Id=0, PubId=0, CompiledWith=10.0.904.0)\r\n",
		"name": "Errors"
	}
]

What's your output?
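You can also check whether anything has landed in the index at all, for example like this (assuming Elasticsearch is local on 9200, as the default elasticsearch output implies):

# Ask Elasticsearch how many documents the os_test index currently holds.
# A 404 here would mean the index was never created at all.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:9200/os_test/_count") as resp:
    print(json.load(resp))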

I have no output. It gets stuck on the "Successfully started Logstash API endpoint {:port=>9600}" message.

Add --debug and paste the output.

Anyone have any suggestions?

The database server was out of memory. I could log in, but the query wouldn't run. Rebooting it and freeing up memory was the solution.

You were right, thanks!
