Hi,
I'm running Logstash version 5.6.2, and my config is below:
input {
  jdbc {
    jdbc_driver_library => "C:\Program Files\Java\sqljdbc_6.2\enu\mssql-jdbc-6.2.1.jre8.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://xxxxxxxx;databaseName=xxxxxxxx"
    jdbc_validate_connection => true
    jdbc_user => "xxxxxx"
    jdbc_password => "xxxxxx"
    statement => "SELECT * FROM [Database].[dbo].[Table] with (NOLOCK) where [RecordedTime] > '2017-10-13'"
    type => "CustomType"
    jdbc_fetch_size => 15
  }
}
output {
  elasticsearch {
    hosts => ["10.34.29.37:9200","10.34.29.12:9200","10.34.29.50:9200"]
    index => "test-%{+YYYY.MM.dd}"
    document_type => "CustomType"
    document_id => "%{id}"
    http_compression => true
    user => "xxxxxxx"
    password => "xxxxxxx"
  }
  stdout { codec => rubydebug }
}
I think Elasticsearch is rejecting the bulk request because of its size, but Logstash keeps retrying the request and so gets stuck in an endless loop. I came to this conclusion because I upgraded to 5.6.3 and got a different error; more info is here.
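If the rejection really is size-related, one thing I'm considering trying (just a sketch of settings to experiment with, not a confirmed fix) is shrinking Logstash's batch size so each bulk request carries fewer events, and/or raising the request body limit on the Elasticsearch side:

# logstash.yml -- fewer events per bulk request (default is 125)
pipeline.batch.size: 50

# elasticsearch.yml -- raise the maximum HTTP request body size (default is 100mb)
http.max_content_length: 200mb

The batch size can also be set per run with the -b flag (e.g. logstash -b 50) instead of editing logstash.yml. Has anyone seen this retry loop before, and is reducing the batch size the right approach?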