Has anyone come across this problem? When exporting data from the database into Elasticsearch, the following error occurs:
[ERROR] 2022-10-31 16:02:14.628 [[main]>worker1] elasticsearch - Encountered a retryable error (will retry with exponential backoff) {:code=>400, :url=>"http://localhost:9200/_bulk", :content_length=>137}
I can't figure out where the problem could be. Here is my configuration file:
input {
  jdbc {
    # path to the DB driver (can be found with a two-minute search)
    jdbc_driver_library => "/etc/logstash/conf.d/postgresql-42.5.0.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql://192.168.1.40/win_base"
    # DB login and password
    jdbc_user => "postgres"
    jdbc_password => "1234567890"
    # An indexing schedule can be configured; I used a regular cron expression.
    schedule => "*/2 * * * *"
    # Our SQL query to the DB; as you can see, it can be absolutely anything.
    statement => "SELECT id as id, price as price, name as name from data"
  }
}
filter {
  if [action_type] == "create" or [action_type] == "update" {
    mutate { add_field => { "[@metadata][action]" => "index" } }
  } else if [action_type] == "delete" {
    mutate { add_field => { "[@metadata][action]" => "delete" } }
  }
  mutate {
    remove_field => ["@version", "@timestamp", "action_type"]
  }
}
output {
  elasticsearch {
    # index name in ES
    index => "data"
    document_type => "_doc"
    document_id => "%{id}"
    # action type section
    action => "%{[@metadata][action]}"
    #doc_as_upsert => true
    # ES host, plus the login and password created earlier in Kibana
    hosts => "localhost:9200"
    #user => "db"
    #password => "MascasdaehaefwP"
  }
}
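
For the filter to have anything to match on, I assume the SQL query would have to return an action_type column, which my current SELECT does not. A hypothetical variant (the column name is illustrative; my table does not have it yet) would be:

statement => "SELECT id as id, price as price, name as name, action_type as action_type from data"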
At the same time, if I comment out the filter and the action setting in the output, the data is exported, but when rows are deleted in the database the corresponding documents in Elasticsearch are not deleted.
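
As I understand it, the jdbc input only polls with SELECT and cannot see physical DELETEs, so the usual workaround is a soft delete (tombstone): instead of deleting the row, mark it so the next poll picks it up and the filter routes it to the "delete" action. A minimal sketch of that idea, assuming the hypothetical action_type column from above (names are illustrative):

-- instead of DELETE, flag the row; the next jdbc poll will pick it up
-- and the filter will route it to the "delete" action in Elasticsearch
UPDATE data SET action_type = 'delete' WHERE id = 42;

Thanks in advance.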