Hi,
I have seen in other topics, and have also verified first-hand, that Logstash's jdbc input does not handle json objects coming from PostgreSQL and raises an error related to PGobject, like this one:
Exception when executing JDBC query {:exception=>#<Sequel::DatabaseError: Java::OrgLogstash::MissingConverterException: Missing Converter handling for full class name=org.postgresql.util.PGobject, simple name=PGobject>}
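For context, the error appears when the statement selects the json column directly; the failing input looks roughly like this (a minimal sketch, using the same table and column as my query below, with connection settings omitted):

input {
  jdbc {
    # connection settings omitted
    # selecting the json/jsonb column as-is is what triggers the PGobject error
    statement => "SELECT document FROM snapshots"
  }
}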
So I tried to work around the problem by casting the json column to text, which gives me this pipeline configuration:
file.conf
input {
  jdbc {
    # Postgres jdbc connection string to my database
    # The user we wish to execute our statement as
    # The path to my downloaded jdbc driver
    # The name of the driver class for Postgresql
    # password
    # my query
    statement => "SELECT document::text FROM snapshots"
    schedule => "****"
  }
}
filter {
  json {
    source => "document"
    remove_field => ["document"]
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    index => "snapshots"
    document_id => "%{uid}"
    hosts => ["localhost"]
  }
}
The pipeline runs, but in Kibana I can only see the last json document returned by the query, as if all the others had been overwritten.
I would like to see in Kibana all the documents that are stored in Postgres in the document column.
The structure of the json is very complex and deeply nested; there are about 450 fields.
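To give an idea of the shape, here is a heavily simplified, made-up sample of one document value (only uid is a real field name, since I use it as the document id; all the other names are placeholders):

{
  "uid": "a1b2c3",
  "metadata": {
    "created_at": "2021-01-01T00:00:00Z",
    "source": "..."
  },
  "payload": {
    "section_1": { "field_1": "value", "field_2": 42 },
    "section_2": [ { "field_3": "value" } ]
  }
}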
How can I solve this problem?