Insert complex nested JSON documents from Postgres to Elasticsearch via Logstash

I have seen in other topics, and verified myself, that Logstash does not recognize JSON objects coming from the jdbc input; instead it raises an error about PGobject, like this:

Exception when executing JDBC query {:exception=>#<Sequel::DatabaseError: Java::OrgLogstash::MissingConverterException: Missing Converter handling for full class name=org.postgresql.util.PGobject, simple name=PGobject>}

So I tried to work around the problem by casting the JSON object to text, which gives me this pipeline configuration:


input {
  jdbc {
    # Postgres jdbc connection string to my database
    # The user we wish to execute our statement as
    # The path to my downloaded jdbc driver
    # The name of the driver class for Postgresql
    # password
    # my query
    statement => "SELECT document::text FROM snapshots"
    schedule => "* * * * *"
  }
}

filter {
  json {
    source => "document"
    remove_field => ["document"]
  }
}

output {
  stdout { codec => json_lines }
  elasticsearch {
    index => "snapshots"
    document_id => "%{uid}"
    hosts => ["localhost"]
  }
}
The pipeline runs, but in Kibana I can only see the last JSON document fetched by the query, as if the others had been overwritten.

I would like to see in Kibana all the documents that are stored in Postgres in the document column.
The JSON structure is very complex and deeply nested; in fact there are about 450 fields.
How can I solve this problem?

Is uid a primary key?

Yes, it is! I followed this guide to get started:

where they set a document id, so I did the same in my case.
Perhaps it is not necessary, but it has not caused me any problems so far.

Remove the document_id and re-run it. It is possible that uid is the same for every row, so each document is overwriting the previous one.
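To confirm that theory, a quick duplicate check can be run directly in Postgres (table and column names taken from the pipeline above; adjust to your schema):

```
-- Any rows returned here mean several snapshots share the same uid,
-- so Elasticsearch keeps overwriting them under one _id.
SELECT uid, count(*) AS occurrences
FROM snapshots
GROUP BY uid
HAVING count(*) > 1;
```

If this returns rows, either pick a genuinely unique column for document_id or remove the option and let Elasticsearch generate its own ids.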

Thank you @Divit_Sharma!! That was the problem: uid came from the tutorial data, not from my own data.
I just changed it to my own id column and now it works.
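For anyone landing here later, the corrected output section might look like this sketch; my_id is a placeholder for whatever column is actually unique in your table (the name is an assumption, not from the thread):

```
output {
  elasticsearch {
    hosts => ["localhost"]
    index => "snapshots"
    # "my_id" is hypothetical: substitute a column that is unique per row,
    # or drop document_id entirely and let Elasticsearch assign _id values.
    document_id => "%{my_id}"
  }
}
```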