Auto-update logs without re-running Logstash

I have logs stored in an MS SQL table, and I have migrated all of them by running Logstash once. Now, if I update a particular record in the SQL table, I want that change reflected in the Kibana visualizations without restarting Kibana or Logstash, just by turning auto-refresh on. Is this possible, and if so, how can I do it?

It's not possible using Logstash at this stage, sorry; it only looks for new data that has been added.

If you have a "last modified" column in the database you can use that to identify rows that have changed since the last run. Combined with the jdbc input's schedule feature, you can get near-real-time updates of the ES index.
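As a minimal sketch of that approach, assuming a last_modified datetime column and a log_oid primary key (names taken from this thread, not verified against the actual schema):

input {
  jdbc {
    jdbc_driver_library => "C:\folder\Java\jre1.8.0_144\bin\sqljdbc42.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://myserver"
    jdbc_user => "username"
    jdbc_password => "password"
    # Run the query every minute.
    schedule => "* * * * *"
    # Track the highest last_modified value seen so far; Logstash persists it
    # between runs and substitutes it into the statement as :sql_last_value.
    use_column_value => true
    tracking_column => "last_modified"
    tracking_column_type => "timestamp"
    statement => "SELECT * FROM Logs WHERE last_modified > :sql_last_value"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "index_name"
    # Reusing the row's primary key makes each run idempotent: changed rows
    # overwrite their existing documents instead of creating duplicates.
    document_id => "%{log_oid}"
  }
}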


I added a schedule to my Logstash config file but I'm still having the same problem: the index is not updating automatically. I have to delete it and recreate it to get the desired number of docs.
Here is my config file:

input {
  jdbc {
    jdbc_driver_library => "C:\folder\Java\jre1.8.0_144\bin\sqljdbc42.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://myserver"
    jdbc_user => "username"
    jdbc_password => "password"
    schedule => "* * * * *"
    statement => "SELECT * FROM Logs where Status != 2"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "index_name"
    document_type => "log"
    document_id => "%{log_oid}"
  }
}

I'm still having the same problem!

And the answer I gave you still applies.

I have a column named "Status" in the DB table and I am only fetching records which are not resolved. I have also added the jdbc input's schedule, so it runs the query every minute.

It sounds like Logstash has no way of knowing what has changed in the last minute, so you'll either have to pull all rows every minute or add some monotonically increasing column value (like a "last modified" timestamp) that Logstash can use to pull only unprocessed rows; one way to add such a column is sketched below.
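As a hedged sketch of how you might add that column in MS SQL: create a last_modified column and keep it current with an update trigger. The column and trigger names here are hypothetical, and log_oid is assumed to be the table's primary key since it's used as the document_id above.

-- Add a last-modified column, defaulting to the current UTC time.
-- Existing rows are populated with the default.
ALTER TABLE Logs ADD last_modified DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME();
GO

-- Refresh the column whenever a row is updated.
CREATE TRIGGER trg_Logs_last_modified ON Logs
AFTER UPDATE AS
BEGIN
  SET NOCOUNT ON;
  UPDATE l
  SET last_modified = SYSUTCDATETIME()
  FROM Logs AS l
  INNER JOIN inserted AS i ON l.log_oid = i.log_oid;
END;
GO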

I am actually fetching all rows; see the query:
SELECT * FROM Logs where Status != 2

If I change the status of some rows, they are still in the index.

The jdbc input fetches existing rows. It's not able to discover which rows have been removed from the result set and should therefore be deleted from ES. That kind of synchronization requires some extra work.
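One common workaround, sketched here under this thread's assumptions (a status column where 2 means resolved, and log_oid as the primary key): select all rows instead of filtering in SQL, and let Logstash turn resolved rows into delete actions so they are removed from the index. Field names may need adjusting; the jdbc input lowercases column names by default, so "Status" arrives as "status".

input {
  jdbc {
    # ... connection settings as in the config above ...
    schedule => "* * * * *"
    # Fetch every row, including resolved ones, so Logstash sees the change.
    statement => "SELECT * FROM Logs"
  }
}

filter {
  # Resolved rows become delete requests; everything else is (re)indexed.
  if [status] == 2 {
    mutate { add_field => { "[@metadata][action]" => "delete" } }
  } else {
    mutate { add_field => { "[@metadata][action]" => "index" } }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "index_name"
    document_id => "%{log_oid}"
    # The action is taken per event from the metadata set in the filter.
    action => "%{[@metadata][action]}"
  }
}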


For example, if I update Status = 2 for some rows and have scheduled Logstash to run the query below:
SELECT * FROM Logs where Status != 2
Logstash executes the query, but the change is not reflected in Kibana.
