I wasn't sure whether this is really a Logstash question or an Elasticsearch question:
Basically, I have the Logstash JDBC input plugin sending data to an AWS Elasticsearch cluster. Right now I have NOT set document_id, so existing documents aren't getting overwritten.
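For reference, my pipeline looks roughly like this (a simplified sketch with placeholder connection details and query, not my real values):

```
input {
  jdbc {
    # placeholder driver/connection details
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/sensors"
    jdbc_user => "user"
    jdbc_password => "password"
    schedule => "* * * * *"               # re-run the SELECT every minute
    statement => "SELECT * FROM readings" # simplified stand-in for my real query
  }
}
output {
  elasticsearch {
    hosts => ["https://my-aws-es-endpoint:443"]
    index => "sensor-data"
    # no document_id set, so each run indexes documents with auto-generated IDs
  }
}
```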
So every time the SELECT statement runs, is it just inserting a whole new set of documents? How do we go about managing the data? Our project has a lot of sensor data that's going to keep getting written in.
Since Elasticsearch/Logstash don't know about deletes, I'm wondering how people handle this kind of huge influx of data. Should I be setting document_id and letting documents get overwritten? That doesn't seem to make as much sense.
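If I did set document_id, I assume the output block would look something like this (assuming my SELECT returns a unique id column, which may not hold for raw sensor readings):

```
output {
  elasticsearch {
    hosts => ["https://my-aws-es-endpoint:443"]
    index => "sensor-data"
    # re-index the same row into the same document instead of creating a new one;
    # "%{id}" assumes a unique "id" column in the query results
    document_id => "%{id}"
  }
}
```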