Logstash JDBC w/Elasticsearch: Managing space?

So I was unsure whether this is really a Logstash or an Elasticsearch question:

Basically I have the Logstash JDBC input plugin sending data to an AWS Elasticsearch cluster. Right now I have NOT set document_id, so nothing is getting overwritten.

So every time the SELECT statement runs... is it just inserting a whole new set of documents? How do we go about managing the data? We have a lot of sensor data in our project that's going to just keep getting written in.
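
For reference, a minimal sketch of the kind of pipeline I mean; the driver, connection string, and table/column names (`sensor_readings`, etc.) are simplified placeholders:

```
input {
  jdbc {
    jdbc_driver_library => "/path/to/jdbc-driver.jar"      # placeholder path
    jdbc_driver_class => "com.mysql.jdbc.Driver"           # assuming MySQL
    jdbc_connection_string => "jdbc:mysql://db-host:3306/sensors"
    jdbc_user => "logstash"
    jdbc_password => "secret"
    schedule => "* * * * *"                                # re-run the SELECT every minute
    statement => "SELECT * FROM sensor_readings"
  }
}

output {
  elasticsearch {
    hosts => ["https://my-aws-es-endpoint:443"]
    index => "sensors"
    # no document_id set, so every scheduled run indexes brand-new documents
  }
}
```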

Since Elasticsearch/Logstash don't know about deletes, I'm wondering how people handle the huge influx of data. Should I be setting document_id and letting documents be overwritten? That just seems like it doesn't make as much sense.

Setting a doc ID would make sense.
As would using time-based indices.
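
As a sketch, assuming each row has an `id` column, the elasticsearch output would look something like this:

```
output {
  elasticsearch {
    hosts => ["https://my-aws-es-endpoint:443"]
    # the same row always maps to the same document, so re-runs overwrite
    document_id => "%{id}"
    # daily indices let you manage retention by index rather than by document
    index => "sensors-%{+YYYY.MM.dd}"
  }
}
```

With daily indices, old data can be dropped by deleting whole indices (e.g. with Curator on a schedule), which is far cheaper in Elasticsearch than deleting individual documents.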

If I set the doc ID, I'd just be refreshing the one document, correct? My one worry is that I wouldn't be able to see things over time, especially if sensors are removed, for example.
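
One middle ground I can imagine (just a sketch; `sensor_id` and `reading_time` are hypothetical column names from my table): make the doc ID unique per reading rather than per sensor, so re-running the SELECT overwrites duplicates without collapsing history:

```
output {
  elasticsearch {
    hosts => ["https://my-aws-es-endpoint:443"]
    index => "sensors-%{+YYYY.MM.dd}"
    # unique per reading, not per sensor: re-selected rows overwrite
    # themselves, but each reading remains its own document
    document_id => "%{sensor_id}-%{reading_time}"
  }
}
```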
