I designed a pipeline with elasticsearch as the input plugin and s3 as the output plugin. The pipeline starts successfully and pushes data to the S3 bucket, but after a termination or crash it does not resume from where it left off. What is happening? Suppose the pipeline starts with 8 GB of data to transfer and has already pushed about 6 GB to the S3 bucket when it suddenly terminates for some reason. When I restart the pipeline, it starts again from the beginning; the 6 GB is already there, so the same data arrives again and creates duplicates in the S3 bucket. Is there any way to resume the pipeline? Since it already pushed about 6 GB, after a crash it should push only the remaining 2 GB.
There is an open issue requesting that the elasticsearch input provide a way to maintain state (as the jdbc input does), but it has not been implemented.
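Until that lands, one common manual workaround is to restrict the input query so a restart only re-reads documents after the last point you know was shipped, using a timestamp (or other sortable field) you track outside Logstash. A minimal sketch, with hypothetical hosts, index, bucket, and cutoff value:

```
input {
  elasticsearch {
    hosts => ["localhost:9200"]        # hypothetical host
    index => "my-index"                # hypothetical index
    # Only fetch documents newer than the last known shipped timestamp.
    # You must update this cutoff yourself (e.g. from S3 listing or a note)
    # before each restart -- Logstash will not track it for you.
    query => '{ "query": { "range": { "@timestamp": { "gt": "2023-01-01T00:00:00Z" } } } }'
  }
}

output {
  s3 {
    bucket => "my-bucket"              # hypothetical bucket
    region => "us-east-1"
  }
}
```

This does not give exactly-once delivery: documents written during the crash window can still be duplicated, so it only narrows the re-read range rather than eliminating duplicates entirely.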
@Badger Thanks for your response.