Hi there,
We are using Logstash as a data pipeline to transfer data from a SQL Server database to Elasticsearch. The pipelines are configured with the in-memory queue, and sometimes after a transfer completes, the data on the Elasticsearch side is incomplete, meaning not all of the data was transferred. When I checked the dashboards for the period when the data was being read, the Logstash server's RAM shows some drops, and at those moments the pipeline's events_in rate also decreased. Does a GC operation, or something else, clear out that memory and cause data loss while reading data with the memory queue?
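For context, the setup is roughly along the lines of the sketch below; the connection details, query, and index name are placeholders, not our real values:

```conf
# logstash.yml -- the in-memory queue is the default
# queue.type: memory

# pipeline.conf -- simplified jdbc-to-elasticsearch pipeline (placeholder values)
input {
  jdbc {
    jdbc_driver_library     => "/path/to/mssql-jdbc.jar"
    jdbc_driver_class       => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string  => "jdbc:sqlserver://sqlhost:1433;databaseName=mydb"
    jdbc_user               => "logstash"
    jdbc_password           => "secret"
    statement               => "SELECT * FROM my_table"   # placeholder query
  }
}
output {
  elasticsearch {
    hosts => ["http://eshost:9200"]
    index => "my_index"                                    # placeholder index
  }
}
```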
No. A GC cycle just cleans up garbage; it does not delete memory that is still in use by Logstash. A brief drop in the event processing rate at the same time as a large drop in heap usage does suggest a full GC, but that's normal.
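If you want to confirm whether those dips line up with full GCs, one option (a sketch, assuming Logstash is local and its monitoring API is on the default port 9600) is to poll the node stats JVM section around the transfer window and watch the old-generation collection counters:

```sh
# Query the Logstash monitoring API for JVM heap and GC stats
curl -s 'http://localhost:9600/_node/stats/jvm?pretty'

# Fields of interest in the response:
#   jvm.mem.heap_used_percent                        -- current heap usage
#   jvm.gc.collectors.old.collection_count           -- old-gen (full) GC count
#   jvm.gc.collectors.old.collection_time_in_millis  -- time spent in old-gen GC
```

If the old-generation collection count increments at the same moments the heap and events_in rate drop, that is consistent with normal full GC pauses rather than lost data.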