Data used to load into Elasticsearch without any issue, but over the last 4 weeks we can see a reduction in data when checking the Kibana dashboard.
To find out where the data loss is happening, we checked the data volume in the database, and there is not much difference there.
We also checked the traffic between Logstash and the Oracle DB, and there is no issue with that either.
We have not made any changes to the pipelines, configs, or database, but there is still a reduction in data volume.
Could you please let us know how to find out where the data loss is occurring in the Elastic Stack?
Either it is from the database to Logstash,
or from Logstash to Elasticsearch.
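One generic way to narrow this down (a sketch, not something from this thread) is to reconcile per-day record counts at each stage: a `SELECT COUNT(*) ... GROUP BY day` on the Oracle side versus per-day document counts from Elasticsearch, then find the first day the two diverge. The count dictionaries and dates below are hypothetical placeholders.

```python
# Sketch: compare per-day record counts from the source DB against
# Elasticsearch to find the days where the volumes diverge.
# In practice db_counts would come from SQL COUNT(*) GROUP BY day and
# es_counts from the Elasticsearch _count API with a date-range query.

def find_divergence(db_counts, es_counts, tolerance=0.0):
    """Return (day, db_count, es_count) for each day where Elasticsearch
    holds fewer documents than the database, beyond the given tolerance."""
    gaps = []
    for day in sorted(db_counts):
        db = db_counts[day]
        es = es_counts.get(day, 0)
        if db - es > tolerance * db:
            gaps.append((day, db, es))
    return gaps

# Hypothetical numbers for illustration only:
db_counts = {"2020-06-01": 1000, "2020-06-02": 1000, "2020-06-03": 1000}
es_counts = {"2020-06-01": 1000, "2020-06-02": 940, "2020-06-03": 610}
for day, db, es in find_divergence(db_counts, es_counts):
    print(f"{day}: DB={db} ES={es} missing={db - es}")
```

If the database counts are stable while the Elasticsearch counts drop from a specific day onward, the loss is downstream of the database, i.e. in Logstash or in indexing.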
What do you mean by data loss?
Was the index removed? Only some documents?
Could you share your full Elasticsearch logs?
Also, are you running it in the cloud, on a private network, or as a service on cloud.elastic.co?
Is it accessible from the internet? Which version?
Did you secure it so no one else can access the service? (I'd recommend using cloud.elastic.co to have a properly secured service.)
Yes. This service was set up by another team and we are supporting it.
This is too old. At least upgrade to 6.8, or better, 7.7.1.
Yes, an upgrade to 6.8 is being planned, but it may take some time. In the meantime we need a solution for the data loss. Could you please help us with possible ways to validate or verify that there is no data loss?
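One way to verify counts on the Elasticsearch side (a sketch under assumptions, not a definitive procedure) is the `_count` API, which accepts a query body, so you can count documents per time window and compare the result against a `SELECT COUNT(*)` over the same window in Oracle. The `@timestamp` field name below is an assumption; substitute whatever date field your documents actually carry.

```python
import json

# Sketch: build the request body for GET <index>/_count in Elasticsearch,
# counting documents whose timestamp falls in a given window. Compare the
# returned "count" against a SELECT COUNT(*) over the same window in Oracle.
# "@timestamp" is an assumed field name; adjust to your mapping.

def count_body(window_start, window_end):
    return {
        "query": {
            "range": {
                "@timestamp": {
                    "gte": window_start,
                    "lt": window_end,
                }
            }
        }
    }

body = count_body("2020-06-01T00:00:00Z", "2020-06-02T00:00:00Z")
print(json.dumps(body, indent=2))
```

Running this daily for each stage, alongside `GET _cat/indices?v` to watch the `docs.count` column, should show whether documents go missing between Logstash and Elasticsearch or never leave the database.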