Elasticsearch keeps creating old indices, cause [auto(bulk api)]

I am facing an issue where old indices are being created automatically. I am using fluentd as a log shipper to Elasticsearch. This has been happening for two days, and I have dug into it a lot but with no result so far. I delete the old indices but they get created again; I even deleted fluentd, but after deploying it again the old indices reappear. Here is the relevant info -

Elasticsearch version - 7.8.1 and 7.15.0
Fluentd version - 1.10.4

I am deploying fluentd as a DaemonSet on AWS EKS.
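For reference, the reappearing indices and their creation dates can be checked with the cat indices API (the host is a placeholder for your cluster endpoint):

```shell
# List all logstash-* indices with their creation dates, newest first.
curl -s 'http://localhost:9200/_cat/indices/logstash-*?v&h=index,creation.date.string&s=creation.date:desc'
```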

Welcome to our community! :smiley:

Please don't post pictures of text or code. They are difficult to read, impossible to search and replicate (if it's code), and some people may not be even able to see them :slight_smile:

Ok, I will take care of that next time.

@warkolm, can you please help me with this issue?

I can't read that image, so no, sorry.

I am sharing the logs here -

{"type": "server", "timestamp": "2021-10-13T01:58:11,332Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "k8s-logs", "node.name": "es-cluster-1", "message": "[logstash-2021.09.13] creating index, cause [auto(bulk api)], templates [qa-es-lifecycle-template], shards [1]/[1], mappings [_doc]", "cluster.uuid": "Sy35VNmeQ_mNZZIDq7NKtg", "node.id": "BdPqGKWiTh6MNkpnXtE9FQ" }

{"type": "server", "timestamp": "2021-10-13T01:59:24,658Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "k8s-logs", "node.name": "es-cluster-1", "message": "[logstash-2021.09.14] creating index, cause [auto(bulk api)], templates [qa-es-lifecycle-template], shards [1]/[1], mappings [_doc]", "cluster.uuid": "Sy35VNmeQ_mNZZIDq7NKtg", "node.id": "BdPqGKWiTh6MNkpnXtE9FQ" }

{"type": "server", "timestamp": "2021-10-13T02:08:41,299Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "k8s-logs", "node.name": "es-cluster-1", "message": "[logstash-2021.09.15] creating index, cause [auto(bulk api)], templates [qa-es-lifecycle-template], shards [1]/[1], mappings [_doc]", "cluster.uuid": "Sy35VNmeQ_mNZZIDq7NKtg", "node.id": "BdPqGKWiTh6MNkpnXtE9FQ" }

{"type": "server", "timestamp": "2021-10-13T02:14:36,146Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "k8s-logs", "node.name": "es-cluster-1", "message": "[logstash-2021.09.16] creating index, cause [auto(bulk api)], templates [qa-es-lifecycle-template], shards [1]/[1], mappings [_doc]", "cluster.uuid": "Sy35VNmeQ_mNZZIDq7NKtg", "node.id": "BdPqGKWiTh6MNkpnXtE9FQ" }

{"type": "server", "timestamp": "2021-10-13T02:20:37,327Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "k8s-logs", "node.name": "es-cluster-1", "message": "[logstash-2021.09.17] creating index, cause [auto(bulk api)], templates [qa-es-lifecycle-template], shards [1]/[1], mappings [_doc]", "cluster.uuid": "Sy35VNmeQ_mNZZIDq7NKtg", "node.id": "BdPqGKWiTh6MNkpnXtE9FQ" }

@warkolm, can you help me now? I have shared the logs in text form above.

I'm not super familiar with fluentd, but I believe it uses the event timestamp to figure out which index to send each document to. I guess it must be receiving data with those older timestamps, which is why these indices are being created.
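For context, a typical fluent-plugin-elasticsearch output section looks something like this (host and prefix here are placeholders, not your actual config); with `logstash_format true`, the index name is derived from each event's timestamp, so buffered or replayed events dated in September land in `logstash-2021.09.xx` indices:

```
<match **>
  @type elasticsearch
  host elasticsearch.logging.svc   # placeholder
  port 9200
  # With logstash_format, the target index is logstash_prefix plus the
  # event's date, e.g. logstash-2021.09.13 for an event dated 2021-09-13.
  logstash_format true
  logstash_prefix logstash
</match>
```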

How can I solve this so that logs with older timestamps don't end up here?

That would be a fluentd question I think.
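Not a fluentd fix, but one possible Elasticsearch-side mitigation (a sketch, assuming the default auto-create behaviour) is to disable automatic creation of `logstash-*` indices, so stray old-dated events are rejected instead of silently recreating indices. Note that fluentd will then log bulk errors for those documents rather than indexing them:

```shell
# Block auto-creation of logstash-* indices; all other indices remain
# auto-creatable. action.auto_create_index is a dynamic cluster setting.
curl -s -X PUT 'http://localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"action.auto_create_index": "-logstash-*,+*"}}'
```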