Elasticsearch Indexing Not Working Periodically

Hello,

I am very new to working with Elasticsearch, but I've come up against an error that the vendor of our software (which bundles Elasticsearch) has not been able to solve or find a root cause for. Unfortunately, I am not seeing any failures in the Elasticsearch logs; all I see is that the indexes are created and then deleted (as shown below). Can anyone point me in a good direction for troubleshooting this further? Thanks in advance.

[2020-08-30T00:00:06,635][INFO ][org.elasticsearch.cluster.metadata.MetaDataCreateIndexService] [A0rqguY] [container_tenant1_1598760006627] creating index, cause [api], templates [], shards [5]/[1], mappings []
[2020-08-30T00:00:15,033][INFO ][org.elasticsearch.cluster.metadata.MetaDataCreateIndexService] [A0rqguY] [project_tenant1_1598760015029] creating index, cause [api], templates [], shards [5]/[1], mappings []
[2020-08-30T00:01:08,997][INFO ][org.elasticsearch.cluster.metadata.MetaDataDeleteIndexService] [A0rqguY] [container_tenant1_1598760006627/p5-69Lx6T4mGX3jXK2pTdg] deleting index
[2020-08-30T00:01:09,048][INFO ][org.elasticsearch.cluster.metadata.MetaDataCreateIndexService] [A0rqguY] [step_tenant1_1598760069044] creating index, cause [api], templates [], shards [5]/[1], mappings []
[2020-08-30T00:01:17,374][INFO ][org.elasticsearch.cluster.metadata.MetaDataDeleteIndexService] [A0rqguY] [project_tenant1_1598760015029/oAcE_XuDSuO9Pz4cuKynqw] deleting index
[2020-08-30T00:02:11,378][INFO ][org.elasticsearch.cluster.metadata.MetaDataDeleteIndexService] [A0rqguY] [step_tenant1_1598760069044/b5Q0dmFBSV-QSpuIWRhWFQ] deleting index
[2020-08-30T00:15:00,050][INFO ][org.elasticsearch.cluster.metadata.MetaDataCreateIndexService] [A0rqguY] [sample_tenant1_1598760900046] creating index, cause [api], templates [], shards [5]/[1], mappings [sample]
[2020-08-30T00:16:02,427][INFO ][org.elasticsearch.cluster.metadata.MetaDataDeleteIndexService] [A0rqguY] [sample_tenant1_1598760900046/Tc4ZWn02RWaX5WaDPP3Vag] deleting index

Welcome to our community! 😄

What version? What do you mean by "not working"?

It looks like you are creating a lot of indices and shards every day. How many indices and shards do you have in the cluster? What is the average size of a shard? What is the disk usage and specification of the cluster?

I would also recommend you read this blog post about shards and sharding guidelines.
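
If it helps while you gather those numbers, here is a rough sketch of how you could pull them from the _cat/shards API with Python. This assumes an unsecured cluster answering on localhost:9200, which may not match your setup:

# Rough sketch: summarise index/shard counts and the average shard size.
# Assumes Elasticsearch answers on localhost:9200 with no authentication.
import json
from urllib.request import urlopen

BASE = "http://localhost:9200"

# _cat/shards returns one row per shard copy; bytes=b gives raw byte counts.
with urlopen(BASE + "/_cat/shards?format=json&bytes=b") as resp:
    shards = json.load(resp)

indices = {row["index"] for row in shards}
# "store" is null for unassigned shard copies, so skip those.
sizes = [int(row["store"]) for row in shards if row.get("store")]

print(f"indices:      {len(indices)}")
print(f"shard copies: {len(shards)}")
if sizes:
    print(f"average shard size: {sum(sizes) / len(sizes) / 1024 ** 2:.1f} MiB")

That should give you a quick read on whether you are heading toward the oversharding problems the blog post describes.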

More or less, each index gets deleted around a minute after it is created.

I'd assume that your software is doing that itself. Or maybe your Elasticsearch cluster is not secured and someone or something is sending the delete index API request?
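
If you want to confirm exactly when indices come and go, so you can line the timestamps up against whatever is calling the cluster, a small poller like the one below is one option. It is only a sketch, again assuming an unsecured node on localhost:9200 and a 5-second poll interval:

# Rough sketch: poll _cat/indices and log when indices appear or vanish,
# so the timestamps can be matched against the application's own logs.
# Assumes an unsecured node on localhost:9200; adjust as needed.
import json
import time
from datetime import datetime
from urllib.request import urlopen

BASE = "http://localhost:9200"

def list_indices():
    # h=index limits the response to just the index names.
    with urlopen(BASE + "/_cat/indices?format=json&h=index") as resp:
        return {row["index"] for row in json.load(resp)}

seen = list_indices()
while True:
    time.sleep(5)
    current = list_indices()
    now = datetime.now().isoformat(timespec="seconds")
    for name in sorted(current - seen):
        print(f"{now} created: {name}")
    for name in sorted(seen - current):
        print(f"{now} deleted: {name}")
    seen = current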

It is version 6.2.4. I am new to working with Elasticsearch and we have a vendor who supports the application, so I am piecing together the information and logs and trying to figure this out as I go. In the Elasticsearch logs I see that the indexes are created and then deleted, but in the application's Tomcat logs I see that indexing failed. The bigger issue is that we don't have any more verbose logs from the vendor; they have to work through their dev process to get that logging set up, which is taking too long (our application depends on Elasticsearch and it is failing too frequently). From what I can see, Elasticsearch is behaving as it should by creating and deleting indexes at the request of the application, but I am trying to eliminate Elasticsearch as a point of failure. With no errors in the Elasticsearch logs, I am trying to determine whether that is enough to exclude Elasticsearch from the investigation. Thanks again.

What is the full output of the cluster stats API?
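
In case it's useful, something like this will fetch it (a minimal sketch, assuming the node answers on localhost:9200 without authentication):

# Rough sketch: fetch the full cluster stats output requested above.
# Assumes the node answers on localhost:9200 with no authentication.
from urllib.request import urlopen

with urlopen("http://localhost:9200/_cluster/stats?human&pretty") as resp:
    print(resp.read().decode())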

           "name": "A0rqguY",
            "os": {
                "cgroup": {
                    "cpu": {
                        "cfs_period_micros": 100000,
                        "cfs_quota_micros": -1,
                        "control_group": "/",
                        "stat": {
                            "number_of_elapsed_periods": 0,
                            "number_of_times_throttled": 0,
                            "time_throttled_nanos": 0
                        }
                    },
                    "cpuacct": {
                        "control_group": "/",
                        "usage_nanos": 294566464580238
                    },
                    "memory": {
                        "control_group": "/",
                        "limit_in_bytes": "9223372036854771712",
                        "usage_in_bytes": "22911746048"
                    }
                },
                "cpu": {
                    "load_average": {
                        "15m": 0.91,
                        "1m": 1.0,
                        "5m": 0.98
                    },
                    "percent": 2
                },
                "mem": {
                    "free_in_bytes": 41097691136,
                    "free_percent": 61,
                    "total_in_bytes": 67558256640,
                    "used_in_bytes": 26460565504,
                    "used_percent": 39
                },
                "swap": {
                    "free_in_bytes": 0,
                    "total_in_bytes": 0,
                    "used_in_bytes": 0
                },
                "timestamp": 1598986490424
            },
            "process": {
                "cpu": {
                    "percent": 0,
                    "total_in_millis": 11148700
                },
                "max_file_descriptors": 65536,
                "mem": {
                    "total_virtual_in_bytes": 15105900544
                },
                "open_file_descriptors": 626,
                "timestamp": 1598986490424
            },
            "roles": [
                "master",
                "data",
                "ingest"
            ],
            "script": {
                "cache_evictions": 0,
                "compilations": 0
            },
            "thread_pool": {
                "bulk": {
                    "active": 0,
                    "completed": 649821,
                    "largest": 16,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 16
                },
                "fetch_shard_started": {
                    "active": 0,
                    "completed": 20,
                    "largest": 20,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 1
                },
                "fetch_shard_store": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "flush": {
                    "active": 0,
                    "completed": 2015,
                    "largest": 5,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 1
                },
                "force_merge": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "generic": {
                    "active": 0,
                    "completed": 35878,
                    "largest": 4,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 4
                },
                "get": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "index": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "listener": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "management": {
                    "active": 1,
                    "completed": 19969,
                    "largest": 3,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 3
                },
                "refresh": {
                    "active": 0,
                    "completed": 602785,
                    "largest": 3,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 3
                },
                "search": {
                    "active": 0,
                    "completed": 33203,
                    "largest": 25,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 25
                },
                "snapshot": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "warmer": {
                    "active": 0,
                    "completed": 1594373,
                    "largest": 5,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 5
                }
            },
            "timestamp": 1598986490372,
            "transport": {
                "rx_count": 0,
                "rx_size_in_bytes": 0,
                "server_open": 0,
                "tx_count": 0,
                "tx_size_in_bytes": 0
            },
            "transport_address": "127.0.0.1:9300"
        }
    }
}
