Cluster flooded with random date indices

Hello,

I've been deploying Winlogbeat on many servers at the same time lately, and it seems like one or more of them is triggering a bug that creates lots of indices with different date stamps (my index is defined as winlogbeat-%{+dd.MM.yyyy}).

Right now I have something like 700 indices that were wrongfully created (whoever created those indices put about 6 months, or sometimes 1 month, of docs in them, but doesn't use them as a main index to write to).

I had this issue previously during another wave of deployment, and I managed to find the culprit by shutting down my Winlogbeat services one by one until only one server remained. It was a 2008 R2, and when I connected to it there was no issue with its date.

But now I have way too many Winlogbeat agents deployed to allow myself to do that again. I was wondering if there is a way to know who is writing to which index?

I was thinking of looking in Discover and checking the index field to find the server, but only my current index shows up: https://i.imgur.com/JgEg4Or.png
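What I would really like is to run something directly against one of the stray indices to see which hosts wrote into it. A rough sketch of what I mean (the index name is just an example of one of the stray ones, and depending on the Winlogbeat version the host field might be beat.hostname instead of host.name):

GET winlogbeat-06.03.2019/_search
{
  "size": 0,
  "aggs": {
    "hosts": {
      "terms": {
        "field": "host.name",
        "size": 50
      }
    }
  }
}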

Any ideas how i could find the problem ?

Why do you say these are random? They appear valid. It might be that the Beats are processing old data, and that's why you have data from past dates.

You should really look at ILM - https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-settings.html - you have very small indices, and you are wasting resources having daily indices like that.
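As a rough sketch of the kind of policy I mean (the policy name and thresholds are just placeholders to adapt to your own retention needs):

PUT _ilm/policy/winlogbeat-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}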

I say random because the date stamps don't follow each other; days are missing in between, so I don't know how some Beat came up with these dates. It could be old data, but with winlogbeat-%{+dd.MM.yyyy}, doesn't the date get pulled from the server date rather than the log date?

My Winlogbeat indices aren't supposed to be this small. My real daily index reaches 10 GB a day, and I expect it to grow to 40 when I add my remaining Windows servers. The indices you see in the screenshot are this small because whoever created them wrote a few documents into each one, then created another index the next second and did the same.

Also, I can't use ILM right now because of my problem with templates overwriting themselves.

I found a solution to my problem.

I ran the following request, which disables auto index creation, except for indices matching the ".monitoring-*" pattern.

PUT _cluster/settings
{
  "persistent": {
    "action.auto_create_index": ".monitoring-*"
  }
}
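To double-check it was applied, you can read the setting back (flat_settings just makes the output easier to scan):

GET _cluster/settings?flat_settings=true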

This way, whichever server kept creating those "fake" indices, for whatever reason it was doing that, can't add indices anymore; it can only write to the existing write index, which is the good one.

The server might still be trying to create indices as I write this, but it's useless, it can't.

It turns out that disabling auto index creation is a recommendation from Elasticsearch for a production cluster. I was reading those recommendations, and that's how I found this solution.
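For reference, the setting accepts a comma-separated list, so more patterns could be whitelisted the same way if someone needs to (the extra pattern here is only an example, not what I actually used):

PUT _cluster/settings
{
  "persistent": {
    "action.auto_create_index": ".monitoring-*,winlogbeat-*"
  }
}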

.monitoring indices are system indices created by the Monitoring functionality; you probably don't want to disable their creation like that.

If you don't want Monitoring running then you should disable it.
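If you do want to stop it from collecting, something along these lines should work (assuming a version of Elasticsearch/X-Pack where this setting exists):

PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": false
  }
}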

Hi,

"action.auto_create_index" : ".monitoring-*" 

means disabling auto index creation for everything except indices that match the pattern .monitoring-*.

So my monitoring indices are going to keep doing what they do, as usual.

Ahh yeah sorry, I totally misread that :frowning:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.