Custom index name results in temporary bulk send failure

Hi,

We would like to use Filebeat in our Kubernetes installation to index our container log files.

As there are many containers running per host, we would like to have separate indices for every application.

So we set up our Filebeat with the option:

output.elasticsearch:
  index: "k8s-%{[kubernetes.namespace]}-%{[kubernetes.container.name]}-%{+yyyy.MM.dd}"
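
Note that Filebeat (at least in the 6.x series) refuses to start with a custom index name unless setup.template.name and setup.template.pattern are also set; the values below are just illustrative:

setup.template.name: "k8s"
setup.template.pattern: "k8s-*"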

Without that setting everything works fine, but after setting the custom index nothing is indexed, and we see the following error in our logs:

2018-07-31T11:37:11.901Z	ERROR	pipeline/output.go:92	Failed to publish events: temporary bulk send failure

There are no errors on the Elasticsearch side.
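
One way to see why individual events are rejected is to enable debug logging for the Elasticsearch output, since the bulk response details only show up at debug level (a sketch, assuming the standard Beats logging options):

logging.level: debug
logging.selectors: ["elasticsearch"]

The Filebeat log should then contain the per-item errors returned by Elasticsearch for the failing bulk requests.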

Any ideas what could have gone wrong here?

Elasticsearch requires index names to be lower case. Could that be the problem?

Unfortunately not ... our pod names and namespaces are lowercase as well.

I checked the indices in Elasticsearch, and they do get created; there are even some documents in them. But at some point Filebeat seems to stop indexing: far too few documents are indexed, and no new documents show up.

I did an additional test with a hardcoded index:

index: "k8s-ttt-%{+yyyy.MM.dd}"

That works without any problems.

Maybe it is an issue with bulk requests that contain documents destined for different Elasticsearch indices?
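
One thing worth ruling out (an assumption, not something confirmed in this thread): events that lack the kubernetes fields, for example host-level logs or events where the metadata enrichment did not match, cannot resolve that index format string. If your Filebeat version supports default values in format strings, a fallback can be supplied per field reference (a sketch; the fallback names are made up):

output.elasticsearch:
  index: "k8s-%{[kubernetes.namespace]:nonamespace}-%{[kubernetes.container.name]:nocontainer}-%{+yyyy.MM.dd}"

That way events without kubernetes metadata would still land in a valid index instead of failing the bulk request.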

Well, I am not sure field references work where you are trying to use them. Creating daily indices per namespace and container risks generating a lot of very small shards, which, as outlined in this blog post, is very inefficient and is likely to cause you problems down the line. Even if it worked, I would therefore recommend against doing that.

It's not an index per container + namespace ... it's an index per container name + namespace, which is basically an index per application (not per running container instance).

See how many shards this will generate and consider reducing the number of primary shards and/or the length of time each index covers to avoid getting too many small shards in your cluster.
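
To make the scale concrete with illustrative numbers: 50 applications with daily indices, 5 primary shards each, and 1 replica already add up to 50 × 30 × 5 × 2 = 15,000 shards after a month. The primary shard count for Filebeat-managed indices can be reduced via the template settings (a sketch; a single shard is just an example value):

setup.template.settings:
  index.number_of_shards: 1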
