Unauthorized for user [_system] error during bulk index

After upgrading my ES stack to 5.2.1-1, I'm getting a lot of bulk index errors about a minute into the indexing process.

Logs:

2017-02-21 09:37:56,293 Push failed with error (u'3 document(s) failed to index.', [{u'index': {u'status': 403, u'_type': u'cloudtrail', u'_id': u'12c76afa-09f5-443f-b636-cd0024d452c1', u'error': {u'reason': u'action [indices:data/write/bulk[s][p]] is unauthorized for user [_system]', u'type': u'security_exception'}, u'_index': u'logstash-2016-12-10'}}, {u'index': {u'status': 403, u'_type': u'cloudtrail', u'_id': u'0545239b-cdf5-4efd-ba43-8c39f16fadf0', u'error': {u'reason': u'action [indices:data/write/bulk[s][p]] is unauthorized for user [_system]', u'type': u'security_exception'}, u'_index': u'logstash-2016-12-10'}}, {u'index': {u'status': 403, u'_type': u'cloudtrail', u'_id': u'8d55abd9-891f-454c-abbb-6c6da36a293b', u'error': {u'reason': u'action [indices:data/write/bulk[s][p]] is unauthorized for user [_system]', u'type': u'security_exception'}, u'_index': u'logstash-2016-12-10'}}])

Basically, I'm using pyelasticsearch to push AWS CloudTrail logs to the nodes in the cluster; this particular cluster has 8 nodes (1 master). The errors hit pretty much all the nodes at random, roughly a minute after indexing starts, and they coincide with the bulk index slowing down, e.g. when a request takes around 10 seconds to respond instead of 0.05 seconds.
The failed docs all belong to the same index, 'logstash-2016-12-10', as you can see from the logs above. I'm not actually using Logstash to push the logs, just keeping the naming convention. After about 30 seconds this behavior goes away and the rest of the indexing runs smoothly.
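For context, here is a minimal sketch of the kind of bulk push described above. It uses the official elasticsearch-py client in place of pyelasticsearch, and the hosts, index name, and document shape are assumptions for illustration, not the actual code that hit the error:

from elasticsearch import Elasticsearch, helpers

# Hypothetical connection details; the real cluster has 8 nodes.
es = Elasticsearch(["http://node1:9200", "http://node2:9200"])

def cloudtrail_actions(records):
    # Wrap each CloudTrail record as a bulk "index" action, keeping the
    # logstash-YYYY-MM-DD naming convention even though Logstash isn't used.
    for rec in records:
        yield {
            "_op_type": "index",
            "_index": "logstash-2016-12-10",
            "_type": "cloudtrail",
            "_id": rec["eventID"],
            "_source": rec,
        }

# Example record; real CloudTrail events carry many more fields.
records = [{"eventID": "12c76afa-09f5-443f-b636-cd0024d452c1",
            "eventName": "DescribeInstances"}]

success, errors = helpers.bulk(es, cloudtrail_actions(records), raise_on_error=False)
print(success, errors)  # "errors" holds per-document failures such as the 403s above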

Can you tell us:

  • which version of Elasticsearch you upgraded from?
  • approximately how many nodes are in your cluster?
  • how many shards and replicas you have on your logstash index?

You can determine the last part with:

GET /logstash-2016-12-10/_settings?pretty
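
If you'd rather check it programmatically, something like this with the elasticsearch-py client should show the same values (the host is an assumption):

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # assumed host
settings = es.indices.get_settings(index="logstash-2016-12-10")
idx = settings["logstash-2016-12-10"]["settings"]["index"]
print(idx["number_of_shards"], idx["number_of_replicas"])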

Thanks.

Another useful piece of information would be the stack trace (if any) from the cluster logs.

@CharlesZ we just released 5.2.2 (https://www.elastic.co/blog/elasticsearch-5-2-2-released), and one of the fixes in this release should resolve the issue you're seeing.
