Metricbeat monitoring bulk insert fails (status=403): {"type":"security_exception","reason":"action [indices:data/write/bulk[s]] is unauthorized for user [metricbeat_writer]"}


It has taken me the whole day and I still can't work it out.
Please see below and tell me what is wrong.

My metricbeat.yml settings:


metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.name: "metricbeat"
setup.template.pattern: "metricbeat-*"
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression

setup.kibana:
  host: "xxxxxxx:5601"

setup.ilm.check_exists: false

output.elasticsearch:
  hosts: ["xxxxxxx:9200"]
  index: "metricbeat-%{[agent.version]}-%{+yyyy.MM.dd}"
  protocol: "https"
  username: "metricbeat_writer"
  password: "xxxxxxxxxxxxxx"
  ssl.certificate_authorities: ["/etc/logstash/elasticsearch-ca.pem"]

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

monitoring.enabled: true
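For context: with monitoring.enabled: true, the Beat also ships its own monitoring data to the .monitoring-beats-* system indices, which a role scoped only to metricbeat-* has no right to write to — hence the 403 only when monitoring is on. A minimal sketch of a role that covers both, via the Elasticsearch create-role API (role name and privilege list here are an assumption; adjust to your setup):

    POST /_security/role/metricbeat_writer
    {
      "cluster": ["monitor", "read_ilm"],
      "indices": [
        {
          "names": ["metricbeat-*"],
          "privileges": ["create_index", "create_doc", "view_index_metadata"]
        },
        {
          "names": [".monitoring-beats-*"],
          "privileges": ["create_index", "create_doc"]
        }
      ]
    }

Alternatively, Elasticsearch ships a built-in remote_monitoring_agent role intended for agents that ship monitoring data; assigning it to the user alongside your metricbeat-* writing role avoids hand-maintaining the monitoring index privileges.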


Jul 8 16:26:45 ls02 metricbeat: 2020-07-08T16:26:45.021+0800  WARN  elasticsearch/client.go:266  monitoring bulk item insert failed (i=0, status=403): {"type":"security_exception","reason":"action [indices:data/write/bulk[s]] is unauthorized for user [metricbeat_writer]"}

From what I have tested, setting monitoring.enabled from true to false makes the error go away, but for my use case it has to be true.

Please help! Thanks.

Widening the index privilege pattern from "metricbeat-*" to "*" would also work, but that is probably not a proper solution.

I have the same problem when attempting to write to a filebeat index with Logstash. We are running 7.8 across the board here. Our index privileges are view_index_metadata, create_index, create_doc, and index. I tried adding the cluster privilege cluster:admin/ingest/pipeline/get as well, but that doesn't change the outcome.

I was looking at this article yesterday to troubleshoot the problem.

It does appear to be a permissions issue, because when you grant the user essentially all permissions on an index it doesn't error out. However, that seems a bit risky.

I tried setting the index privileges to "all", but the security_exception still exists.

I have a support case open and we are working to diagnose the issue. As of right now, when I disable the service, reboot the server, and start it again, it's able to send logs for a while before getting bogged down with errors. Can you try this to see if you observe the same behavior?

The only configurations in which I am able to send logs are the ones mentioned above:

  1. Widening the index privilege pattern from "metricbeat-*" to "*"
  2. Setting monitoring.enabled from true to false, although for my use case it has to be true

As for your suggestion, which server has to be rebooted? That said, I doubt a reboot is the solution, since this is clearly a security issue, which I think is an application-level bug.

Any help from the Elastic team, please?

@mmk1995 My problem was that Logstash was reading a variable to determine the index it wrote to, and when an event arrived with inconsistent data in that field, it tried to create a new index. This was identified by assigning the account superuser privileges and noticing a suspicious new index appear.
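To illustrate that failure mode, here is a hypothetical sketch of such an output (the field name [@metadata][target_index] is made up for the example): if an event arrives with garbage in that field, Logstash attempts a write to an index outside the pattern the role is allowed to create, and the bulk request comes back as a 403 security_exception.

    output {
      elasticsearch {
        hosts => ["xxxxxxx:9200"]
        # Index name is taken from the event itself; one malformed event
        # is enough to trigger an unauthorized index-creation attempt.
        index => "%{[@metadata][target_index]}"
      }
    }

A defensive design choice is to validate or default that field in a filter block before it ever reaches the output.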

One other thing: when using Logstash, if you have multiple pipeline configs, ensure that your inputs, filters, and outputs use if statements to control which events flow through them. When Logstash starts up, it compiles a single configuration from all the .conf files in the conf.d directory.
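The guard described above can be sketched like this: tag events at the input and gate the output on that tag, so events from one .conf file do not leak into another file's output (port, tag, and index names here are hypothetical):

    # 01-filebeat.conf
    input {
      beats {
        port => 5044
        tags => ["filebeat_in"]
      }
    }

    output {
      # Without this conditional, every event from every other .conf file
      # in conf.d would also be sent through this output.
      if "filebeat_in" in [tags] {
        elasticsearch {
          hosts => ["xxxxxxx:9200"]
          index => "filebeat-%{+yyyy.MM.dd}"
        }
      }
    }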

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.