Elastic X-Pack: users cannot be retrieved as native user service has not been started

I have been playing around with ES for a while now. It has got to the stage where the company wants to see a demo of the monitoring / security features that ES offers.

I have been using a version of the Ansible playbook, with some modifications to support version 5 of the software. This all works fine until I install X-Pack.

The problem I have is that the playbook always fails on the "List Native Users" task:

fatal: [54.77.239.235]: FAILED! => {"changed": false, "content": "{\"error\":{\"root_cause\":[{\"type\":\"illegal_state_exception\",\"reason\":\"users cannot be retrieved as native user service has not been started\"}],\"type\":\"illegal_state_exception\",\"reason\":\"users cannot be retrieved as native user service has not been started\"},\"status\":500}", "content_length": "269", "content_type": "application/json; charset=UTF-8", "failed": true, "json": {"error": {"reason": "users cannot be retrieved as native user service has not been started", "root_cause": [{"reason": "users cannot be retrieved as native user service has not been started", "type": "illegal_state_exception"}], "type": "illegal_state_exception"}, "status": 500}, "msg": "Status code was not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "status": 500, "url": "http://localhost:9200/_xpack/security/user"}

I can't seem to find any information on this error. Is anyone able to help me out? Also, on the second run of the playbook the default user stopped working and I had to use one that was created in the "vars" section; I assumed this was intentional, but I could not see any task that made that user inactive.
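
For what it's worth, the failing task is just a GET against the URL shown at the end of that error, so the same 500 can be reproduced outside Ansible. A quick sketch, using the es_admin credentials from the vars below:

curl -u es_admin:changeMe -XGET 'http://localhost:9200/_xpack/security/user?pretty'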

Some more information:

es_enable_xpack: true
es_xpack_features: ["alerting", "security"]
# Experimental below
es_api_basic_auth_username: es_admin
es_api_basic_auth_password: changeMe
es_role_mapping:
  power_user:
    - "cn=admins,dc=example,dc=com"
  user:
    - "cn=users,dc=example,dc=com"
    - "cn=admins,dc=example,dc=com"
es_users:
  native:
    kibana4_server:
      password: changeMe
      roles:
        - kibana4_server
  file:
    es_admin:
      password: changeMe
      roles:
        - admin
    testUser:
      password: changeMeAlso!
      roles:
        - power_user
        - user
es_roles:
  file:
    admin:
      cluster:
        - all
      indices:
        - names: '*'
          privileges:
            - all
    power_user:
      cluster:
        - monitor
      indices:
        - names: '*'
          privileges:
            - all
    user:
      indices:
        - names: '*'
          privileges:
            - read
    kibana4_server:
      cluster:
        - monitor
      indices:
        - names: '.kibana'
          privileges:
            - all
  native:
    logstash:
      cluster:
        - manage_index_templates
      indices:
        - names: 'logstash-*'
          privileges:
            - write
            - delete
            - create_index

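For context, my understanding is that the playbook creates the native entries above through the X-Pack security API once the node is up, while the file entries are written to disk on each node instead. A sketch only of the rough equivalent for kibana4_server, with the credentials taken from the vars above:

curl -u es_admin:changeMe -XPOST 'http://localhost:9200/_xpack/security/user/kibana4_server' -H 'Content-Type: application/json' -d '{
  "password": "changeMe",
  "roles": ["kibana4_server"]
}'
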
This error usually means that the native user service hasn't been started yet, either because it is waiting for a master to be elected or because the .security index is red and hasn't been recovered. Can you check in the logs whether your cluster is in a healthy state?
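
Both conditions can also be checked directly from a node; a minimal sketch, assuming a file-realm user such as es_admin and the default .security index name used by X-Pack 5.x:

curl -u es_admin:changeMe -XGET 'http://localhost:9200/_cluster/health?pretty'
curl -u es_admin:changeMe -XGET 'http://localhost:9200/_cat/indices/.security?v'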

That makes sense.

This is freshly installed on 3 nodes, so here is the "master":

root@ip-172-31-31-181:~# curl -u es_admin:trololol -XGET 'localhost:9200/_cluster/health?pretty'
{
  "cluster_name" : "actual-cluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 0,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 7,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 0.0
}

The logs are not much use, as all I get is the error we already know about.

Perhaps it is my misunderstanding of X-Pack: do I need to create the role mappings / users on each Elasticsearch node, or only on the master?

This appears to be a single, dedicated master node that is not part of a cluster. As it cannot hold data, no indices can be created. Make sure that you have data nodes in your cluster and that they can connect to each other.
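
For the Ansible role in use here, node roles and discovery settings are normally passed through the es_config variable, which is rendered into elasticsearch.yml; a minimal sketch for one of the data nodes, assuming that variable name and using the master's private IP from the shell prompt above (adjust hosts and counts for your topology):

es_config:
  cluster.name: "actual-cluster"
  network.host: "_site_"                                    # bind to the site-local address
  node.master: false                                        # data-only node, not master-eligible
  node.data: true
  discovery.zen.ping.unicast.hosts: "172.31.31.181:9300"    # the dedicated master, transport port
  discovery.zen.minimum_master_nodes: 1                     # only one master-eligible node in this setup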

Aha, I have now connected my nodes and it has turned green. I was just working on one node at a time; I didn't even think that a full cluster was required.

I think I am now up and running.

root@ip-172-31-31-181:~# curl -u es_admin:otrolol -XGET 'localhost:9200/_cluster/health?pretty'
{
  "cluster_name" : "actual-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 4,
  "active_shards" : 8,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

Time to break Logstash :smiley:

Hrmm, any reason why this admin user doesn't have access to read / write?

Error 403 Forbidden: [security_exception] action [indices:data/write/update[s]] is unauthorized for user [es_admin]

Config:

es_role_mapping:
  power_user:
    - "cn=admins,dc=example,dc=com"
  user:
    - "cn=users,dc=example,dc=com"
    - "cn=admins,dc=example,dc=com"
es_users:
  native:
    kibana4_server:
      password: changeMe
      roles:
        - kibana4_server
  file:
    es_admin:
      password: changeMe
      roles:
        - admin
    testUser:
      password: changeMeAlso!
      roles:
        - power_user
        - user
es_roles:
  file:
    admin:
      cluster:
        - all
      indices:
        - names: '*'
          privileges:
            - all
    power_user:
      cluster:
        - monitor
      indices:
        - names: '*'
          privileges:
            - all
    user:
      indices:
        - names: '*'
          privileges:
            - read
    kibana4_server:
      cluster:
        - monitor
      indices:
        - names: '.kibana'
          privileges:
            - all
  native:
    logstash:
      cluster:
        - manage_index_templates
      indices:
        - names: 'logstash-*'
          privileges:
            - write
            - delete
            - create_index
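
To narrow down the 403 above, one quick check is to confirm which user and role names the request is actually being authenticated as on the node being hit. A hedged sketch using the X-Pack authenticate API (substitute the real password):

curl -u es_admin:changeMe -XGET 'http://localhost:9200/_xpack/security/_authenticate?pretty'

If that shows the admin role as expected, the next thing I would look at is whether the file-based role definition itself is present on every node, since file-realm users and roles only exist on the nodes where those files were written.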
