Security setup on existing cluster failing

Hi All,

Since the latest versions of Elasticsearch include X-Pack security by default, I wanted to set it up.
I have a 2-node cluster (just for some home use) running on Ubuntu 16.04, installed from the Elastic repos.

I followed the guide from the Elastic blog.
The only difference is that I use 2 separate nodes, so not everything on the same machine.
Starting the cluster works fine and the status turns to green.

```
[2019-05-31T14:28:36,164][INFO ][o.e.c.r.a.AllocationService] [es0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_task_manager][0]] ...]).
```

But when trying to create the users I get the following error:

```
[2019-05-31T14:28:40,172][INFO ][o.e.x.s.s.SecurityIndexManager] [es0] security index does not exist. Creating [.security-7] with alias [.security]
[2019-05-31T14:28:40,225][INFO ][o.e.c.m.MetaDataCreateIndexService] [es0] [.security-7] creating index, cause [api], templates [default], shards [1]/[0], mappings [_doc]
[2019-05-31T14:28:40,227][INFO ][o.e.c.r.a.AllocationService] [es0] updating number_of_replicas to [1] for indices [.security-7]
[2019-05-31T14:28:40,479][INFO ][o.e.x.s.s.SecurityIndexManager] [es0] Missing _meta field in mapping [_doc] of index [.security]
[2019-05-31T14:28:40,480][WARN ][o.e.c.s.ClusterApplierService] [es0] failed to notify ClusterStateListener java.lang.IllegalStateException: Cannot read security-version string in index .security
```

The users are created in the index, but authentication with them fails.
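The `Missing _meta field` warning in the log can be verified directly against the index. This is just a sketch, assuming a node reachable on `localhost:9200` without authentication (since the built-in users are not working yet):

```shell
# Show the mappings of the new security index; on a healthy cluster
# the "_doc" mapping contains a "_meta" object with a security version,
# which is exactly what the log says is missing here.
curl -s 'http://localhost:9200/.security-7/_mapping?pretty'
```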

I set up a fresh 2-node cluster on DigitalOcean droplets with the same specs as above (Ubuntu 16.04 with the Elastic repos), following the same guide, and it worked flawlessly. When comparing the two `.security-7` indices, a lot of mappings are missing on my failing one.

The problem cluster started on version 6.4 a while ago and has been upgraded ever since. Could it be that there are some dangling index templates interfering with the creation of the security index, and if so, where can I prune those?
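The templates registered in the cluster can be listed to look for leftovers from earlier versions. A minimal sketch, again assuming a node on `localhost:9200`:

```shell
# List all index templates in the cluster (legacy _template API,
# valid on 6.x/7.x), to spot anything left behind by old versions.
curl -s 'http://localhost:9200/_template?pretty'
```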

A possible workaround is to have a temporary node join the cluster, set up the users via that node, and, if that works, remove the node again and carry on from there.
But I might hit the same issue again when upgrading, so I'd rather fix it properly.

Thanks in advance, and let me know if more info is needed.


The workaround with a temporary node ran into the same issue. I'm probably missing something, or made a specific change that causes this behaviour. Hopefully someone can point me in the right direction of where to look.

Do you have a template called default?

Yes, with the following contents:

```json
{
  "order": -1,
  "index_patterns": [ … ],
  "settings": {
    "index": {
      "number_of_shards": "1",
      "number_of_replicas": "1"
    }
  },
  "mappings": {},
  "aliases": {}
}
```

Hi @jpcarey,

Thanks for the hint!
Since there was nothing useful left in the default template (I had already moved all of its settings to other templates a month ago), I deleted the default template and performed the actions again.
Now it's working like a charm.
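For anyone else hitting this, the fix boils down to something like the following (assuming a node on `localhost:9200` and a standard deb/rpm install of Elasticsearch):

```shell
# Remove the leftover "default" index template that was matching
# the .security-7 index and stripping its built-in mappings.
curl -s -XDELETE 'http://localhost:9200/_template/default'

# Then retry creating the built-in users.
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
```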

Such a small thing, and so easy to miss.

Thanks again!


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.