Logstash won't write to ES with logstash role

I have read most of the posts that pertain to this subject, but still couldn't make it work.

I'm getting this error when trying to write some randomly generated data to ES:

Failed action. {:status=>403, :action=>["index", {:_id=>nil, :_index=>"oganes-2016.11.09", :_type=>"logs", :_routing=>nil}, #<LogStash::Event:0x26d71e83 @metadata_accessors=#<LogStash::Util::Accessors:0x10579f4f @store={}, @lut={}>, @cancelled=false, @data={"message"=>"line 3", "@version"=>"1", "@timestamp"=>"2016-11-09T23:14:20.259Z", "host"=>"herd-blah.com", "sequence"=>2}, @metadata={}, @accessors=#<LogStash::Util::Accessors:0x2ca6fe96 @store={"message"=>"line 3", "@version"=>"1", "@timestamp"=>"2016-11-09T23:14:20.259Z", "host"=>"herd-blah.com", "sequence"=>2}, @lut={"host"=>[{"message"=>"line 3", "@version"=>"1", "@timestamp"=>"2016-11-09T23:14:20.259Z", "host"=>"herd-blah.com", "sequence"=>2}, "host"], "sequence"=>[{"message"=>"line 3", "@version"=>"1", "@timestamp"=>"2016-11-09T23:14:20.259Z", "host"=>"herd-blah.com", "sequence"=>2}, "sequence"], "type"=>[{"message"=>"line 3", "@version"=>"1", "@timestamp"=>"2016-11-09T23:14:20.259Z", "host"=>"herd-blah.com", "sequence"=>2}, "type"]}>>], :response=>{"index"=>{"_index"=>"oganes-2016.11.09", "_type"=>"logs", "_id"=>nil, "status"=>403, "error"=>{"type"=>"security_exception", "reason"=>"action [indices:admin/create] is unauthorized for user [lbviewer]"}}}, :level=>:warn}

my roles.yml

admin:
  cluster:
    - all
  indices:
    - names: '*'
      privileges:
        - all

The required role for logstash users

logstash:
  cluster:
    - manage_index_templates
  indices:
    - names: 'oganes*'
      privileges:
        - all

my role_mapping.yml

logstash:
  - "CN=lbviewer,OU=Service Accounts,OU=System Accounts,DC=corp,DC=blah,DC=com"

If I move this DN under the admin: mapping it works fine, but I want to be able to run it with the logstash role.

What am I doing wrong?

Have you checked for errors about parsing the files in the elasticsearch log file?

I doubt a parsing error is happening, since when I move the user from the admin mapping to logstash, my Shield trace shows the user is authenticated as [logstash]. Shouldn't a parsing error be logged on my Elasticsearch data nodes?

Another test I did was with the authenticate API, with the user in each of the two role mappings:

curl -XGET -u lbviewer http://herd-es1:9201/_shield/authenticate
Enter host password for user 'lbviewer':
{"username":"lbviewer","roles":["admin"],"full_name":null,"email":null,"metadata":{}}

curl -XGET -u lbviewer http://herd-es1:9201/_shield/authenticate
Enter host password for user 'lbviewer':
{"username":"lbviewer","roles":["logstash"],"full_name":null,"email":null,"metadata":{}}

Did you want me to validate roles.yml?

I ran a Ruby command to validate the YAML:

ruby -e "require 'yaml';puts YAML.load_file('./roles.yml')"
logstashclustermanage_index_templatesindicesnamesoganesprivilegesallkibana4_serverclustermonitorindicesnames.kibanaprivilegesallpower_userclustermonitorindicesnamesprivilegesalltransport_clientclustertransport_clientremote_marvel_agentclustermanage_index_templatesindicesnames.marvel-es-privilegesalluserindicesnamesprivilegesreadadminclusterallindicesnamesprivilegesallmarvel_userindicesnames.marvel-es-*privilegesreadnames.kibanaprivilegesview_index_metadataread

ruby -e "require 'yaml';puts YAML.load_file('./role_mapping.yml')"
logstashCN=lbviewer,OU=Service Accounts,OU=System Accounts,DC=corp,DC=blah,DC=comkibana4_serverCN=KibanaServer,OU=Service Accounts,OU=System Accounts,DC=corp,DC=blah,DC=comadminCN=SiteReliability,OU=Roles,OU=Groups,DC=corp,DC=blah,DC=com

seems to be ok

Yes, a parsing error will be logged. Can you set the log level to debug and see what group DNs are returned from LDAP or AD?
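For reference, on Shield 2.x that debug output can be enabled with a logger entry in logging.yml on the nodes handling authentication (a sketch; adjust the logger name and level to your setup):

```yaml
# logging.yml — raise Shield authentication logging to DEBUG
logger:
  shield.authc: DEBUG
```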

So here is what I am seeing in the ES logs

[2016-11-14 19:57:50,004][DEBUG][shield.authc.support ] [herd-es1-blue] the roles [[]], are mapped from these [ldap] groups [[CN=lbviewers,OU=Security,OU=Groups,DC=corp,DC=blah,DC=com]] for realm [ldap/ldap1]
[2016-11-14 19:57:50,004][DEBUG][shield.authc.support ] [herd-es1-blue] the roles [[logstash]], are mapped from the user [ldap] for realm [CN=lbviewer,OU=Service Accounts,OU=System Accounts,DC=corp,DC=blah,DC=com/ldap]
[2016-11-14 19:57:50,014][DEBUG][shield.authc.ldap ] [herd-es1-blue] authenticated user [lbviewer], with roles [[logstash]]
[2016-11-14 19:57:50,249][DEBUG][shield.authc.ldap ] [herd-es1-blue] authenticated user [lbviewer], with roles [[logstash]]
[2016-11-14 19:57:50,377][DEBUG][shield.authc.ldap ] [herd-es1-blue] authenticated user [lbviewer], with roles [[logstash]]
[2016-11-14 19:57:50,409][DEBUG][shield.authc.ldap ] [herd-es1-blue] authenticated user [lbviewer], with roles [[logstash]]
[2016-11-14 19:57:50,674][DEBUG][shield.authc.ldap ] [herd-es1-blue] authenticated user [lbviewer], with roles [[logstash]]

Looks like the group is mapped to an empty role list, but in my case I am using a user that's part of the group, and its roles are properly assigned. Does the group mapping override the user mapping?

That looks good to me. Do you have the role on all of the nodes? Is it the same everywhere? Are you using any roles created via the API or only file based roles?
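A low-tech way to confirm the files match everywhere is to compare checksums. The sketch below fakes two identical "node" copies locally just to illustrate; in practice you would fetch each node's roles.yml (e.g. with scp), and the paths and node names here are assumptions:

```shell
# Checksum each node's copy of roles.yml; any divergent copy shows up
# as a different hash. Two identical copies are created locally for
# illustration; in practice copy them down from the actual nodes.
mkdir -p /tmp/roles_audit
for node in master1 data1; do
  printf 'logstash:\n  cluster:\n    - manage_index_templates\n' \
    > "/tmp/roles_audit/$node.roles.yml"
done
md5sum /tmp/roles_audit/*.roles.yml
```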

It should be, at least on the data nodes. I will check the master nodes and query nodes.
I have assigned the native realm an order of 1, but based on the users and roles I have set up, none of that applies here and it shouldn't conflict.
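For context, realm precedence comes from the order values in elasticsearch.yml; something along these lines (a sketch assuming an LDAP realm consulted ahead of the native realm; the realm names are examples):

```yaml
# elasticsearch.yml — realms are consulted in ascending order
shield:
  authc:
    realms:
      ldap1:
        type: ldap
        order: 0
      native1:
        type: native
        order: 1
```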

This is what I initially set up when I was messing with local access:

curl -XGET -u ***** 'http://localhost:9201/_shield/role?pretty'

{
  "kibana4_server_role" : {
    "cluster" : [ "all" ],
    "indices" : [ {
      "names" : [ ".kibana*" ],
      "privileges" : [ "all" ]
    } ],
    "run_as" : [ ]
  },
  "kibana4_server" : {
    "cluster" : [ "all" ],
    "indices" : [ {
      "names" : [ "*" ],
      "privileges" : [ "indices:data/read/search" ]
    } ],
    "run_as" : [ "test" ]
  }
}

curl -XGET -u **** 'http://localhost:9201/_shield/user?pretty'

{
  "kibana-server" : {
    "username" : "kibana-server",
    "roles" : [ "kibana4_server" ],
    "full_name" : null,
    "email" : null,
    "metadata" : { }
  },
  "kibana_server" : {
    "username" : "kibana_server",
    "roles" : [ "kibana4_server" ],
    "full_name" : null,
    "email" : null,
    "metadata" : { }
  }
}

WORKED!!! My master nodes were missing the updated roles.yml and role_mapping.yml files.
Now Logstash can create the index. Thanks a lot for helping and pointing me in the right direction.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.