Shield AD user is unauthorized only from a different subnet

Apologies if this should be posted in a different location. If so, I'm happy to move it.

I have a four-node cluster (three data nodes and one dedicated master) with configs managed by Puppet. Prior to installing Shield, two Logstash servers, 01 on the same subnet as the cluster and 02 on a different subnet, were both logging data into ES fine. After installing Shield with AD authentication, 02 started receiving "[indices:data/write/bulk] is unauthorized for user".

I've spent hours trying to find similar issues, debugging, etc.

Scenarios:

  1. 01 and 02 on the same subnet as the ES cluster.
  2. 02 on a different subnet.
  3. 01 using 02's credentials.

Symptoms:

  1. Both users authenticate and are granted the logstash role.
  2. Both users authenticate successfully, but 02 is flagged as unauthorized.
  3. The user authenticates and is granted the logstash role.

role_mapping.yml logstash entry:
logstash:
  - "CN=s_logstash01v,OU=ServiceAccounts,OU=Employees,DC=domain,DC=local"
  - "CN=s_logstash02v,OU=ServiceAccounts,OU=Employees,DC=domain,DC=local"

Log snippet:
[2016-06-26 15:46:53,543][DEBUG][shield.authc.activedirectory] [node-01] authenticated user [s_logstash01v], with roles [[logstash, Desktop Admins, Domain Users, PentahoRO, Users]]
[2016-06-26 15:46:52,724][DEBUG][shield.authc.activedirectory] [node-01] authenticated user [s_logstash02v], with roles [[logstash, Desktop Admins, Domain Users, PentahoRO, Users]]
[2016-06-26 15:39:52,755][DEBUG][shield.authc.support ] [node-01] the roles [[logstash]], are mapped from the user [active_directory] for realm [CN=s_logstash02v,OU=ServiceAccounts,OU=Employees,DC=domain,DC=local/active_directory]

[2016-06-26 17:04:30,540][DEBUG][shield.authc.activedirectory] [node-01] authenticated user [s_logstash02v], with roles [[logstash, Desktop Admins, Domain Users, PentahoRO, Users]]
[2016-06-26 17:04:30,540][DEBUG][shield.authz.esnative ] [node-01] attempting to load role [Desktop Admins] from index
[2016-06-26 17:04:30,540][DEBUG][shield.authz.esnative ] [node-01] attempting to load role [Domain Users] from index
[2016-06-26 17:04:30,540][DEBUG][shield.authz.esnative ] [node-01] attempting to load role [PentahoRO] from index
[2016-06-26 17:04:30,540][DEBUG][shield.authz.esnative ] [node-01] attempting to load role [Users] from index
[2016-06-26 17:04:30,541][DEBUG][rest.suppressed ] /_bulk Params: {}
ElasticsearchSecurityException[action [indices:data/write/bulk] is unauthorized for user [s_logstash02v]]
at org.elasticsearch.shield.support.Exceptions.authorizationError(Exceptions.java:45)
at org.elasticsearch.shield.authz.InternalAuthorizationService.denialException(InternalAuthorizationService.java:322)
at org.elasticsearch.shield.authz.InternalAuthorizationService.denial(InternalAuthorizationService.java:296)
at org.elasticsearch.shield.authz.InternalAuthorizationService.authorize(InternalAuthorizationService.java:215)
at org.elasticsearch.shield.action.ShieldActionFilter.apply(ShieldActionFilter.java:107)
at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:170)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:144)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:85)
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:52)
at .....

#elasticsearch.yml shield config:
shield:
  authc:
    realms:
      active_directory:
        type: active_directory
        order: 0
        domain_name: domain.local
        url: ldap://domain.local:389
        unmapped_groups_as_roles: true
  transport:
    filter:
      enabled: false

#user_roles is default config

My suggestion is to set shield.authc: DEBUG under the logger section in config/logging.yml, restart Elasticsearch, and then try to authenticate from the other subnet. At that point you can look at the logs and see which groups are retrieved and which roles are mapped.
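For reference, the addition would look roughly like this (a minimal sketch of the logger section in config/logging.yml; the rest of the file stays as shipped):

logger:
  shield.authc: DEBUG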

Hi Jay,

I've done that. That's how I obtained: "[2016-06-26 15:46:53,543][DEBUG][shield.authc.activedirectory] [node-01] authenticated user [s_logstash01v], with roles [[logstash, Desktop Admins, Domain Users, PentahoRO, Users]]"

Are there additional lines that I'm missing?

Yes, you should see some lines like:

[2016-05-27 15:20:55,913][DEBUG][shield.authc.support ] the roles [[]], are mapped from these [ldap] groups [[]] for realm [ldap/ldap1]

It should have values for the actual groups retrieved from Active Directory.

replaced.

Can you provide the lines before that first log line? What version of shield are you using?

The subnet should have nothing to do with the Active Directory query. I know that's what you are seeing, but I don't know of anything that would cause such an issue.

I'd try just using curl to validate that the user can index a document on one subnet vs. the other.
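For example, something along these lines from each subnet (the index name and document body here are just placeholders for the test; curl will prompt for the AD password):

curl -u s_logstash02v -XPOST 'http://esearch01v:9200/logstash-test/doc' -d '{"message": "subnet test"}'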

This should cover it.
[17:37:10,442][I][node ] [n1] version[2.3.3], pid[6842], build[218bdf1/2016-05-17T15:40:04Z]
[17:37:10,971][I][plugins ] [n1] modules [reindex, lang-expression, lang-groovy], plugins [head, license, shield], sites [head]
[17:37:10,997][I][env ] [n1] using [1] data paths, mounts [[/var/lib/elasticsearch (esearch-pool/esearch)]], net usable_space [152.3gb], net total_space [192.6gb], spins? [possibly], types [zfs]
[17:37:12,970][I][node ] [n1] initialized
[17:37:12,970][I][node ] [n1] starting ...
[17:37:13,251][I][shield.transport ] [n1] publish_address {01:9300}, bound_addresses {01:9300}, {127.0.0.1:9300}
[17:37:13,256][I][discovery ] [n1] OpsCluster-prd/ybzTfEoJR1aF7i0rZXTeqQ
[17:37:16,460][D][shield.authc.esnative ] [n1] security index [.security] does not exist, so service can start
[17:37:16,462][D][license.plugin.core ] [n1] previous [null]
[17:37:16,463][D][license.plugin.core ] [n1] current [{"uid":"a04cfe49-f384-451d-abe4-7cbc4cfa99b9","type":"trial","issue_date_in_millis":1466799900146,"expiry_date_in_millis":1469391900146,"max_nodes":1000,"issued_to":"OpsCluster-prd","issuer":"elasticsearch","signature":"/////gAAAODhEpWCrcHWdPdi+zTlWvJ4xsORFu+0hsO59IJiTwilwUsXuNOTs1/n8Y1pO69YNMs074GopHnZNWoR80gyrvZlbXCxzq8YTt+zbs+ld5OxOVaTFh5wAhKNyYA8ZdIjlvwCRckhdQyg1VdOKtdCX6sS5roROYeSqfdBFOiTDmZv/7zkNTBCr0SdG/m0V0G4CyuitiioE8Of+S/U17Iy9J24kcNshdVTt9XVrT2+FqJNCyp5Wj6PxGF0Tv0v8nDiYzoIKssMFH2uDsQV3qK2Ajj3TxnHDf9XU2ShgWJktKlF/A=="}]
[17:37:16,487][D][license.plugin.core ] [n1] notifying [1] listeners
[17:37:16,487][D][license.plugin.core ] [n1] licensee [shield] notified
[17:37:16,487][I][license.plugin.core ] [n1] license [a04cfe49-f384-451d-abe4-7cbc4cfa99b9] - valid
[17:37:16,487][D][license.plugin.core ] [n1] schedule grace notification after [27.1d] for license [a04cfe49-f384-451d-abe4-7cbc4cfa99b9]
[17:37:16,490][D][license.plugin.core ] [n1] scheduled expiry callbacks for [a04cfe49-f384-451d-abe4-7cbc4cfa99b9] expiring after [27.1d]
[17:37:16,684][I][http ] [n1] publish_address {01:9200}, bound_addresses {01:9200}, {127.0.0.1:9200}
[17:37:16,685][I][node ] [n1] started
[17:37:17,313][D][shield.authc.activedirectory] [n1] user not found in cache, proceeding with normal authentication
[17:37:17,517][D][shield.authc.activedirectory] [n1] group SID to DN search filter: [(|(objectSid=S-1-5-32-545)(objectSid=S-1-5-21-3672824143-1806866617-3368692887-513)(objectSid=S-1-5-21-3672824143-1806866617-3368692887-1836)(objectSid=S-1-5-21-3672824143-1806866617-3368692887-1371))]
[17:37:17,517][D][shield.authc.activedirectory] [n1] group SID to DN search filter: [(|(objectSid=S-1-5-32-545)(objectSid=S-1-5-21-3672824143-1806866617-3368692887-513)(objectSid=S-1-5-21-3672824143-1806866617-3368692887-1836)(objectSid=S-1-5-21-3672824143-1806866617-3368692887-1371))]
[17:37:17,564][D][shield.authc.activedirectory] [n1] found these groups [[CN=Users,CN=Builtin,DC=domain,DC=local, CN=Domain Users,CN=Users,DC=domain,DC=local, CN=PentahoRO,CN=Users,DC=domain,DC=local, CN=Desktop Admins,CN=Users,DC=domain,DC=local]] for userDN [CN=s_logstash02v,OU=ServiceAccounts,OU=Employees,DC=domain,DC=local]
[17:37:17,565][D][shield.authc.support ] [n1] the roles [[Desktop Admins, Domain Users, PentahoRO, Users]], are mapped from these [active_directory] groups [[CN=Users,CN=Builtin,DC=domain,DC=local, CN=Domain Users,CN=Users,DC=domain,DC=local, CN=PentahoRO,CN=Users,DC=domain,DC=local, CN=Desktop Admins,CN=Users,DC=domain,DC=local]] for realm [active_directory/active_directory]
[17:37:17,565][D][shield.authc.support ] [n1] the roles [[logstash]], are mapped from the user [active_directory] for realm [CN=s_logstash02v,OU=ServiceAccounts,OU=Employees,DC=domain,DC=local/active_directory]
[17:37:17,570][D][shield.authc.activedirectory] [n1] authenticated user [s_logstash02v], with roles [[logstash, Desktop Admins, Domain Users, PentahoRO, Users]]
[17:37:17,585][D][shield.authz.esnative ] [n1] attempting to load role [Desktop Admins] from index
[17:37:17,585][D][shield.authz.esnative ] [n1] attempting to load role [Domain Users] from index
[17:37:17,585][D][shield.authz.esnative ] [n1] attempting to load role [PentahoRO] from index
[17:37:17,586][D][shield.authz.esnative ] [n1] attempting to load role [Users] from index
[17:37:17,588][D][rest.suppressed ] /_bulk Params: {}
ElasticsearchSecurityException[action [indices:data/write/bulk] is unauthorized for user [s_logstash02v]]

Do you know what index this logstash user is trying to write into? You may be able to see this in the audit log as an access denied entry.
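If the audit log isn't already enabled, here is a minimal sketch of the elasticsearch.yml setting (Shield then writes access_denied events to the audit log file by default):

shield.audit.enabled: true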

Did you change the logstash role at all?

I do not and have not. However, the request is allowed if I move the server to the same subnet as the ES cluster without changing any other parameters. I would expect that if it were a permissions issue on the index, it wouldn't matter which subnet it's on?

Thanks!

The subnet should not matter at all, as it does not get taken into account for authentication or authorization. I think there is something else going on and we need to narrow down the problem. First we need to figure out which index the user is getting an authorization exception for.

More debug... but no index specified.

[20:06:19,293][DEBUG][shield.audit.logfile ] [node-01] [transport] [access_denied] origin_type=[rest], origin_address=[10.1.x.x], principal=[s_logstash02v], action=[indices:data/write/bulk], request=[BulkRequest]
[20:06:19,335][DEBUG][shield.audit.logfile ] [node-01] [transport] [access_denied] origin_type=[rest], origin_address=[10.1.x.x], principal=[s_logstash02v], action=[indices:data/write/bulk], request=[BulkRequest]
[20:06:19,575][DEBUG][shield.audit.logfile ] [node-01] [transport] [access_denied] origin_type=[rest], origin_address=[10.1.x.x], principal=[s_logstash02v], action=[indices:data/write/bulk], request=[BulkRequest]
[20:06:19,639][DEBUG][shield.audit.logfile ] [node-01] [transport] [access_denied] origin_type=[rest], origin_address=[10.1.x.x], principal=[s_logstash02v], action=[indices:data/write/bulk], request=[BulkRequest]

I see how that isn't helpful and know why it doesn't show the indices, but that won't help you now. On the logstash side do you get any other details in the logs? Can you share the logstash configuration?

Here is the debug log with the credentials working; the only difference is moving the Logstash server onto the same subnet as the ES cluster (i.e., the only file changed was /etc/sysconfig-network/ifcfg-devX, plus of course the DNS record).

[20:49:09,552][DEBUG][shield.audit.logfile ] [node-01] [transport] [access_granted] origin_type=[rest], origin_address=[10.1.x.x], principal=[s_logstash02v], action=[indices:data/write/bulk], request=[BulkRequest]
[20:49:09,554][DEBUG][shield.audit.logfile ] [node-01] [transport] [access_granted] origin_type=[rest], origin_address=[10.1.x.x], principal=[s_logstash02v], action=[indices:data/write/bulk[s]], indices=[topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27], request=[BulkShardRequest]
[20:49:09,554][DEBUG][shield.audit.logfile ] [node-01] [transport] [access_granted] origin_type=[rest], origin_address=[10.1.x.x], principal=[s_logstash02v], action=[indices:data/write/bulk[s]], indices=[topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27], request=[BulkShardRequest]
[20:49:09,555][DEBUG][shield.audit.logfile ] [node-01] [transport] [access_granted] origin_type=[rest], origin_address=[10.1.x.x], principal=[s_logstash02v], action=[indices:data/write/bulk[s]], indices=[topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27], request=[BulkShardRequest]
[20:49:09,555][DEBUG][shield.audit.logfile ] [node-01] [transport] [access_granted] origin_type=[rest], origin_address=[10.1.x.x], principal=[s_logstash02v], action=[indices:data/write/bulk[s]], indices=[topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27], request=[BulkShardRequest]
[20:49:09,556][DEBUG][shield.audit.logfile ] [node-01] [transport] [access_granted] origin_type=[rest], origin_address=[10.1.x.x], principal=[s_logstash02v], action=[indices:data/write/bulk[s]], indices=[topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27,topbeat-2016.06.27], request=[BulkShardRequest]

logstash output config snippet:
} else {
  elasticsearch {
    hosts => [ "esearch01v:9200" ]
    #sniffing => true
    user => s_logstash02v
    password => '##########'
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

logstash.log snippet:
{:timestamp=>"2016-06-27T21:01:26.252000+0000", :message=>"[403] {"error":{"root_cause":[{"type":"security_exception","reason":"action [indices:data/write/bulk] is unauthorized for user [s_logstash02v]"}],"type":"security_exception","reason":"action [indices:data/write/bulk] is unauthorized for user [s_logstash02v]"},"status":403}", :class=>"Elasticsearch::Transport::Transport::Errors::Forbidden", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:146:in __raise_transport_error'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:256:inperform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:125:inperform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/actions/bulk.rb:87:in bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:innon_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in bulk'", "org/jruby/ext/thread/Mutex.java:149:insynchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:163:insafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:101:in submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:86:inretrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:29:in multi_receive'", "org/jruby/RubyArray.java:1653:ineach_slice'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:28:in multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:130:inworker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:114:in multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:inoutput_batch'", "org/jruby/RubyHash.java:1342:in each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:inoutput_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:224:in worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:193:instart_workers'"], :level=>:warn}
{:timestamp=>"2016-06-27T21:01:26.570000+0000", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :reconnect_backoff_sleep=>0.5, :level=>:warn}

logstash.log snippet #2:
{:timestamp=>"2016-06-27T21:03:10.970000+0000", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["http://esearch01v:9200/"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided?", :error_message=>"[403] {"error":{"root_cause":[{"type":"security_exception","reason":"action [indices:data/write/bulk] is unauthorized for user [s_logstash02v]"}],"type":"security_exception","reason":"action [indices:data/write/bulk] is unauthorized for user [s_logstash02v]"},"status":403}", :error_class=>"Elasticsearch::Transport::Transport::Errors::Forbidden", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:146:in __raise_transport_error'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:256:inperform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:125:inperform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/actions/bulk.rb:87:in bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:innon_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in bulk'", "org/jruby/ext/thread/Mutex.java:149:insynchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:163:insafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:101:in submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:86:inretrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:29:in multi_receive'", "org/jruby/RubyArray.java:1653:ineach_slice'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:28:in multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:130:inworker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:114:in multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:inoutput_batch'", "org/jruby/RubyHash.java:1342:in each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:inoutput_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:224:in worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:193:instart_workers'"], :client_config=>{:hosts=>["http://esearch01v:9200/"], :ssl=>nil, 
:transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :headers=>{"Authorization"=>"..."}, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :level=>:error}

Does the configuration differ at all between the two instances? Just to confirm, when you say 02 on a different subnet, you are changing the IP/network config of the instance?

It is the same instance; I'm just changing the IP/network config on the server and restarting {logstash,filebeat,topbeat}, and ES happily authorizes the instance. Very bizarre!

Hi Jay,

Just following up on whether this issue is being actively discussed/worked, or if I need to look at alternative solutions/products.

Thanks for your help,
Joel

Hi Joel,

I have been thinking about this issue. When you change the IP, do you still have all of the same inputs, such as filebeat and topbeat, coming into the instance?

For debugging purposes, can you grant the logstash role access to '*' (all indices)? Then we should be able to see from the audit logs which indices are being indexed into and try to determine what is happening.
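Something along these lines in roles.yml, assuming the logstash role still matches the example from the Shield docs; only the indices pattern is widened to '*' for this test, and it should be reverted afterwards:

logstash:
  cluster: indices:admin/template/get, indices:admin/template/put
  indices:
    '*':
      privileges: write, delete, create_index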

I know that many of our test systems are on different subnets, including those that connect to Active Directory, and we haven't reproduced this issue.

The lack of indices in the access denied messages is something I plan to address shortly.

Jay