Access denied when accessing Elasticsearch index files

Hi,

I am setting up a data analytics project using an ELK pipeline over a large volume of Apache log files. One of the required data features is to continuously update the session IDs for every single web transaction in the Apache log. The Java Client API v2.1.1 is used to perform the data updates.

I've encountered the following problems during testing and debugging, and I hope anyone with similar experience can give me some clues.

Processor: Intel Core i7-4790 3.6GHz

RAM: 16GB

Disk Space: 465GB on the data drive

ELK Spec: logstash-2.1.0, elasticsearch-2.1.1, kibana-4.3.0, running on a single-node machine

Client API: SearchScroll, Batch API and partial update API

Amount of Data: 500 million records

  1.  Elasticsearch running slowly

It took about 10 days to load 500 million log records into Elasticsearch 2.1.1. Does that figure seem normal?

  2.  Failure of Batch API

The data update transactions are requested via the batch API; however, no matter what bulk size and timeout/retry interval are used, the last batch update in a series of update transactions always fails with a NoNodeAvailableException.

  3.  Random Access Denied exceptions

If the partial data update is invoked directly, without the batch API, a random error like the following appears. The error may occur after every few million update transactions.

[2016-01-21 08:59:06,303][DEBUG][action.admin.indices.stats] [Johnny Storm] [indices:monitor/stats] failed to execute operation for shard [[akamai_access_log_2015.05][2], node[MD8IEMRHRlyCqT3MSuwjhw], [P], v[12], s[STARTED], a[id=vN3wl3mwS-bq9UNJsqxNA]]
[akamai_access_log_2015.05][[akamai_access_log_2015.05][2]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: ElasticsearchException[failed to refresh store stats]; nested: AccessDeniedException[D:\development\projects\akamai\elasticsearch-2.1.1\data\elasticsearch\nodes\0\indices\akamai_access_log_2015.05\2\index_b93d.si];

The above exceptions look like a synchronization issue between data transactions; however, without sufficient insight into the Elasticsearch implementation, I feel I am in the dark and have no idea how to get rid of these issues. I hope the descriptions give you some idea of what the actual problems are. I am happy to provide further details if needed.

Regards,
Landong

Hi Landong,

It took about 10 days to load 500 million log records into Elasticsearch 2.1.1. Does that figure seem normal?

No, not at all.
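To see which side of the pipeline is slow, one option (a sketch, assuming the stock `metrics` filter plugin that ships with Logstash; the field names follow its documented `meter` output) is to meter the event rate and print it to stdout. Run the pipeline once with the `elasticsearch` output disabled and once enabled, and compare the rates; if Logstash alone is fast, the bottleneck is on the Elasticsearch side.

```
# Hypothetical throughput probe -- not your production pipeline config.
filter {
  metrics {
    meter   => "events"
    add_tag => "metric"
  }
}
output {
  # Print the 1-minute event rate periodically; re-enable the
  # elasticsearch output to compare end-to-end throughput.
  if "metric" in [tags] {
    stdout {
      codec => line { format => "rate_1m: %{[events][rate_1m]}" }
    }
  }
}
```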

Failure of Batch API ... always fails with a NoNodeAvailableException

This implies that the connection to the cluster is failing: either a firewall is blocking it, or perhaps there is no elected master node. This has nothing to do with the bulk request itself.

How are you attempting to connect: TransportClient or NodeClient?

Random Access Denied exceptions

Two thoughts on the last exception:

nested: AccessDeniedException[D:\development\projects\akamai\elasticsearch-2.1.1\data\elasticsearch\nodes\0\indices\akamai_access_log_2015.05\2\index_b93d.si];

Can you check Elasticsearch's logs for a more in-depth error here? You should be able to search the logs of the "Johnny Storm" node for it (though if you've restarted the node, its name will have changed; you should name your nodes manually). This may signal some type of access-rights failure, perhaps due to a manual file copy?

Relatedly, if you are running Elasticsearch as a Windows service, ensure that the D: drive is not a mapped network drive: Windows services cannot read from mapped network drives, though they can use a UNC path. Chances are this is not the case, since it failed on a specific file.

Hello Chris,

Thank you for the information, which is very helpful. May I ask a few follow-up questions about your answer?
No, not at all.
Is there a way to identify which component is responsible for an underperforming pipeline, Logstash or Elasticsearch? I have some regex patterns in the Logstash filter, but I don't expect them to have a significant impact on performance.

NoNodeAvailableException

I don't understand how the prior update transactions went through successfully while the last batch update transaction got blocked. I use TransportClient.
Is it possible that this exception is thrown under other circumstances? I noticed that if a partial update request is sent to a different index with a non-existing ID, the same exception is thrown, but I am sure this is not the case here.

No, I am afraid no manual file copy was conducted. The exception is thrown at random, which looks like a synchronization issue between different threads within Elasticsearch. I am running two threads concurrently on the client side: search scroll and batch update. Is it possible that the search scroll has locked the index file? Any clues?

I've attached two sample exceptions found in the Elasticsearch log file; I hope they help.

Regards,
Landong

Sorry, I have to put the log in a separate post due to the forum's post-size limit.

[akamai_access_log_2015.05][[akamai_access_log_2015.05][1]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: ElasticsearchException[failed to refresh store stats]; nested: AccessDeniedException[D:\development\projects\akamai\elasticsearch-2.1.1\data\elasticsearch\nodes\0\indices\akamai_access_log_2015.05\1\index_bv9b.cfe];
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:405)
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:382)
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:371)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: ElasticsearchException[failed to refresh store stats]; nested: AccessDeniedException[D:\development\projects\akamai\elasticsearch-2.1.1\data\elasticsearch\nodes\0\indices\akamai_access_log_2015.05\1\index_bv9b.cfe];
at org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1534)
at org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1519)
at org.elasticsearch.common.util.SingleObjectCache.getOrRefresh(SingleObjectCache.java:55)
at org.elasticsearch.index.store.Store.stats(Store.java:293)
at org.elasticsearch.index.shard.IndexShard.storeStats(IndexShard.java:659)
at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:134)
at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:165)
at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:401)
... 7 more
Caused by: java.nio.file.AccessDeniedException: D:\development\projects\akamai\elasticsearch-2.1.1\data\elasticsearch\nodes\0\indices\akamai_access_log_2015.05\1\index_bv9b.cfe
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
at sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
at sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.Files.size(Files.java:2332)
at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:198)
at org.apache.lucene.store.FilterDirectory.fileLength(FilterDirectory.java:67)
at org.apache.lucene.store.FilterDirectory.fileLength(FilterDirectory.java:67)
at org.elasticsearch.index.store.Store$StoreStatsCache.estimateSize(Store.java:1543)
at org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1532)
... 15 more

Jan 23, 2016 11:40:21 PM org.elasticsearch.action.support.ThreadedActionListener$2 onFailure
WARNING: [Jaren] failed to execute failure callback on [org.elasticsearch.action.bulk.BulkProcessor$1@2dbb56e0], failure [NoNodeAvailableException[None of the configured nodes were available: [{Occulus}{uQkZ94fqTzWK3LrTZKZUrw}{127.0.0.1}{localhost/127.0.0.1:9300}]]; nested: NodeDisconnectedException[[Occulus][localhost/127.0.0.1:9300][indices:data/write/bulk] disconnected];]
EsRejectedExecutionException[rejected execution of org.elasticsearch.action.support.ThreadedActionListener$2@6048a598 on EsThreadPoolExecutor[listener, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@64f6f383[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 83]]]
at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:50)
at java.util.concurrent.ThreadPoolExecutor.reject(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.execute(Unknown Source)
at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.execute(EsThreadPoolExecutor.java:85)
at org.elasticsearch.action.support.ThreadedActionListener.onFailure(ThreadedActionListener.java:101)
at org.elasticsearch.client.transport.TransportClientNodesService$RetryListener.onFailure(TransportClientNodesService.java:244)
at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:46)
at org.elasticsearch.transport.TransportService$Adapter$3.run(TransportService.java:588)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

There was a coding error in the application: the TransportClient was closed too soon, before the last batch request could complete. After changing the logic, the bulk process API works fine now. Sorry about the confusion; the following issue was a false alarm and should be withdrawn:

Failure of Batch API

The data update transactions are requested via the batch API; however, no matter what bulk size and timeout/retry interval are used, the last batch update in a series of update transactions always fails with a NoNodeAvailableException.
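For anyone hitting the same symptom: the root cause above was ordering, not the cluster. The pattern can be illustrated with plain JDK classes (a simplified stand-in for BulkProcessor and TransportClient, not the actual Elasticsearch API): closing the asynchronous worker before its queued requests have drained loses the tail of the batch, while awaiting termination first lets every request complete. With the real client, the equivalent is calling BulkProcessor's awaitClose before closing the TransportClient.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ShutdownOrdering {

    // Stand-in for a series of async bulk requests; names and numbers
    // are illustrative only.
    static int runBatches(boolean awaitBeforeClose) throws InterruptedException {
        ExecutorService bulkWorker = Executors.newSingleThreadExecutor();
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 5; i++) {
            bulkWorker.submit(() -> {
                try {
                    Thread.sleep(50);              // pretend network round-trip
                    completed.incrementAndGet();   // "batch acknowledged"
                } catch (InterruptedException ignored) {
                    // interrupted mid-flight: this batch never completes
                }
            });
        }
        if (awaitBeforeClose) {
            bulkWorker.shutdown();                             // stop accepting new work
            bulkWorker.awaitTermination(10, TimeUnit.SECONDS); // drain the queue first
        } else {
            bulkWorker.shutdownNow(); // "close too soon": queued batches are dropped
        }
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("closed too soon:      " + runBatches(false) + "/5 batches completed");
        System.out.println("drained, then closed: " + runBatches(true) + "/5 batches completed");
    }
}
```

The drained variant always reports 5/5; closing immediately drops whatever is still queued, which mirrors the last-batch NoNodeAvailableException above.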

Which version was this problem fixed in? I am running Logstash 2.3.3 and Elasticsearch 2.3.3, and I still get the error.

It occurs multiple times per day, say 2-3 times an hour. Both nodes are running under the same account. It occurs on a different index each time and does not appear to do lasting damage (an index with the problem today will be fine tomorrow). When I go to look at the specific file it failed on, it doesn't exist in the folder. It looks like a file gets removed during the operation, so the "access denied" occurs when a node tries to access the now-deleted file.

Thank you

[2016-08-24 09:52:26,909][DEBUG][action.admin.indices.stats] [MyNode] [indices:monitor/stats] failed to execute operation for shard [[my_index][2], node[ENcsHMCVRc2bZs_9tQPtAA], [P], v[682], s[STARTED], a[id=HRD2FcLkTDSRaXIpQ5AnBw]]
ElasticsearchException[failed to refresh store stats]; nested: AccessDeniedException[G:\elasticsearch\data\cluster\nodes\0\indices\my_index\2\index_13m5.si];
at org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1532)
at org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1517)
at org.elasticsearch.common.util.SingleObjectCache.getOrRefresh(SingleObjectCache.java:55)
at org.elasticsearch.index.store.Store.stats(Store.java:293)
at org.elasticsearch.index.shard.IndexShard.storeStats(IndexShard.java:702)
at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:134)
at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:165)
at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:420)
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:399)
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:386)
at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.nio.file.AccessDeniedException: G:\elasticsearch\data\cluster\nodes\0\indices\my_index\2\index_13m5.si
at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
at sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(Unknown Source)
at sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(Unknown Source)
at sun.nio.fs.WindowsFileSystemProvider.readAttributes(Unknown Source)
at java.nio.file.Files.readAttributes(Unknown Source)
at java.nio.file.Files.size(Unknown Source)
at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:210)
at org.apache.lucene.store.FilterDirectory.fileLength(FilterDirectory.java:67)
at org.apache.lucene.store.FilterDirectory.fileLength(FilterDirectory.java:67)
at org.elasticsearch.index.store.Store$StoreStatsCache.estimateSize(Store.java:1541)
at org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1530)
... 17 more