Cluster block

We are getting a cluster block exception well before our watermark is reached.

[2019-07-12T00:31:30,354][WARN ][o.e.x.m.e.l.LocalExporter] [elkcoord02.ytz0] unexpected error while indexing monitoring document
org.elasticsearch.xpack.monitoring.exporter.ExportException: ClusterBlockException[blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:128) ~[?:?]
        at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[?:1.8.0_191]
        at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) ~[?:1.8.0_191]
        at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:1.8.0_191]
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_191]
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_191]
        at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) ~[?:1.8.0_191]
        at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) ~[?:1.8.0_191]
        at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_191]
        at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) ~[?:1.8.0_191]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:129) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$0(LocalBulk.java:111) ~[?:?]
        at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:85) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:81) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:43) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.action.bulk.TransportBulkAction$BulkRequestModifier.lambda$wrapActionListenerIfNeeded$0(TransportBulkAction.java:645) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.finishHim(TransportBulkAction.java:450) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation$1.onFailure(TransportBulkAction.java:445) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:91) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.action.support.ContextPreservingActionListener.onFailure(ContextPreservingActionListener.java:50) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:946) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:786) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:172) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:100) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$apply$0(SecurityActionFilter.java:88) ~[?:?]
        at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$authorizeRequest$4(SecurityActionFilter.java:177) ~[?:?]
        at org.elasticsearch.xpack.security.authz.AuthorizationUtils$AsyncAuthorizer.maybeRun(AuthorizationUtils.java:177) ~[?:?]
        at org.elasticsearch.xpack.security.authz.AuthorizationUtils$AsyncAuthorizer.setRunAsRoles(AuthorizationUtils.java:171) ~[?:?]
        at org.elasticsearch.xpack.security.authz.AuthorizationUtils$AsyncAuthorizer.authorize(AuthorizationUtils.java:159) ~[?:?]
        at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.authorizeRequest(SecurityActionFilter.java:179) ~[?:?]
        at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$applyInternal$3(SecurityActionFilter.java:157) ~[?:?]
        at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:60) ~[elasticsearch-6.6.1.jar:6.6.1]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lambda$authenticateAsync$2(AuthenticationService.java:177) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lambda$lookForExistingAuthentication$4(AuthenticationService.java:210) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lookForExistingAuthentication(AuthenticationService.java:221) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.authenticateAsync(AuthenticationService.java:175) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.access$000(AuthenticationService.java:134) ~[?:?]
        at org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:104) ~[?:?]
        at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.applyInternal(SecurityActionFilter.java:156) ~[?:?]

The disk availability is 74.87%. With the default watermark values, what could be going wrong?

The cluster settings are:

{
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "allow_rebalance" : "indices_primaries_active",
          "cluster_concurrent_rebalance" : "100",
          "node_concurrent_recoveries" : "100",
          "node_initial_primaries_recoveries" : "500"
        }
      }
    },
    "xpack" : {
      "monitoring" : {
        "collection" : {
          "enabled" : "true"
        }
      }
    }
  },
  "transient" : { }
}
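For reference, the watermark thresholds actually in effect (including the built-in defaults, since none are overridden above) can be confirmed directly; a minimal check, assuming the cluster is reachable at localhost:9200:

# Show the effective disk watermark settings, defaults included
# (6.x defaults: low=85%, high=90%, flood_stage=95% of disk used)
curl -s 'http://localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true' | grep watermark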

It probably means that at some point your free disk space dropped below 5%, the default cluster.routing.allocation.disk.watermark.flood_stage threshold: https://www.elastic.co/guide/en/elasticsearch/reference/current/disk-allocator.html.
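For reference, the fix described on that page, once disk space has been freed, is to clear the block from the affected indices (in 6.x it is not removed automatically); a minimal sketch, again assuming localhost:9200:

# Remove the read-only / allow-delete block from all indices
curl -s -X PUT 'http://localhost:9200/_all/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'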

Yes, I understand that is the obvious possibility, but there was no such scenario. The available disk space never went below 60%.

Even while merging some segments?

There are only two ways that this block can be applied: either manually via the API, or automatically if disk usage went above the flood_stage watermark. If it was due to disk usage then there will be evidence in your logs.
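Whichever way it was applied, the indices currently carrying the block can be listed via the index settings API; a quick check (endpoint assumed):

# List the read_only_allow_delete block setting across all indices
curl -s 'http://localhost:9200/_all/_settings/index.blocks.read_only_allow_delete?pretty'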

I have checked all the logs. The flood_stage watermark has not been reached.

Even while merging segments, disk usage should not climb that high: even if merging temporarily doubled the ~30% of disk currently in use, that would only reach 60%, still well below the watermarks.
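One way to back that up with data is to watch the per-node disk figures the allocator itself sees while merges are running; for example (endpoint assumed):

# Per-node disk used/available and percentage, as seen by the disk allocator
curl -s 'http://localhost:9200/_cat/allocation?v'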

In which case someone or something applied this block via the API.

No. That is not the case.


I am not sure whether this exception is relevant.

The ILM policy is as follows:

{
  "hot_warm_purge_policy" : {
    "version" : 1,
    "modified_date" : "2019-06-28T13:35:49.675Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : { }
        },
        "delete" : {
          "min_age" : "4d",
          "actions" : {
            "delete" : { }
          }
        },
        "warm" : {
          "min_age" : "2d",
          "actions" : {
            "allocate" : {
              "include" : { },
              "exclude" : { },
              "require" : {
                "box_type" : "warm"
              }
            },
            "forcemerge" : {
              "max_num_segments" : 1
            }
          }
        }
      }
    }
  }
}
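If this policy is relevant at all, the warm phase's forcemerge to a single segment is the most plausible suspect, since that is exactly the kind of merge that temporarily needs extra disk. ILM's per-index progress, and any errors it has recorded, can be checked with the explain API; a sketch, assuming localhost:9200:

# Show each index's current ILM phase/action and any ILM errors
curl -s 'http://localhost:9200/*/_ilm/explain?pretty'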

I am facing a similar issue and looking for a fix.

@Hari_Prasad, as I said above, the solution is in the documentation link the other David shared in this thread (https://www.elastic.co/guide/en/elasticsearch/reference/current/disk-allocator.html).

@DavidTurner, @dadoonet how can I measure the amount of disk space needed for segment merging?
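As a rough rule of thumb, a merge needs enough free disk to write a complete new copy of the segments being merged, so a force merge down to one segment can transiently need about the size of the shard itself. Current shard and segment sizes can be read from the cat APIs; my-index-* below is a placeholder pattern:

# Per-shard store size: an estimate of the temporary space a full merge may need
curl -s 'http://localhost:9200/_cat/shards/my-index-*?v&h=index,shard,prirep,store'
# Sizes of the individual segments that would be merged
curl -s 'http://localhost:9200/_cat/segments/my-index-*?v'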
