ES crashing multiple times, over 1 billion docs a day, indexing rate falling from 25k/s to 2k/s

GET / output:

{
  "name": "es-ingest-3",
  "cluster_name": "ct",
  "cluster_uuid": "IED4OO5CR6ur7ZCRNmfWEg",
  "version": {
    "number": "6.1.2",
    "build_hash": "5b1fea5",
    "build_date": "2018-01-10T02:35:59.208Z",
    "build_snapshot": false,
    "lucene_version": "7.1.0",
    "minimum_wire_compatibility_version": "5.6.0",
    "minimum_index_compatibility_version": "5.0.0"
  },
  "tagline": "You Know, for Search"
}

GET _cat/nodes?v output:

ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.4.128.2           9          97   5    0.52    0.25     0.13 m         -      es-master-3
10.4.0.3             11          92  17    0.13    0.21     0.29 m         -      es-master-1
10.4.0.7             38          86  93   11.51    6.04     3.68 i         -      es-ingest-1
10.5.0.2             39          69   6    0.60    0.43     0.22 d         -      es-data-3
10.6.0.4             61          67  38    1.74    1.30     0.96 i         -      es-ingest-2
10.7.0.2             12          99  11    0.61    0.57     0.55 m         -      es-master-2
10.1.0.2             19          88  22    1.37    0.95     0.73 d         -      es-data-4
10.9.128.2           24          90  19    1.50    0.94     0.78 d         -      es-data-7
10.1.192.2           12          97  24    1.15    1.39     0.79 d         -      es-data-0
10.3.0.9             48          99  64    4.75    3.65     3.33 d         -      es-data-1
10.2.0.2             71          96  30    1.21    0.97     0.93 d         -      es-data-6
10.7.128.6           61          69  44    0.86    0.76     0.70 i         -      es-ingest-0
10.6.8.4             15          97   3    0.07    0.09     0.09 m         -      es-master-4
10.9.0.3             25          80   5    0.18    0.30     0.43 m         *      es-master-0
10.6.128.2           59          90  49    4.31    3.92     4.04 d         -      es-data-9
10.2.128.2           21          68   6    0.26    0.16     0.10 d         -      es-data-5
10.8.0.2             68          78  34    1.33    1.08     1.08 d         -      es-data-8
10.7.128.5           54          69  45    0.86    0.76     0.70 i         -      es-ingest-3
10.3.128.2           74          91  22    1.27    0.77     0.68 d         -      es-data-2

Average indexing rate: 6k to 10k documents per second.
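
A quick way to see whether bulk requests are queueing up or being rejected on individual nodes while the rate drops is the thread pool cat API (in 6.x the indexing thread pool is named bulk). This is only a diagnostic sketch, not output from this cluster:

GET _cat/thread_pool/bulk?v&h=node_name,active,queue,rejected

A growing queue or a non-zero rejected count on the data nodes usually means they cannot keep up with the bulk load.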

When I perform the following steps, the indexing rate increases to 25k to 30k per second:

PUT /logstash-*/_settings
{
    "index" : {
        "refresh_interval" : "-1"
    }
}
PUT /logstash-*/_settings
{
    "index" : {
        "number_of_replicas" : "0"
    }
}
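
As a side note, once a backlog is cleared these settings are normally restored again; the values below are only illustrative, not taken from this cluster:

PUT /logstash-*/_settings
{
    "index" : {
        "refresh_interval" : "30s",
        "number_of_replicas" : "1"
    }
}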

Performing the above steps improved the rate, and the cluster ran very well for about 10 days, indexing nearly 17 billion documents. But it failed when there was a spike of 1.3 billion documents in a day. Since then, it has not been stable: every time I perform the above steps, the indexing rate increases for a few hours and then the cluster crashes again.

Since then, I have been performing the following steps, in this order:

PUT /logstash-*/_settings
{
    "index" : {
        "number_of_replicas" : "1"
    }
}
PUT /_cluster/settings
{
    "transient" : {
        "cluster.routing.allocation.enable" : "none"
    }
}

I waited for all the shards to be reallocated (see the note after these steps for one way to check this) and then ran the following steps:

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
PUT /_cluster/settings
{
    "transient" : {
        "cluster.routing.allocation.enable" : "all"
    }
}
PUT /logstash-*/_settings
{
    "index" : {
        "refresh_interval" : "-1"
    }
}
PUT /logstash-*/_settings
{
    "index" : {
        "number_of_replicas" : "0"
    }
}
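
To confirm that relocation has actually finished before and after flipping these settings, the cluster health API can be polled; this is just a sketch of one way to do it:

GET _cluster/health?wait_for_no_relocating_shards=true&timeout=10m

Active recoveries can also be watched with GET _cat/recovery?active_only=true&v.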

We get an average of 800 million documents every day, with occasional spikes of around 1.3 billion per day. Please suggest the best way to address this. We currently have a 7-day backlog which needs to be cleared.
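
To see how much disk and how many documents each daily index takes, the indices cat API helps; the column list here is just a suggestion:

GET _cat/indices/logstash-*?v&h=index,pri,rep,docs.count,pri.store.size,store.size&s=index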

I am new to managing ES, please guide me.

I had to restart all the Logstash containers; indexing improved again but failed after a few hours. This process keeps repeating, with the following logs:

[2019-06-06T09:17:50,436][WARN ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] high disk watermark [50gb] exceeded on [M6fzMFiuT_Ks7R0layHpIg][es-data-9][/data/nodes/0] free: 49.9gb[5%], shards will be relocated away from this node 
[2019-06-06T09:17:50,436][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] low disk watermark [100gb] exceeded on [q4M5NMS1T5GguZor8fHDSw][es-data-8][/data/nodes/0] free: 51.6gb[5.2%], replicas will not be assigned to this node 
[2019-06-06T09:17:50,436][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] rerouting shards: [high disk watermark exceeded on one or more nodes] 
[2019-06-06T09:18:50,724][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] low disk watermark [100gb] exceeded on [M6fzMFiuT_Ks7R0layHpIg][es-data-9][/data/nodes/0] free: 50gb[5%], replicas will not be assigned to this node 
[2019-06-06T09:18:50,724][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] low disk watermark [100gb] exceeded on [q4M5NMS1T5GguZor8fHDSw][es-data-8][/data/nodes/0] free: 51.2gb[5.2%], replicas will not be assigned to this node 
[2019-06-06T09:19:50,941][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] low disk watermark [100gb] exceeded on [q4M5NMS1T5GguZor8fHDSw][es-data-8][/data/nodes/0] free: 51.3gb[5.2%], replicas will not be assigned to this node 
[2019-06-06T09:19:50,941][WARN ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] high disk watermark [50gb] exceeded on [M6fzMFiuT_Ks7R0layHpIg][es-data-9][/data/nodes/0] free: 49.5gb[5%], shards will be relocated away from this node 
[2019-06-06T09:19:50,941][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] rerouting shards: [high disk watermark exceeded on one or more nodes] 
[2019-06-06T09:20:51,070][WARN ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] high disk watermark [50gb] exceeded on [M6fzMFiuT_Ks7R0layHpIg][es-data-9][/data/nodes/0] free: 49.1gb[4.9%], shards will be relocated away from this node 
[2019-06-06T09:20:51,070][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] low disk watermark [100gb] exceeded on [q4M5NMS1T5GguZor8fHDSw][es-data-8][/data/nodes/0] free: 50.7gb[5.1%], replicas will not be assigned to this node 
[2019-06-06T09:20:51,070][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] rerouting shards: [high disk watermark exceeded on one or more nodes] 
[2019-06-06T09:21:51,073][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] low disk watermark [100gb] exceeded on [q4M5NMS1T5GguZor8fHDSw][es-data-8][/data/nodes/0] free: 50.3gb[5.1%], replicas will not be assigned to this node 
[2019-06-06T09:21:51,073][WARN ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] high disk watermark [50gb] exceeded on [M6fzMFiuT_Ks7R0layHpIg][es-data-9][/data/nodes/0] free: 48.7gb[4.9%], shards will be relocated away from this node 
[2019-06-06T09:21:51,073][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] rerouting shards: [high disk watermark exceeded on one or more nodes] 
[2019-06-06T09:22:51,245][WARN ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] high disk watermark [50gb] exceeded on [q4M5NMS1T5GguZor8fHDSw][es-data-8][/data/nodes/0] free: 49.8gb[5%], shards will be relocated away from this node 
[2019-06-06T09:22:51,245][WARN ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] high disk watermark [50gb] exceeded on [M6fzMFiuT_Ks7R0layHpIg][es-data-9][/data/nodes/0] free: 49gb[4.9%], shards will be relocated away from this node 
[2019-06-06T09:22:51,245][INFO ][o.e.c.r.a.DiskThresholdMonitor] [es-master-0] rerouting shards: [high disk watermark exceeded on one or more nodes]

and

[2019-06-06T11:02:09,629][WARN ][o.e.c.a.s.ShardStateAction] [es-master-0] [logstash-2019.05.31][1] received shard failed for shard id [[logstash-2019.05.31][1]], allocation id [j32CrqOtR9eeqwKxujusNg], primary term [1], message [failed to perform indices:data/write/bulk[s] on replica [logstash-2019.05.31][1], node[y_u8hfExQgeHsnZUiXTgEg], relocating [q4M5NMS1T5GguZor8fHDSw], [P], s[RELOCATING], a[id=j32CrqOtR9eeqwKxujusNg, rId=xQNBvAklQCuyCwADuIOazg], expected_shard_size[34874350916]], failure [RemoteTransportException[[es-data-6][10.2.0.2:9300][indices:data/write/bulk[s][r]]]; nested: IllegalStateException[active primary shard [logstash-2019.05.31][1], node[y_u8hfExQgeHsnZUiXTgEg], relocating [q4M5NMS1T5GguZor8fHDSw], [P], s[RELOCATING], a[id=j32CrqOtR9eeqwKxujusNg, rId=xQNBvAklQCuyCwADuIOazg], expected_shard_size[34874350916] cannot be a replication target before relocation hand off, state is [CLOSED]]; ] org.elasticsearch.transport.RemoteTransportException: [es-data-6][10.2.0.2:9300][indices:data/write/bulk[s][r]] 
Caused by: java.lang.IllegalStateException: active primary shard [logstash-2019.05.31][1], node[y_u8hfExQgeHsnZUiXTgEg], relocating [q4M5NMS1T5GguZor8fHDSw], [P], s[RELOCATING], a[id=j32CrqOtR9eeqwKxujusNg, rId=xQNBvAklQCuyCwADuIOazg], expected_shard_size[34874350916] cannot be a replication target before relocation hand off, state is [CLOSED]     
at org.elasticsearch.index.shard.IndexShard.verifyReplicationTarget(IndexShard.java:1479) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.index.shard.IndexShard.ensureWriteAllowed(IndexShard.java:1462) ~[elasticsearch-6.1.2.jar:6.1.2]    
 at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:683) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnReplica(IndexShard.java:674) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.action.bulk.TransportShardBulkAction.performOpOnReplica(TransportShardBulkAction.java:518) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.action.bulk.TransportShardBulkAction.performOnReplica(TransportShardBulkAction.java:480) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnReplica(TransportShardBulkAction.java:466) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnReplica(TransportShardBulkAction.java:72) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:566) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:529) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.index.shard.IndexShard$2.onResponse(IndexShard.java:2305) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.index.shard.IndexShard$2.onResponse(IndexShard.java:2283) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:238) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.index.shard.IndexShard.acquireReplicaOperationPermit(IndexShard.java:2282) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.doRun(TransportReplicationAction.java:640) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:512) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:492) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1554) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:637) ~[elasticsearch-6.1.2.jar:6.1.2]     
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.1.2.jar:6.1.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]

It looks like you are running out of disk space on a number of the nodes, which is causing problems. What is the output of the cluster stats API? What is the specification of the data nodes? Have you gone through this guide in order to optimize storage?
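
Per-node disk usage can also be checked with the allocation cat API, for example:

GET _cat/allocation?v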

@Christian_Dahlqvist Thank you so much for the quick response.

{
  "_nodes": {
    "total": 19,
    "successful": 19,
    "failed": 0
  },
  "cluster_name": "CT",
  "timestamp": 1559840080745,
  "status": "yellow",
  "indices": {
    "count": 46,
    "shards": {
      "total": 312,
      "primaries": 262,
      "replication": 0.19083969465648856,
      "index": {
        "shards": {
          "min": 2,
          "max": 19,
          "avg": 6.782608695652174
        },
        "primaries": {
          "min": 1,
          "max": 10,
          "avg": 5.695652173913044
        },
        "replication": {
          "min": 0,
          "max": 1,
          "avg": 0.5391304347826087
        }
      }
    },
    "docs": {
      "count": 22460095427,
      "deleted": 377088037
    },
    "store": {
      "size": "9.7tb",
      "size_in_bytes": 10710142406747
    },
    "fielddata": {
      "memory_size": "1.5mb",
      "memory_size_in_bytes": 1596184,
      "evictions": 0
    },
    "query_cache": {
      "memory_size": "579.8mb",
      "memory_size_in_bytes": 607981662,
      "total_count": 15253843,
      "hit_count": 5854961,
      "miss_count": 9398882,
      "cache_size": 28731,
      "cache_count": 190790,
      "evictions": 162059
    },
    "completion": {
      "size": "0b",
      "size_in_bytes": 0
    },
    "segments": {
      "count": 9582,
      "memory": "12.6gb",
      "memory_in_bytes": 13614256626,
      "terms_memory": "9.2gb",
      "terms_memory_in_bytes": 9954853209,
      "stored_fields_memory": "3gb",
      "stored_fields_memory_in_bytes": 3235097416,
      "term_vectors_memory": "0b",
      "term_vectors_memory_in_bytes": 0,
      "norms_memory": "4.8kb",
      "norms_memory_in_bytes": 4992,
      "points_memory": "391.5mb",
      "points_memory_in_bytes": 410559161,
      "doc_values_memory": "13.1mb",
      "doc_values_memory_in_bytes": 13741848,
      "index_writer_memory": "2.1gb",
      "index_writer_memory_in_bytes": 2279877502,
      "version_map_memory": "1.1gb",
      "version_map_memory_in_bytes": 1183825372,
      "fixed_bit_set": "9mb",
      "fixed_bit_set_memory_in_bytes": 9528888,
      "max_unsafe_auto_id_timestamp": 1559812962626,
      "file_sizes": {}
    }
  },
  "nodes": {
    "count": {
      "total": 19,
      "data": 10,
      "coordinating_only": 0,
      "master": 5,
      "ingest": 4
    },
    "versions": [
      "6.1.2"
    ],
    "os": {
      "available_processors": 76,
      "allocated_processors": 76,
      "names": [
        {
          "name": "Linux",
          "count": 19
        }
      ],
      "mem": {
        "total": "436.3gb",
        "total_in_bytes": 468512059392,
        "free": "25.9gb",
        "free_in_bytes": 27917049856,
        "used": "410.3gb",
        "used_in_bytes": 440595009536,
        "free_percent": 6,
        "used_percent": 94
      }
    },
    "process": {
      "cpu": {
        "percent": 84
      },
      "open_file_descriptors": {
        "min": 583,
        "max": 61263,
        "avg": 7486
      }
    },
    "jvm": {
      "max_uptime": "264.6d",
      "max_uptime_in_millis": 22865457255,
      "versions": [
        {
          "version": "1.8.0_151",
          "vm_name": "OpenJDK 64-Bit Server VM",
          "vm_version": "25.151-b12",
          "vm_vendor": "Oracle Corporation",
          "count": 19
        }
      ],
      "mem": {
        "heap_used": "81.5gb",
        "heap_used_in_bytes": 87553822600,
        "heap_max": "167.3gb",
        "heap_max_in_bytes": 179726188544
      },
      "threads": 1308
    },
    "fs": {
      "total": "14.5tb",
      "total_in_bytes": 15956139687936,
      "free": "4.6tb",
      "free_in_bytes": 5081616539648,
      "available": "3.9tb",
      "available_in_bytes": 4323713826816
    },
    "plugins": [
      {
        "name": "ingest-user-agent",
        "version": "6.1.2",
        "description": "Ingest processor that extracts information from a user agent",
        "classname": "org.elasticsearch.ingest.useragent.IngestUserAgentPlugin",
        "has_native_controller": false,
        "requires_keystore": false
      },
      {
        "name": "ingest-geoip",
        "version": "6.1.2",
        "description": "Ingest processor that uses looksup geo data based on ip adresses using the Maxmind geo database",
        "classname": "org.elasticsearch.ingest.geoip.IngestGeoIpPlugin",
        "has_native_controller": false,
        "requires_keystore": false
      },
      {
        "name": "x-pack",
        "version": "6.1.2",
        "description": "Elasticsearch Expanded Pack Plugin",
        "classname": "org.elasticsearch.xpack.XPackPlugin",
        "has_native_controller": true,
        "requires_keystore": true
      }
    ],
    "network_types": {
      "transport_types": {
        "netty4": 19
      },
      "http_types": {
        "netty4": 19
      }
    }
  }
}

Please format your code, logs, or configuration files using the </> icon, as explained in this guide, and not the citation button. It will make your post more readable.

Or use markdown style like: "```"

This is the icon to use if you are not using markdown format:

There's a live preview panel for exactly this reason.

Lots of people read these forums, and many of them will simply skip over a post that is difficult to read, because it's just too large an investment of their time to try and follow a wall of badly formatted text.
If your goal is to get an answer to your questions, it's in your interest to make it as easy to read and understand as possible.

@aravindputrevu Thank you for the advice. I have edited the post. Would you be able to suggest the right ES configuration to make sure we won't run into this issue again?

Why don't you use a tool like Cerebro? It gives you better system visibility and monitoring, and makes it easier to deal with issues. Also, please check the watermark levels. If the flood level is reached, ES will hang or crash.
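
The levels in use can be seen with GET _cluster/settings if they were set explicitly (the logs above report a 100gb low and 50gb high watermark on this cluster), and they can be adjusted like this; the flood-stage value below is only an illustration:

PUT _cluster/settings
{
    "transient" : {
        "cluster.routing.allocation.disk.watermark.low" : "100gb",
        "cluster.routing.allocation.disk.watermark.high" : "50gb",
        "cluster.routing.allocation.disk.watermark.flood_stage" : "25gb"
    }
}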

This is simply false. Elasticsearch stops accepting writes when it hits the flood-stage watermark, but it neither hangs nor crashes.
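
As an aside, if an index does get blocked by the flood-stage watermark, on 6.x the index.blocks.read_only_allow_delete block is not removed automatically; after freeing disk space it has to be cleared manually, for example:

PUT /logstash-*/_settings
{
    "index.blocks.read_only_allow_delete" : null
}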

The only error you've shared has been to do with your disks being too full, and @Christian_Dahlqvist linked you to a guide to help you reduce the space needed in their post above. You can't be sure not to run into this issue again, since it depends on how much data you're trying to store. You can at least follow the guide and monitor your cluster to make sure you're not overfilling it.
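
With daily logstash-* indices, the usual way to keep disk usage under control is to delete the oldest indices once they are no longer needed, either manually or with a tool such as Curator; the index name below is only an example:

DELETE /logstash-2019.05.01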


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.