Daily indices not created

Hi Team,

On my single test server with 8 GB of RAM (1955m allocated to the JVM) running Elasticsearch v7.4, I have 12 application indices plus a few system indices (.monitoring-es-7-2021.08.02, .monitoring-logstash-7-2021.08.02, .monitoring-kibana-7-2021.08.02) getting created daily. So on average, Elasticsearch creates about 15 indices per day.

Today I can see that only two indices were created:

curl --silent -u elastic:xxxxx 'http://127.0.0.1:9200/_cat/indices?v' | grep '2021.08.03'
yellow open   metricbeat-7.4.0-2021.08.03                KMJbbJMHQ22EM5Hfw   1   1     110657            0     73.9mb         73.9mb
green  open   .monitoring-kibana-7-2021.08.03            98iEmlw8GAm2rj-xw   1   0          3            0      1.1mb          1.1mb

I think the reason is the following, which I found while searching through the logs.

Elasticsearch logs:

[2021-08-03T12:14:15,394][WARN ][o.e.x.m.e.l.LocalExporter] [elasticsearch_1] unexpected error while indexing monitoring document org.elasticsearch.xpack.monitoring.exporter.ExportException: org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [1] total shards, but this cluster currently has [1000]/[1000] maximum shards open;

Logstash logs for an application index and the filebeat index:

```
[2021-08-03T05:18:05,246][WARN ][logstash.outputs.elasticsearch][main] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"ping_server-2021.08.03", :_type=>"_doc", :routing=>nil}, #LogStash::Event:0x44b98479], :response=>{"index"=>{"_index"=>"ping_server-2021.08.03", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"validation_exception", "reason"=>"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}}}}

[2021-08-03T05:17:38,230][WARN ][logstash.outputs.elasticsearch][main] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-7.4.0-2021.08.03", :_type=>"_doc", :routing=>nil}, #LogStash::Event:0x1e2c70a8], :response=>{"index"=>{"_index"=>"filebeat-7.4.0-2021.08.03", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"validation_exception", "reason"=>"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}}}}
```

Adding the active and unassigned shard counts from _cluster/health totals 1000:

"active_primary_shards" : 512,
  "active_shards" : 512,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 488,
  "delayed_unassigned_shards" : 0,
  "active_shards_percent_as_number" : 51.2

If I check with the command below, I see that all the unassigned shards are replica shards:

curl --silent -XGET -u elastic:xxxx http://localhost:9200/_cat/shards | grep 'UNASSIGNED'

...
dev_app_server-2021.07.10           0 r UNASSIGNED                            
apm-7.4.0-span-000028               0 r UNASSIGNED                                                      
ping_server-2021.07.02              0 r UNASSIGNED                            
api_app_server-2021.07.17           0 r UNASSIGNED                            
consent_app_server-2021.07.15       0 r UNASSIGNED   
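
The _cluster/allocation/explain API could confirm why a given replica stays unassigned (a sketch against one of the indices above; on a single node, the expected reason is that a replica cannot be allocated to the same node as its primary):

```
curl --silent -u elastic:xxxx 'http://localhost:9200/_cluster/allocation/explain?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"index": "ping_server-2021.07.02", "shard": 0, "primary": false}'
```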

Q. For now, can I safely delete the unassigned shards to free up some shards, since it's a single-node cluster?

Q. Can I change the settings from allocating 2 shards per index (1 primary and 1 replica) to 1 primary shard only, since it's a single server, without taking the cluster offline? See the sketch below.
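
A sketch of what I have in mind, assuming the dynamic number_of_replicas setting can be applied to all existing indices via _all (new indices would still need their index templates changed as well):

```
curl --silent -X PUT -u elastic:xxxx 'http://localhost:9200/_all/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0}}'
```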

Q. If I have to keep one year of indices, is the calculation below correct?

15 indices daily with one primary shard each * 365 days = 5475 total shards (or say 6000, rounding up)

Q. Can I set 6000 shards as the shard limit for this node so that I never face this shard issue again? See the sketch below.
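
A sketch of the change I mean, assuming cluster.max_shards_per_node is the setting behind the [1000]/[1000] error above:

```
curl --silent -X PUT -u elastic:xxxx 'http://localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.max_shards_per_node": 6000}}'
```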

Thanks,

Hi Team,

Can someone please reply?

Thanks,

What is the output from the _cluster/stats?pretty&human API?

We will, but please have patience :slight_smile: There are no SLAs on these forums.

That sounds like a very bad idea as you already have too many shards in the cluster. Please read this blog post for some practical guidance. I would recommend you switch to using monthly indices in order to reduce the shard count if you are keeping data that long.

Hi @warkolm, thanks for your reply. Sure.

Here is the output:

{
  "_nodes" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "cluster_name" : "elastic",
  "cluster_uuid" : "W0ZPho4SZ4qng",
  "timestamp" : 1628047609591,
  "status" : "yellow",
  "indices" : {
    "count" : 511,
    "shards" : {
      "total" : 511,
      "primaries" : 511,
      "replication" : 0.0,
      "index" : {
        "shards" : {
          "min" : 1,
          "max" : 1,
          "avg" : 1.0
        },
        "primaries" : {
          "min" : 1,
          "max" : 1,
          "avg" : 1.0
        },
        "replication" : {
          "min" : 0.0,
          "max" : 0.0,
          "avg" : 0.0
        }
      }
    },
    "docs" : {
      "count" : 59629959,
      "deleted" : 6270181
    },
    "store" : {
      "size" : "22gb",
      "size_in_bytes" : 23697234465
    },
    "fielddata" : {
      "memory_size" : "212.2kb",
      "memory_size_in_bytes" : 217296,
      "evictions" : 0
    },
    "query_cache" : {
      "memory_size" : "2.8mb",
      "memory_size_in_bytes" : 3000910,
      "total_count" : 3542060,
      "hit_count" : 67088,
      "miss_count" : 3474972,
      "cache_size" : 884,
      "cache_count" : 8562,
      "evictions" : 7678
    },
    "completion" : {
      "size" : "0b",
      "size_in_bytes" : 0
    },
    "segments" : {
      "count" : 3060,
      "memory" : "94.8mb",
      "memory_in_bytes" : 99447725,
      "terms_memory" : "63.5mb",
      "terms_memory_in_bytes" : 66622780,
      "stored_fields_memory" : "8.5mb",
      "stored_fields_memory_in_bytes" : 9000424,
      "term_vectors_memory" : "0b",
      "term_vectors_memory_in_bytes" : 0,
      "norms_memory" : "5.3mb",
      "norms_memory_in_bytes" : 5584448,
      "points_memory" : "6.9mb",
      "points_memory_in_bytes" : 7338065,
      "doc_values_memory" : "10.3mb",
      "doc_values_memory_in_bytes" : 10902008,
      "index_writer_memory" : "12.7mb",
      "index_writer_memory_in_bytes" : 13416120,
      "version_map_memory" : "0b",
      "version_map_memory_in_bytes" : 0,
      "fixed_bit_set" : "5.1mb",
      "fixed_bit_set_memory_in_bytes" : 5400104,
      "max_unsafe_auto_id_timestamp" : 1627879837696,
      "file_sizes" : { }
    }
  },
  "nodes" : {
    "count" : {
      "total" : 1,
      "coordinating_only" : 0,
      "data" : 1,
      "ingest" : 1,
      "master" : 1,
      "ml" : 1,
      "voting_only" : 0
    },
    "versions" : [
      "7.4.0"
    ],
    "os" : {
      "available_processors" : 2,
      "allocated_processors" : 2,
      "names" : [
        {
          "name" : "Linux",
          "count" : 1
        }
      ],
      "pretty_names" : [
        {
          "pretty_name" : "CentOS Linux 7 (Core)",
          "count" : 1
        }
      ],
      "mem" : {
        "total" : "7.6gb",
        "total_in_bytes" : 8201400320,
        "free" : "139.6mb",
        "free_in_bytes" : 146436096,
        "used" : "7.5gb",
        "used_in_bytes" : 8054964224,
        "free_percent" : 2,
        "used_percent" : 98
      }
    },
    "process" : {
      "cpu" : {
        "percent" : 29
      },
      "open_file_descriptors" : {
        "min" : 3190,
        "max" : 3190,
        "avg" : 3190
      }
    },
    "jvm" : {
      "max_uptime" : "32.4d",
      "max_uptime_in_millis" : 2807599834,
      "versions" : [
        {
          "version" : "13",
          "vm_name" : "OpenJDK 64-Bit Server VM",
          "vm_version" : "13+33",
          "vm_vendor" : "AdoptOpenJDK",
          "bundled_jdk" : true,
          "using_bundled_jdk" : true,
          "count" : 1
        }
      ],
      "mem" : {
        "heap_used" : "1.1gb",
        "heap_used_in_bytes" : 1278490720,
        "heap_max" : "1.8gb",
        "heap_max_in_bytes" : 2033582080
      },
      "threads" : 169
    },
    "fs" : {
      "total" : "63.9gb",
      "total_in_bytes" : 68707921920,
      "free" : "14.3gb",
      "free_in_bytes" : 15459770368,
      "available" : "14.3gb",
      "available_in_bytes" : 15459770368
    },
    "plugins" : [ ],
    "network_types" : {
      "transport_types" : {
        "security4" : 1
      },
      "http_types" : {
        "security4" : 1
      }
    },
    "discovery_types" : {
      "zen" : 1
    },
    "packaging_types" : [
      {
        "flavor" : "default",
        "type" : "rpm",
        "count" : 1
      }
    ]
  }
}

Hi @Christian_Dahlqvist, thanks for your reply.

If I change index => "%{type}-%{+YYYY.MM.dd}" to index => "%{type}-%{+YYYY.MM}" in /etc/logstash/conf.d/logstash.conf, will it then create indices on a monthly basis instead of daily, or do I have to use ILM here?

Current config in logstash.conf for the ping_server index:

if [log_type] == "ping_server" {
  elasticsearch {
    hosts    => ['http://10.1.1.1:9200']
    index    => "%{type}-%{+YYYY.MM.dd}"
    user     => elastic
    password => xxxx
  }
}
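
And this is the monthly variant I am considering (a sketch; it assumes the %{+YYYY.MM} sprintf date pattern, which Logstash resolves from each event's @timestamp):

```
if [log_type] == "ping_server" {
  elasticsearch {
    hosts    => ['http://10.1.1.1:9200']
    index    => "%{type}-%{+YYYY.MM}"   # monthly index name instead of daily
    user     => elastic
    password => xxxx
  }
}
```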

Thanks,

You've got 500+ shards on a single node with only 22GB of data in those shards. You really need to reduce your shard count.
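
For the existing daily indices, one way to consolidate is the _reindex API (a sketch, assuming a month of dailies can be merged into a single monthly index and the originals deleted afterwards):

```
curl --silent -X POST -u elastic:xxxx 'http://localhost:9200/_reindex' \
  -H 'Content-Type: application/json' \
  -d '{"source": {"index": "ping_server-2021.07.*"}, "dest": {"index": "ping_server-2021.07"}}'
```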

That's EOL so please update ASAP.

Hi @Christian_Dahlqvist, can you please confirm whether either of the two ways I mentioned (changing the Logstash index pattern, or using ILM) is the right way to change from daily indices to something longer?

The two ways you mentioned seem fine.
