Metricbeat - Limit of total fields [1000] has been exceeded (urgent)

Hello,
Can you help me? I have a problem.
I am getting errors when trying to ingest system metrics (system module). The logs are ingested via a common Beats pipeline with the following configuration:
logstashPipeline:
  es.conf: |
    filter {
      mutate {
        add_field => { "index_name" => "%{[agent][type]}-%{[agent][version]}-na-default" }
      }
      if [agent][type] == "filebeat" {
        mutate {
          remove_field => [ "host" ]
        }
        if [kubernetes][labels][parse] == "json" and [message] {
          json {
            source => "message"
            skip_on_invalid_json => true
          }
        }
      }
      else if [agent][type] == "metricbeat" and [kafka][topic][name] =~ /(REC.EBS.IRLLISTFROMIRLTOBROKER_ERROR|REC.EBS.IRLLISTFROMIRLTOBROKER|REC.EBS.IRLLISTFROMIRLTOBROKER_RETRY)/ {
        mutate {
          add_field => { "project" => "ms-broker" }
        }
      }
    }
    output {
      elasticsearch {
        hosts => ["https://elasticsearch.x.x.x:443"]
        user => '${ES_USER_LOGSTASH}'
        password => '${ES_PASSWORD_LOGSTASH}'
        index => "%{[index_name]}"
        ssl_certificate_verification => false
        manage_template => false
        action => "create"
      }
    }

The error I get is:
Limit of total fields [1000] has been exceeded while adding new fields
I've applied the following:
PUT metricbeat*/_settings
{
  "index.mapping.total_fields.limit": 2000
}
However, the error persists.
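One thing worth checking (an assumption based on the error log, which shows a backing index named `.ds-metricbeat-8.6.2-na-default-...`): this index belongs to a data stream, and `PUT .../_settings` only changes indices that already exist. Each new backing index created at rollover takes its settings from the index template, so the limit has to be raised there too. A sketch, assuming a template with a guessed name exists (list your templates with `GET _index_template` first, and note that this request replaces the whole template, so re-submit the existing template body with only the setting changed):

```
PUT _index_template/metricbeat-8.6.2-na-default
{
  "index_patterns": ["metricbeat-8.6.2-na-default*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.mapping.total_fields.limit": 12000
    }
  }
}
```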
Here is the error log from Logstash; it is also odd that I see fields such as `os` in it:
15:27:05.131 [[main]>worker3] WARN logstash.outputs.elasticsearch - Could not index event to Elasticsearch. status: 400, action: ["create", {:_id=>nil, :_index=>"metricbeat-8.6.2-na-default", :routing=>nil}, {"kubernetes.cluster"=>{"name"=>"Big_data_DEV_TEST"}, "orchestrator.cluster"=>{"name"=>"Big_data_DEV_TEST"}, "service"=>{"type"=>"kafka", "address"=>"cp-kafka-2.cp-kafka-headless.bigdata-recette-eventrepo:9092"}, "ecs"=>{"version"=>"8.0.0"}, "kubernetes"=>{"pod"=>{"uid"=>"72600eda-1e88-412b-a837-ad7d62005e41", "ip"=>"10.20.177.14", "name"=>"cp-kafka-2"}, "namespace_uid"=>"8361590a-04b7-46f7-b7a1-b787e7b8f0d0", "namespace_labels"=>{"information-system"=>"bigdata", "application"=>"eventrepo", "environment"=>"recette", "kubernetes_io/metadata_name"=>"bigdata-recette-eventrepo"}, "labels"=>{"app"=>"cp-kafka", "release"=>"rec-cp-kafka", "controller-revision-hash"=>"cp-kafka-6f8d6d6f84", "statefulset_kubernetes_io/pod-name"=>"cp-kafka-2"}, "node"=>{"labels"=>{"node_kubernetes_io/node"=>"", "kubernetes_io/role"=>"node", "kubernetes_io/hostname"=>"karbon-big-data-dev-test-72081b-reis-worker-k8s-worker-0", "kubernetes_io/arch"=>"amd64", "beta_kubernetes_io/os"=>"linux", "kubernetes_io/os"=>"linux", "beta_kubernetes_io/arch"=>"amd64"}, "uid"=>"42795eda-ca7e-4340-8cd1-fdf09d81c47c", "name"=>"karbon-big-data-dev-test-72081b-reis-worker-k8s-worker-0", "hostname"=>"karbon-big-data-dev-test-72081b-reis-worker-k8s-worker-0"}, "statefulset"=>{"name"=>"cp-kafka"}, "container"=>{"name"=>"cp-kafka-broker"}, "namespace"=>"bigdata-recette-eventrepo"}, "@timestamp"=>2023-08-28T12:56:16.786Z, "metricset"=>{"period"=>10000, "name"=>"partition"}, "host"=>{"name"=>"karbon-big-data-dev-test-72081b-reis-worker-k8s-worker-0", "containerized"=>true, "os"=>{"type"=>"linux", "codename"=>"focal", "version"=>"20.04.5 LTS (Focal Fossa)", "family"=>"debian", "kernel"=>"3.10.0-1160.71.1.el7.x86_64", "name"=>"Ubuntu", "platform"=>"ubuntu"}, "architecture"=>"x86_64", "mac"=>["50-6B-8D-81-88-24", 
"EE-EE-EE-EE-EE-EE"], "ip"=>["172.28.101.188", "fe80::526b:8dff:fe81:8824", "fe80::ecee:eeff:feee:eeee", "fe80::ecee:eeff:feee:eeee", "fe80::ecee:eeff:feee:eeee", "fe80::ecee:eeff:feee:eeee", "fe80::ecee:eeff:feee:eeee", "fe80::ecee:eeff:feee:eeee", "fe80::ecee:eeff:feee:eeee"], "hostname"=>"karbon-big-data-dev-test-72081b-reis-worker-k8s-worker-0"}, "application"=>{"env"=>"%{[kubernetes.labels.env]:datalake-dev-recette}", "name"=>"%{[kubernetes.labels.application]:na}"}, "container"=>{"runtime"=>"containerd", "image"=>{"name"=>"confluentinc/cp-kafka:5.5.1"}, "id"=>"91687f20a9b0407f826ba091d36c76beea711e3691a321e3a761fd351a1bdc6b"}, "agent"=>{"type"=>"metricbeat", "version"=>"8.6.2", "ephemeral_id"=>"937ac831-01ac-4d4a-9df4-6c3388a33df0", "id"=>"bc942d07-d0ed-4e54-a273-9c4fa770d9b7", "name"=>"karbon-big-data-dev-test-72081b-reis-worker-k8s-worker-0"}, "index_name"=>"metricbeat-8.6.2-na-default", "kafka"=>{"broker"=>{"address"=>"cp-kafka-2.cp-kafka-headless.bigdata-recette-eventrepo:9092", "id"=>2}, "topic"=>{"name"=>"REC.EBS.IRLLISTFROMIRLTOBROKER_ERROR"}, "partition"=>{"offset"=>{"newest"=>3168, "oldest"=>3168}, "topic_broker_id"=>"0-REC.EBS.IRLLISTFROMIRLTOBROKER_ERROR-2", "id"=>0, "topic_id"=>"0-REC.EBS.IRLLISTFROMIRLTOBROKER_ERROR", "partition"=>{"leader"=>2, "replica"=>2, "is_leader"=>true, "insync_replica"=>true}}}, "tags"=>["beats_input_raw_event"], "@version"=>"1", "event"=>{"duration"=>294896743, "dataset"=>"kafka.partition", "module"=>"kafka"}, "project"=>"ms-broker"}], response: {"create"=>{"_index"=>".ds-metricbeat-8.6.2-na-default-2023.08.28-000464", "_id"=>"cgjBPIoBZiO6A_5DcaCG", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"Limit of total fields [10000] has been exceeded while adding new fields [1]"}}}}

The error actually says 10000, not 1000, which means you have a very large number of fields.

You can set the field limit higher, but you really need to understand why you have a "field explosion" (so many fields) in the first place...

Yes, at first I tried raising the limit to 2000, then 10000, but I get the same problem and I don't know why!

You need to look at the mapping and figure out why you have so many fields.

If you run this in Kibana Dev Tools:

GET metricbeat-8.6.2-na-default

You will be able to see all the fields
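To get a rough count of how many fields the mapping actually contains, you can fetch `GET metricbeat-8.6.2-na-default/_mapping` and count the leaves of the `properties` tree. A minimal sketch in Python (the sample mapping below is invented for illustration; Elasticsearch's own accounting also includes runtime fields and field aliases, so treat this as an approximation):

```python
def count_fields(properties):
    """Approximate the number of mapped fields in a 'properties' dict."""
    total = 0
    for field in properties.values():
        if "properties" in field:
            # object fields count toward the limit too, plus their children
            total += 1 + count_fields(field["properties"])
        else:
            total += 1
            # multi-fields (e.g. keyword sub-fields) also count
            total += len(field.get("fields", {}))
    return total

# Invented sample mapping, standing in for the 'properties' section of
# the real GET <index>/_mapping response.
sample_mapping = {
    "host": {
        "properties": {
            "name": {"type": "keyword"},
            "os": {
                "properties": {
                    "family": {
                        "type": "text",
                        "fields": {"keyword": {"type": "keyword"}},
                    },
                }
            },
        }
    },
    "message": {"type": "text"},
}

print(count_fields(sample_mapping))  # -> 6
```

Running the same function over the real mapping's `properties` will show which subtrees (for example, label maps) contribute the bulk of the fields.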

You would need to set the setting to something like 12000, not 10000.

But your main problem is so many fields....

This often happens when field names contain some sort of identifier, such as a container ID embedded as part of the name...

You need to look at the field names to figure out what is happening... perhaps there is something else writing to that index.
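One common source in Kubernetes setups like the one in the error log is dynamic map keys: every distinct key under `kubernetes.labels`, `kubernetes.namespace_labels`, or `kubernetes.node.labels` becomes its own mapped field, so a cluster with many pods and label sets grows the mapping without bound. If the mapping shows that pattern, a sketch of a fix in the existing filter block, assuming those label maps are not needed downstream:

```
filter {
  # Sketch (not from the original thread): drop label maps whose keys
  # vary per pod/node, since each distinct key becomes a new mapped field.
  mutate {
    remove_field => [
      "[kubernetes][labels]",
      "[kubernetes][namespace_labels]",
      "[kubernetes][node][labels]"
    ]
  }
}
```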

Something looks odd about this as well, but I don't think that is really the issue

You might want to read this thread, which started off about codecs, but ended up about a mapping explosion.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.