Kafka Integration Installation Error - "Limit of total dimension fields [16] has been exceeded"

Hi Everyone,

I am currently trying to install the Kafka integration via the Fleet UI in Kibana, but I've encountered an issue that I hope someone can shed some light on.

The error message I'm receiving is as follows:

Error installing kafka 1.7.0: illegal_argument_exception: [illegal_argument_exception] Reason: Limit of total dimension fields [16] has been exceeded.

In an attempt to resolve this issue, I checked the index.mapping.dimension_fields.limit setting and found that the limit is indeed set to 16, as highlighted in the error message. However, I was unable to find a way to update or increase this limit.
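For reference, this is roughly how I checked the effective value (the metrics-* pattern is just an example, and include_defaults is needed because the limit is usually not set explicitly):

GET metrics-*/_settings/index.mapping.dimension_fields.limit?include_defaults=true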

Elastic Stack Version: 8.5.4

Regards,

Hello,
I've never used the Kafka integration before, but you can update this setting on your index.

Just follow the examples in the docs. Keep in mind that Elasticsearch has defaults for settings like these for a reason; increasing the limit might reduce performance.
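For example, something like this (the index name is a placeholder; depending on the version this mapping limit may only take effect on new indices, so setting it in the matching index or component template can be safer):

PUT my-metrics-index/_settings
{
  "index": {
    "mapping": {
      "dimension_fields": {
        "limit": 21
      }
    }
  }
}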

Regards.

Thanks, coezdemir, for the help. Here is the correct doc link: Time series index settings | Elasticsearch Guide [8.5] | Elastic

Thank you all.

I have updated the 'index.mapping.dimension_fields.limit' setting in the '.fleet_globals-1' component template, since all Fleet-managed index templates inherit settings from this template. However, despite these changes, I am still encountering an error when attempting to install the Kafka integration.

Error installing kafka 1.7.0: illegal_argument_exception: [illegal_argument_exception] Reason: Limit of total dimension fields [16] has been exceeded

Evidently, the new limit (set to 21) doesn't seem to be recognized.

In an attempt to resolve this, I also tried applying the same change to the @custom component templates for the Kafka integration, but unfortunately the error persists.
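For reference, this is roughly what I tried per data stream (metrics-kafka.broker is just one example; I repeated it for the other Kafka @custom component templates):

PUT _component_template/metrics-kafka.broker@custom
{
  "template": {
    "settings": {
      "index": {
        "mapping": {
          "dimension_fields": {
            "limit": 21
          }
        }
      }
    }
  }
}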

Could you provide guidance on how to globally update this setting for all new indices in the cluster? Your expertise and help would be greatly appreciated.

Could you share the request you used to update .fleet_globals-1, and the current settings when you query it?

I tried to set it myself and it seems to work.

PUT _component_template/.fleet_globals-1
{
  "template": {
    "settings": {
      "index": {
        "mapping": {
          "dimension_fields": {
            "limit": 21
          }
        }
      }
    }
  }
}

I updated the setting using the Index Management UI in Kibana. Kindly find the current settings for .fleet_globals-1:

GET _component_template/.fleet_globals-1
{
  "component_templates": [
    {
      "name": ".fleet_globals-1",
      "component_template": {
        "template": {
          "settings": {
            "index": {
              "mapping": {
                "dimension_fields": {
                  "limit": "21"
                }
              }
            }
          },
          "mappings": {
            "_meta": {
              "managed_by": "fleet",
              "managed": true
            },
            "dynamic_templates": [
              {
                "strings_as_keyword": {
                  "mapping": {
                    "ignore_above": 1024,
                    "type": "keyword"
                  },
                  "match_mapping_type": "string"
                }
              }
            ],
            "date_detection": false
          }
        },
        "_meta": {
          "managed_by": "fleet",
          "managed": true
        }
      }
    }
  ]
}

Okay, this looks good.
Could you share the full error message from the Kibana logs when you try to install the Kafka package?

Kindly find the error logs below:

[2023-07-27T11:09:54.138+00:00][WARN ][plugins.fleet] Failure to install package [kafka]: [ResponseError: illegal_argument_exception: [illegal_argument_exception] Reason: Limit of total dimension fields [16] has been exceeded]
[2023-07-27T11:09:54.139+00:00][ERROR][plugins.fleet] uninstalling kafka-1.7.0 after error installing: [ResponseError: illegal_argument_exception: [illegal_argument_exception] Reason: Limit of total dimension fields [16] has been exceeded]
[2023-07-27T11:09:55.299+00:00][ERROR][plugins.fleet] Error: error deleting component template metrics-kafka.broker@package
    at deleteComponentTemplate (/usr/share/kibana/x-pack/plugins/fleet/server/services/epm/packages/remove.js:211:13)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async Promise.all (index 0)
    at deleteAssets (/usr/share/kibana/x-pack/plugins/fleet/server/services/epm/packages/remove.js:177:5)
    at removeInstallation (/usr/share/kibana/x-pack/plugins/fleet/server/services/epm/packages/remove.js:67:3)
    at handleInstallPackageFailure (/usr/share/kibana/x-pack/plugins/fleet/server/services/epm/packages/install.js:203:7)
    at /usr/share/kibana/x-pack/plugins/fleet/server/services/epm/packages/install.js:389:7
    at installPackageFromRegistry (/usr/share/kibana/x-pack/plugins/fleet/server/services/epm/packages/install.js:358:12)
    at installPackage (/usr/share/kibana/x-pack/plugins/fleet/server/services/epm/packages/install.js:577:22)
    at ensureInstalledPackage (/usr/share/kibana/x-pack/plugins/fleet/server/services/epm/packages/install.js:138:25)
    at PackagePolicyClientWithAuthz.create (/usr/share/kibana/x-pack/plugins/fleet/server/services/package_policy.js:115:9)
    at createPackagePolicyHandler (/usr/share/kibana/x-pack/plugins/fleet/server/routes/package_policy/handlers.js:241:27)
    at Router.handle (/usr/share/kibana/node_modules/@kbn/core-http-router-server-internal/target_node/src/router.js:163:30)
    at handler (/usr/share/kibana/node_modules/@kbn/core-http-router-server-internal/target_node/src/router.js:124:50)
    at exports.Manager.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/toolkit.js:60:28)
    at Object.internals.handler (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:46:20)
    at exports.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:31:20)
    at Request._lifecycle (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:371:32)
    at Request._execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:281:9)
[2023-07-27T11:09:55.719+00:00][ERROR][plugins.fleet] failed to uninstall or rollback package after installation error Error: Saved object [epm-packages/kafka] not found

Unfortunately the logs don't have more information; this error happens when Fleet tries to roll back the Kafka installation by deleting the installed component templates.

From what I saw in my local testing, the stream metrics-kafka.consumergroup has more than 16 dimension fields.
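If you want to check this yourself, one option (assuming the metrics-kafka.consumergroup@package component template is present, for example on a cluster where the package installed successfully) is to fetch it and count the dimension fields in its mappings:

GET _component_template/metrics-kafka.consumergroup@package
# count the occurrences of "time_series_dimension": true in the returned mappings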

One more thing to try is to uninstall Kafka completely and then install it again; the .fleet_globals-1 component template setting should be enough.
Commands from the Kibana Dev Tools console:

DELETE kbn:/api/fleet/epm/packages/kafka/1.7.0
{
  "force": true
}

POST kbn:/api/fleet/epm/packages/kafka/1.7.0
{
  "force": true
}

If this doesn't work, you could try to upgrade the stack to the latest version to see if it helps.

Thank you, Júlia. Unfortunately, the solution didn't work for me. For now, I will use the Filebeat option. I will try the Elastic Agent integration again after upgrading the stack.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.