Logstash not able to consume Kafka input

I'm playing a bit with the latest versions of Logstash and Kafka, but I can't get the Kafka input to work.

Here is a brief summary of my setup:

  1. I'm using Docker Compose with apache/kafka:3.9.0 and logstash:8.16.1 Docker images.
  2. The Kafka broker is reachable from Logstash; indeed, the Kafka output works as expected with the generator and http inputs (see the reachability check sketched below).
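
To rule out basic connectivity, a topic listing from a throwaway container on the same Compose network works as a quick check (a sketch, assuming the Kafka CLI scripts ship under /opt/kafka/bin in the apache/kafka image):

# List topics via the INTERNAL listener from inside logstash-network
docker run --rm --network logstash-network apache/kafka:3.9.0 \
  /opt/kafka/bin/kafka-topics.sh \
  --bootstrap-server kafka:19092 \
  --list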

This is my logstash.conf:

input {
  generator {
    count => 10
    message => "Hello, World!"
  }
  http {
    host => "0.0.0.0"
    port => 8080
    codec => json {
      target => "[data]"
    }
  }
  kafka {
    bootstrap_servers => "kafka:19092"
    codec => "json"
    topics => ["logstash-input"]
    group_id => "logstash_group"
    client_id => "logstash_consumer"
    auto_offset_reset => "earliest"
  }
}

output {
  stdout {
    codec => "rubydebug"
  }
  kafka {
    bootstrap_servers => "kafka:19092"
    codec => "json"
    topic_id => "logstash-output"
  }
}
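
For reference, this is the kind of test message I publish into logstash-input so the kafka input has something to consume; since that input uses the json codec, the message value should be valid JSON (a sketch, under the same /opt/kafka/bin assumption as above):

# Pipe one JSON message into the logstash-input topic
echo '{"message":"hello from kafka"}' | docker exec -i kafka \
  /opt/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server kafka:19092 \
  --topic logstash-input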

This is the docker-compose.yaml:

services:
  kafka:
    image: "apache/kafka:3.9.0"
    hostname: "kafka"
    container_name: "kafka"
    restart: "always"
    labels:
      docker-hub: "https://hub.docker.com/r/apache/kafka"
      github: "https://github.com/apache/kafka"
      readme: "https://github.com/apache/kafka/blob/trunk/docker/examples/README.md#using-environment-variables"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: "broker,controller"
      KAFKA_LOG_DIRS: "/var/lib/kafka/data"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "${KAFKA_LISTENER_SECURITY_PROTOCOL_MAP:-EXTERNAL:PLAINTEXT,INTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT}"
      KAFKA_ADVERTISED_LISTENERS: "${KAFKA_ADVERTISED_LISTENERS:-EXTERNAL://localhost:9092,INTERNAL://kafka:19092}"
      KAFKA_LISTENERS: "${KAFKA_LISTENERS:-EXTERNAL://0.0.0.0:9092,INTERNAL://kafka:19092,CONTROLLER://kafka:29092}"
      KAFKA_INTER_BROKER_LISTENER_NAME: "${KAFKA_INTER_BROKER_LISTENER_NAME:-INTERNAL}"
      KAFKA_CONTROLLER_LISTENER_NAMES: "${KAFKA_CONTROLLER_LISTENER_NAMES:-CONTROLLER}"
      KAFKA_CONTROLLER_QUORUM_VOTERS: "${KAFKA_CONTROLLER_QUORUM_VOTERS:-1@kafka:29092}"
    networks:
      - logstash-network
    ports:
      - "9092:9092"
    volumes:
      - kafka-secrets:/etc/kafka/secrets
      - kafka-data:/var/lib/kafka/data
      - kafka-config:/mnt/shared/config

  logstash:
    image: "logstash:8.16.1"
    hostname: "logstash"
    container_name: "logstash"
    restart: "always"
    labels:
      docker-hub: "https://hub.docker.com/_/logstash"
      github: "https://github.com/elastic/logstash"
      readme: "https://www.elastic.co/guide/en/logstash/current/docker-config.html"
    environment:
      LOGSTASH_LOG_LEVEL: "debug"
      MONITORING_ENABLED: "${MONITORING_ENABLED:-false}"
      XPACK_MONITORING_ENABLED: "${XPACK_MONITORING_ENABLED:-false}"
      PIPELINE_ECS__COMPATIBILITY: "${PIPELINE_ECS__COMPATIBILITY:-disabled}"
    depends_on:
      - kafka
    networks:
      - logstash-network
    ports:
      - "9600:9600"
      - "8080:8080"
    volumes:
      - ./volumes/usr/share/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro

volumes:
  kafka-secrets:
    name: "kafka-secrets"
  kafka-data:
    name: "kafka-data"
  kafka-config:
    name: "kafka-config"

networks:
  logstash-network:
    name: "logstash-network"

The Logstash logs are as follows:

docker compose logs -f logstash
logstash  | 2024/12/11 09:29:52 Setting 'xpack.monitoring.enabled' from environment.
logstash  | 2024/12/11 09:29:52 Setting 'pipeline.ecs_compatibility' from environment.
logstash  | Using bundled JDK: /usr/share/logstash/jdk
logstash  | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash  | [2024-12-11T09:30:28,196][WARN ][deprecation.logstash.settings] The setting `http.host` is a deprecated alias for `api.http.host` and will be removed in a future release of Logstash. Please use api.http.host instead
logstash  | [2024-12-11T09:30:28,217][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
logstash  | [2024-12-11T09:30:28,218][WARN ][deprecation.logstash.runner] 'pipeline.buffer.type' setting is not explicitly defined.Before moving to 9.x set it to 'heap' and tune heap size upward, or set it to 'direct' to maintain existing behavior.
logstash  | [2024-12-11T09:30:28,220][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.16.1", "jruby.version"=>"jruby 9.4.9.0 (3.1.4) 2024-11-04 547c6b150e OpenJDK 64-Bit Server VM 21.0.5+11-LTS on 21.0.5+11-LTS +indy +jit [x86_64-linux]"}
logstash  | [2024-12-11T09:30:28,225][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
logstash  | [2024-12-11T09:30:28,230][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
logstash  | [2024-12-11T09:30:28,230][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
logstash  | [2024-12-11T09:30:28,245][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
logstash  | [2024-12-11T09:30:28,248][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
logstash  | [2024-12-11T09:30:28,515][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"1c5a6e5f-efa0-42eb-8673-a3d6559c1dcc", :path=>"/usr/share/logstash/data/uuid"}
logstash  | [2024-12-11T09:30:29,216][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
logstash  | [2024-12-11T09:30:29,830][INFO ][org.reflections.Reflections] Reflections took 186 ms to scan 1 urls, producing 149 keys and 523 values
logstash  | [2024-12-11T09:30:30,570][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: disabled` setting. All plugins in this pipeline will default to `ecs_compatibility => disabled` unless explicitly configured otherwise.
logstash  | [2024-12-11T09:30:30,671][INFO ][org.apache.kafka.clients.producer.ProducerConfig][main] Idempotence will be disabled because acks is set to 1, not set to 'all'.
logstash  | [2024-12-11T09:30:30,673][INFO ][org.apache.kafka.clients.producer.ProducerConfig][main] ProducerConfig values: 
logstash  |     acks = 1
logstash  |     auto.include.jmx.reporter = true
logstash  |     batch.size = 16384
logstash  |     bootstrap.servers = [kafka:19092]
logstash  |     buffer.memory = 33554432
logstash  |     client.dns.lookup = use_all_dns_ips
logstash  |     client.id = logstash
logstash  |     compression.type = none
logstash  |     connections.max.idle.ms = 540000
logstash  |     delivery.timeout.ms = 120000
logstash  |     enable.idempotence = false
logstash  |     interceptor.classes = []
logstash  |     key.serializer = class org.apache.kafka.common.serialization.StringSerializer
logstash  |     linger.ms = 0
logstash  |     max.block.ms = 60000
logstash  |     max.in.flight.requests.per.connection = 5
logstash  |     max.request.size = 1048576
logstash  |     metadata.max.age.ms = 300000
logstash  |     metadata.max.idle.ms = 300000
logstash  |     metric.reporters = []
logstash  |     metrics.num.samples = 2
logstash  |     metrics.recording.level = INFO
logstash  |     metrics.sample.window.ms = 30000
logstash  |     partitioner.adaptive.partitioning.enable = true
logstash  |     partitioner.availability.timeout.ms = 0
logstash  |     partitioner.class = null
logstash  |     partitioner.ignore.keys = false
logstash  |     receive.buffer.bytes = 32768
logstash  |     reconnect.backoff.max.ms = 50
logstash  |     reconnect.backoff.ms = 50
logstash  |     request.timeout.ms = 40000
logstash  |     retries = 2147483647
logstash  |     retry.backoff.ms = 100
logstash  |     sasl.client.callback.handler.class = null
logstash  |     sasl.jaas.config = null
logstash  |     sasl.kerberos.kinit.cmd = /usr/bin/kinit
logstash  |     sasl.kerberos.min.time.before.relogin = 60000
logstash  |     sasl.kerberos.service.name = null
logstash  |     sasl.kerberos.ticket.renew.jitter = 0.05
logstash  |     sasl.kerberos.ticket.renew.window.factor = 0.8
logstash  |     sasl.login.callback.handler.class = null
logstash  |     sasl.login.class = null
logstash  |     sasl.login.connect.timeout.ms = null
logstash  |     sasl.login.read.timeout.ms = null
logstash  |     sasl.login.refresh.buffer.seconds = 300
logstash  |     sasl.login.refresh.min.period.seconds = 60
logstash  |     sasl.login.refresh.window.factor = 0.8
logstash  |     sasl.login.refresh.window.jitter = 0.05
logstash  |     sasl.login.retry.backoff.max.ms = 10000
logstash  |     sasl.login.retry.backoff.ms = 100
logstash  |     sasl.mechanism = GSSAPI
logstash  |     sasl.oauthbearer.clock.skew.seconds = 30
logstash  |     sasl.oauthbearer.expected.audience = null
logstash  |     sasl.oauthbearer.expected.issuer = null
logstash  |     sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
logstash  |     sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
logstash  |     sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
logstash  |     sasl.oauthbearer.jwks.endpoint.url = null
logstash  |     sasl.oauthbearer.scope.claim.name = scope
logstash  |     sasl.oauthbearer.sub.claim.name = sub
logstash  |     sasl.oauthbearer.token.endpoint.url = null
logstash  |     security.protocol = PLAINTEXT
logstash  |     security.providers = null
logstash  |     send.buffer.bytes = 131072
logstash  |     socket.connection.setup.timeout.max.ms = 30000
logstash  |     socket.connection.setup.timeout.ms = 10000
logstash  |     ssl.cipher.suites = null
logstash  |     ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
logstash  |     ssl.endpoint.identification.algorithm = https
logstash  |     ssl.engine.factory.class = null
logstash  |     ssl.key.password = null
logstash  |     ssl.keymanager.algorithm = SunX509
logstash  |     ssl.keystore.certificate.chain = null
logstash  |     ssl.keystore.key = null
logstash  |     ssl.keystore.location = null
logstash  |     ssl.keystore.password = null
logstash  |     ssl.keystore.type = JKS
logstash  |     ssl.protocol = TLSv1.3
logstash  |     ssl.provider = null
logstash  |     ssl.secure.random.implementation = null
logstash  |     ssl.trustmanager.algorithm = PKIX
logstash  |     ssl.truststore.certificates = null
logstash  |     ssl.truststore.location = null
logstash  |     ssl.truststore.password = null
logstash  |     ssl.truststore.type = JKS
logstash  |     transaction.timeout.ms = 60000
logstash  |     transactional.id = null
logstash  |     value.serializer = class org.apache.kafka.common.serialization.StringSerializer
logstash  | 
logstash  | [2024-12-11T09:30:30,822][INFO ][org.apache.kafka.common.utils.AppInfoParser][main] Kafka version: 3.4.1
logstash  | [2024-12-11T09:30:30,822][INFO ][org.apache.kafka.common.utils.AppInfoParser][main] Kafka commitId: 8a516edc2755df89
logstash  | [2024-12-11T09:30:30,823][INFO ][org.apache.kafka.common.utils.AppInfoParser][main] Kafka startTimeMs: 1733909430819
logstash  | [2024-12-11T09:30:30,861][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x6ff64149 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:139 run>"}
logstash  | [2024-12-11T09:30:31,501][INFO ][org.apache.kafka.clients.Metadata][main] [Producer clientId=logstash] Cluster ID: 5L6g3nShT-eMCtK--X86sw
logstash  | [2024-12-11T09:30:31,924][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.06}
logstash  | [2024-12-11T09:30:32,059][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
logstash  | [2024-12-11T09:30:32,062][INFO ][logstash.inputs.http     ][main][f3867ee436e85d6522a570d51b2bb2627cd0d058c59194ac79cb3813ebc37db8] Starting http input listener {:address=>"0.0.0.0:8080", :ssl_enabled=>false}
logstash  | [2024-12-11T09:30:32,071][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
logstash  | [2024-12-11T09:30:32,122][INFO ][org.apache.kafka.clients.consumer.ConsumerConfig][main][1cc47ee71c50b28c8d2ef3535bea0308e2047cb3ebc72456a3d39f1ebf27b3e3] ConsumerConfig values: 
logstash  |     allow.auto.create.topics = true
logstash  |     auto.commit.interval.ms = 5000
logstash  |     auto.include.jmx.reporter = true
logstash  |     auto.offset.reset = earliest
logstash  |     bootstrap.servers = [kafka:19092]
logstash  |     check.crcs = true
logstash  |     client.dns.lookup = use_all_dns_ips
logstash  |     client.id = logstash_consumer-0
logstash  |     client.rack = 
logstash  |     connections.max.idle.ms = 540000
logstash  |     default.api.timeout.ms = 60000
logstash  |     enable.auto.commit = true
logstash  |     exclude.internal.topics = true
logstash  |     fetch.max.bytes = 52428800
logstash  |     fetch.max.wait.ms = 500
logstash  |     fetch.min.bytes = 1
logstash  |     group.id = logstash_group
logstash  |     group.instance.id = null
logstash  |     heartbeat.interval.ms = 3000
logstash  |     interceptor.classes = []
logstash  |     internal.leave.group.on.close = true
logstash  |     internal.throw.on.fetch.stable.offset.unsupported = false
logstash  |     isolation.level = read_uncommitted
logstash  |     key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
logstash  |     max.partition.fetch.bytes = 1048576
logstash  |     max.poll.interval.ms = 300000
logstash  |     max.poll.records = 500
logstash  |     metadata.max.age.ms = 300000
logstash  |     metric.reporters = []
logstash  |     metrics.num.samples = 2
logstash  |     metrics.recording.level = INFO
logstash  |     metrics.sample.window.ms = 30000
logstash  |     partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
logstash  |     receive.buffer.bytes = 32768
logstash  |     reconnect.backoff.max.ms = 50
logstash  |     reconnect.backoff.ms = 50
logstash  |     request.timeout.ms = 40000
logstash  |     retry.backoff.ms = 100
logstash  |     sasl.client.callback.handler.class = null
logstash  |     sasl.jaas.config = null
logstash  |     sasl.kerberos.kinit.cmd = /usr/bin/kinit
logstash  |     sasl.kerberos.min.time.before.relogin = 60000
logstash  |     sasl.kerberos.service.name = null
logstash  |     sasl.kerberos.ticket.renew.jitter = 0.05
logstash  |     sasl.kerberos.ticket.renew.window.factor = 0.8
logstash  |     sasl.login.callback.handler.class = null
logstash  |     sasl.login.class = null
logstash  |     sasl.login.connect.timeout.ms = null
logstash  |     sasl.login.read.timeout.ms = null
logstash  |     sasl.login.refresh.buffer.seconds = 300
logstash  |     sasl.login.refresh.min.period.seconds = 60
logstash  |     sasl.login.refresh.window.factor = 0.8
logstash  |     sasl.login.refresh.window.jitter = 0.05
logstash  |     sasl.login.retry.backoff.max.ms = 10000
logstash  |     sasl.login.retry.backoff.ms = 100
logstash  |     sasl.mechanism = GSSAPI
logstash  |     sasl.oauthbearer.clock.skew.seconds = 30
logstash  |     sasl.oauthbearer.expected.audience = null
logstash  |     sasl.oauthbearer.expected.issuer = null
logstash  |     sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
logstash  |     sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
logstash  |     sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
logstash  |     sasl.oauthbearer.jwks.endpoint.url = null
logstash  |     sasl.oauthbearer.scope.claim.name = scope
logstash  |     sasl.oauthbearer.sub.claim.name = sub
logstash  |     sasl.oauthbearer.token.endpoint.url = null
logstash  |     security.protocol = PLAINTEXT
logstash  |     security.providers = null
logstash  |     send.buffer.bytes = 131072
logstash  |     session.timeout.ms = 10000
logstash  |     socket.connection.setup.timeout.max.ms = 30000
logstash  |     socket.connection.setup.timeout.ms = 10000
logstash  |     ssl.cipher.suites = null
logstash  |     ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
logstash  |     ssl.endpoint.identification.algorithm = https
logstash  |     ssl.engine.factory.class = null
logstash  |     ssl.key.password = null
logstash  |     ssl.keymanager.algorithm = SunX509
logstash  |     ssl.keystore.certificate.chain = null
logstash  |     ssl.keystore.key = null
logstash  |     ssl.keystore.location = null
logstash  |     ssl.keystore.password = null
logstash  |     ssl.keystore.type = JKS
logstash  |     ssl.protocol = TLSv1.3
logstash  |     ssl.provider = null
logstash  |     ssl.secure.random.implementation = null
logstash  |     ssl.trustmanager.algorithm = PKIX
logstash  |     ssl.truststore.certificates = null
logstash  |     ssl.truststore.location = null
logstash  |     ssl.truststore.password = null
logstash  |     ssl.truststore.type = JKS
logstash  |     value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
logstash  | 
logstash  | {
logstash  |     "@timestamp" => 2024-12-11T09:30:32.104005487Z,
logstash  |        "message" => "Hello, World!",
logstash  |       "@version" => "1",
logstash  |       "sequence" => 2,
logstash  |           "host" => "logstash"
logstash  | }
logstash  | {
logstash  |     "@timestamp" => 2024-12-11T09:30:32.084636528Z,
logstash  |        "message" => "Hello, World!",
logstash  |       "@version" => "1",
logstash  |       "sequence" => 1,
logstash  |           "host" => "logstash"
logstash  | }
logstash  | {
logstash  |     "@timestamp" => 2024-12-11T09:30:32.109880187Z,
logstash  |        "message" => "Hello, World!",
logstash  |       "@version" => "1",
logstash  |       "sequence" => 9,
logstash  |           "host" => "logstash"
logstash  | }
logstash  | {
logstash  |     "@timestamp" => 2024-12-11T09:30:32.109111260Z,
logstash  |        "message" => "Hello, World!",
logstash  |       "@version" => "1",
logstash  |       "sequence" => 8,
logstash  |           "host" => "logstash"
logstash  | }
logstash  | {
logstash  |     "@timestamp" => 2024-12-11T09:30:32.071784691Z,
logstash  |        "message" => "Hello, World!",
logstash  |       "@version" => "1",
logstash  |       "sequence" => 0,
logstash  |           "host" => "logstash"
logstash  | }
logstash  | {
logstash  |     "@timestamp" => 2024-12-11T09:30:32.108337834Z,
logstash  |        "message" => "Hello, World!",
logstash  |       "@version" => "1",
logstash  |       "sequence" => 7,
logstash  |           "host" => "logstash"
logstash  | }
logstash  | {
logstash  |     "@timestamp" => 2024-12-11T09:30:32.106613375Z,
logstash  |        "message" => "Hello, World!",
logstash  |       "@version" => "1",
logstash  |       "sequence" => 5,
logstash  |           "host" => "logstash"
logstash  | }
logstash  | {
logstash  |     "@timestamp" => 2024-12-11T09:30:32.107566808Z,
logstash  |        "message" => "Hello, World!",
logstash  |       "@version" => "1",
logstash  |       "sequence" => 6,
logstash  |           "host" => "logstash"
logstash  | }
logstash  | {
logstash  |     "@timestamp" => 2024-12-11T09:30:32.105772247Z,
logstash  |        "message" => "Hello, World!",
logstash  |       "@version" => "1",
logstash  |       "sequence" => 4,
logstash  |           "host" => "logstash"
logstash  | }
logstash  | {
logstash  |     "@timestamp" => 2024-12-11T09:30:32.104861116Z,
logstash  |        "message" => "Hello, World!",
logstash  |       "@version" => "1",
logstash  |       "sequence" => 3,
logstash  |           "host" => "logstash"
logstash  | }
logstash  | [2024-12-11T09:30:32,372][INFO ][org.apache.kafka.common.utils.AppInfoParser][main][1cc47ee71c50b28c8d2ef3535bea0308e2047cb3ebc72456a3d39f1ebf27b3e3] Kafka version: 3.4.1
logstash  | [2024-12-11T09:30:32,373][INFO ][org.apache.kafka.common.utils.AppInfoParser][main][1cc47ee71c50b28c8d2ef3535bea0308e2047cb3ebc72456a3d39f1ebf27b3e3] Kafka commitId: 8a516edc2755df89
logstash  | [2024-12-11T09:30:32,373][INFO ][org.apache.kafka.common.utils.AppInfoParser][main][1cc47ee71c50b28c8d2ef3535bea0308e2047cb3ebc72456a3d39f1ebf27b3e3] Kafka startTimeMs: 1733909432372
logstash  | [2024-12-11T09:30:32,382][INFO ][org.apache.kafka.clients.consumer.KafkaConsumer][main][1cc47ee71c50b28c8d2ef3535bea0308e2047cb3ebc72456a3d39f1ebf27b3e3] [Consumer clientId=logstash_consumer-0, groupId=logstash_group] Subscribed to topic(s): logstash-input
logstash  | [2024-12-11T09:30:32,507][WARN ][org.apache.kafka.clients.NetworkClient][main] [Producer clientId=logstash] Error while fetching metadata with correlation id 3 : {logstash-output=UNKNOWN_TOPIC_OR_PARTITION}
logstash  | [2024-12-11T09:30:32,557][INFO ][org.apache.kafka.clients.Metadata][main][1cc47ee71c50b28c8d2ef3535bea0308e2047cb3ebc72456a3d39f1ebf27b3e3] [Consumer clientId=logstash_consumer-0, groupId=logstash_group] Resetting the last seen epoch of partition logstash-input-0 to 0 since the associated topicId changed from null to XRNQAfrtQgCFI8an4_0WTw
logstash  | [2024-12-11T09:30:32,583][INFO ][org.apache.kafka.clients.Metadata][main][1cc47ee71c50b28c8d2ef3535bea0308e2047cb3ebc72456a3d39f1ebf27b3e3] [Consumer clientId=logstash_consumer-0, groupId=logstash_group] Cluster ID: 5L6g3nShT-eMCtK--X86sw
logstash  | [2024-12-11T09:30:32,610][INFO ][org.apache.kafka.clients.Metadata][main] [Producer clientId=logstash] Resetting the last seen epoch of partition logstash-output-0 to 0 since the associated topicId changed from null to 358RbPZ-Q_ymvmg1ifPNhg
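
The consumer does subscribe to logstash-input (see the "Subscribed to topic(s)" line above), yet nothing from that topic ever reaches the stdout output. To check whether the group actually gets partitions assigned and commits offsets, one option is to describe it from the broker container (same CLI assumption as above):

# Show partition assignment, current offset, and lag for the Logstash group
docker exec -it kafka /opt/kafka/bin/kafka-consumer-groups.sh \
  --bootstrap-server kafka:19092 \
  --describe --group logstash_group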

Am I missing something in the Kafka input configuration?