Logstash not sending data to ELK

Hello all, I am struggling to get logstash set up and sending data over to ES. I am trying to have logstash act as a syslog server and forward that data to ES. I do not see any logstash-* indices or data streams. I can see data coming into the logstash container with tcpdump, but I cannot find any errors that explain what is stopping this.

logstash:8.8.0
elasticsearch:8.8.0
kibana:8.8.0

logstash.conf

input {
  beats {
    port => 5044
  }
  tcp {
    port => 5000
    type => syslog
  }
  udp{
    port => 514
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}


#######################################################
#  Send logs to Elastic
#  Create separate indexes for stats and regular logs
#  using field defined in the filter transformation
#######################################################
output {
  elasticsearch {
    hosts => ["https://es01:9200", "https://es02:9200", "https://es03:9200"]
    #data_stream => "true"
    index => "logstash-%{+YYYY.MM.dd}"
    ssl_enabled => "true"
    cacert => "/usr/share/logstash/certs/ca/ca.crt"
    ssl_certificate => "/usr/share/logstash/certs/logstash/logstash.crt"
    ssl_key => "/usr/share/logstash/certs/logstash/logstash.pkcs8.key"
    ssl_verification_mode => "none"
    user => "logstash_internal"
    password => "x-pack-test-password"
  }
}

tcpdump from the host running the containers.

21:43:21.035717 IP 192.168.1.40.55631 > elkcluster.local.syslog: SYSLOG local4.debug, length: 247
21:43:21.035783 IP 192.168.1.40.55631 > 172.20.0.5.syslog: SYSLOG local4.debug, length: 247
21:43:21.035800 IP 192.168.1.40.55631 > 172.20.0.5.syslog: SYSLOG local4.debug, length: 247
21:43:21.040403 IP 192.168.1.40.55631 > elkcluster.local.syslog: SYSLOG local4.debug, length: 169
21:43:21.040451 IP 192.168.1.40.55631 > 172.20.0.5.syslog: SYSLOG local4.debug, length: 169
21:43:21.040459 IP 192.168.1.40.55631 > 172.20.0.5.syslog: SYSLOG local4.debug, length: 169

Docker IP for logstash: "IPAddress": "172.20.0.5",

Here are the startup logs for logstash.

Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2024-03-02T04:41:26,462][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2024-03-02T04:41:26,467][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.8.0", "jruby.version"=>"jruby 9.3.10.0 (2.6.8) 2023-02-01 107b2e6697 OpenJDK 64-Bit Server VM 17.0.7+7 on 17.0.7+7 +indy +jit [x86_64-linux]"}
[2024-03-02T04:41:26,470][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2024-03-02T04:41:26,480][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2024-03-02T04:41:26,481][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2024-03-02T04:41:26,700][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"410eb19a-c450-47bc-ae14-9f2839d94872", :path=>"/usr/share/logstash/data/uuid"}
[2024-03-02T04:41:27,250][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-03-02T04:41:27,493][INFO ][org.reflections.Reflections] Reflections took 116 ms to scan 1 urls, producing 132 keys and 464 values
[2024-03-02T04:41:27,828][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-03-02T04:41:27,872][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x704cd01d@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-03-02T04:41:28,477][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.6}
[2024-03-02T04:41:28,486][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-03-02T04:41:28,497][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-03-02T04:41:28,510][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2024-03-02T04:41:28,567][INFO ][org.logstash.beats.Server][main][0710cad67e8f47667bc7612580d5b91f691dd8262a4187d9eca8cf87229d04aa] Starting server on port: 5044

Hopefully that is enough information for someone to figure this out.

Thanks!

Hi @bigelkman, welcome to the community.

Can you share your Docker compose for all of this?

Is it a single compose or separate?

Also, you did not provide enough of the logstash logs. There should be a section where it tries to connect to Elasticsearch.

Now that I look closer, I don't think it's actually running your logstash.conf, because we would also see it open the other ports.

I suspect it's just running the default conf; in other words, it's not using your logstash.conf. It's probably not mounted correctly.

So you need to share your compose.
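
In the meantime, a quick way to verify is to check what the container actually has in its pipeline directory and what the startup log reports as the pipeline source. A minimal sketch, assuming the container is named logstash:

# list whatever pipeline files the container is actually loading
docker exec logstash ls -l /usr/share/logstash/pipeline/
# check which pipeline source the running instance reported at startup
docker logs logstash 2>&1 | grep "pipeline.sources"

If the only file under /usr/share/logstash/pipeline/ is the stock logstash.conf that ships with the image, your file is not being picked up.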

Hello @stephenb, thank you for the reply. I am running multiple containers on the same host. Here is the docker compose for everything.

version: "3.2"

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    container_name: "setup"
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es02\n"\
          "    dns:\n"\
          "      - es02\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es03\n"\
          "    dns:\n"\
          "      - es03\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: kibana01\n"\
          "    dns:\n"\
          "      - kibana01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    container_name: "es01"
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - ./elasticsearch01.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    environment:
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es02:
    depends_on:
      - es01
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    container_name: "es02"
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - ./elasticsearch02.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata02:/usr/share/elasticsearch/data
    ports:
      - 9201:9200
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es03:
    depends_on:
      - es02
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    container_name: "es03"
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - ./elasticsearch03.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - esdata03:/usr/share/elasticsearch/data
    ports:
      - 9202:9200
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
      es02:
        condition: service_healthy
      es03:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    container_name: kibana01
    volumes:
      - certs:/etc/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -k -I https://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  filebeat:
    depends_on:
      kibana:
        condition: service_healthy
    image: docker.elastic.co/beats/filebeat:${STACK_VERSION}
    container_name: "filebeat"
    volumes:
      - certs:/usr/share/filebeat/certs
      - /var/run/docker.sock:/var/run/docker.sock
      - /elastic/containers:/var/lib/docker/containers
      - /var/log/messages:/var/log/messages:ro
      - ./filebeat.docker.yml:/usr/share/filebeat/filebeat.yml
      - /var/log/secure:/var/log/secure:ro
    command: filebeat -e -strict.perms=false
    #command: tail -f /dev/null
    user: root

  heartbeat:
    depends_on:
      kibana:
        condition: service_healthy
    image: docker.elastic.co/beats/heartbeat:${STACK_VERSION}
    container_name: heartbeat
    volumes:
      - certs:/usr/share/heartbeat/certs
      - ./heartbeat.docker.yml:/usr/share/heartbeat/heartbeat.yml
      - ./monitors.d:/usr/share/heartbeat/monitors.d:ro
    #command: tail -f /dev/null

  metricbeat:
    depends_on:
      kibana:
        condition: service_healthy
    image: docker.elastic.co/beats/metricbeat:${STACK_VERSION}
    container_name: "metricbeat"
    volumes:
      - certs:/usr/share/metricbeat/certs
      - /var/run/docker.sock:/var/run/docker.sock
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      - /proc:/hostfs/proc:ro
      - /:/hostfs:ro
      - ./metricbeat.docker.yml:/usr/share/metricbeat/metricbeat.yml
      - ./modules.d:/usr/share/metricbeat/modules.d
    command: metricbeat -e -strict.perms=false
    #command: tail -f /dev/null
    user: root

  logstash:
    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
    container_name: logstash
    ports:
      - 5044:5044
      - 5000:5000
      - 514:514/udp
    volumes:
      - certs:/usr/share/logstash/certs
      - ./logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash.conf:/usr/share/logstash/config/logstash.conf
      - /var/log/cron:/var/log/cron:ro
    depends_on:
      - es01
    #command: tail -f /dev/null



volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  esdata04:
    driver: local
  kibanadata:
    driver: local

I think you are right about the .conf file not being used. I ran the logstash start command from inside the container and am getting more information now.

root@589454b082d2:/usr/share/logstash# ./bin/logstash -f config/logstash.conf
Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2024-03-03T02:01:24,260][WARN ][deprecation.logstash.runner] NOTICE: Running Logstash as superuser is not recommended and won't be allowed in the future. Set 'allow_superuser' to 'false' to avoid startup errors in future releases.
[2024-03-03T02:01:24,271][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2024-03-03T02:01:24,272][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.8.0", "jruby.version"=>"jruby 9.3.10.0 (2.6.8) 2023-02-01 107b2e6697 OpenJDK 64-Bit Server VM 17.0.7+7 on 17.0.7+7 +indy +jit [x86_64-linux]"}
[2024-03-03T02:01:24,275][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2024-03-03T02:01:24,284][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2024-03-03T02:01:24,286][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2024-03-03T02:01:24,484][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2024-03-03T02:01:24,494][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"88ef06dc-0715-4356-97b6-9b91dd24877f", :path=>"/usr/share/logstash/data/uuid"}
[2024-03-03T02:01:25,129][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-03-03T02:01:25,762][INFO ][org.reflections.Reflections] Reflections took 142 ms to scan 1 urls, producing 132 keys and 464 values
[2024-03-03T02:01:26,654][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "cacert" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Set 'ssl_certificate_authorities' instead. If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"cacert", :plugin=><LogStash::Outputs::ElasticSearch ssl_certificate=>"/usr/share/logstash/certs/logstash/logstash.crt", password=><password>, ssl_key=>"/usr/share/logstash/certs/logstash/logstash.pkcs8.key", hosts=>[https://es01:9200, https://es02:9200, https://es03:9200], ssl_enabled=>true, cacert=>"/usr/share/logstash/certs/ca/ca.crt", ssl_verification_mode=>"none", id=>"6bc181a4d6d0ef448181c2f2c3c45949e00b34e50853423e462f532084b4a403", user=>"logstash_internal", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_3e8b053b-e1c6-442d-a0a8-93d1f89c741c", enable_metric=>true, charset=>"UTF-8">, workers=>1, ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false, retry_initial_interval=>2, retry_max_interval=>64, data_stream_type=>"logs", data_stream_dataset=>"generic", data_stream_namespace=>"default", data_stream_sync_fields=>true, data_stream_auto_routing=>true, manage_template=>true, template_overwrite=>false, template_api=>"auto", doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", dlq_on_failed_indexname_interpolation=>true>}
[2024-03-03T02:01:26,675][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-03-03T02:01:26,706][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://es01:9200", "https://es02:9200", "https://es03:9200"]}
[2024-03-03T02:01:26,710][WARN ][logstash.outputs.elasticsearch][main] You have enabled encryption but DISABLED certificate verification, to make sure your data is secure set `ssl_verification_mode => full`
[2024-03-03T02:01:26,894][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://logstash_internal:xxxxxx@es01:9200/, https://logstash_internal:xxxxxx@es02:9200/, https://logstash_internal:xxxxxx@es03:9200/]}}
[2024-03-03T02:01:27,231][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://logstash_internal:xxxxxx@es01:9200/"}
[2024-03-03T02:01:27,241][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.8.0) {:es_version=>8}
[2024-03-03T02:01:27,241][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2024-03-03T02:01:27,593][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://logstash_internal:xxxxxx@es02:9200/"}
[2024-03-03T02:01:27,829][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://logstash_internal:xxxxxx@es03:9200/"}
[2024-03-03T02:01:27,918][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (`data_stream => auto` or unset) resolved to `true`
[2024-03-03T02:01:27,919][WARN ][logstash.outputs.elasticsearch][main] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
[2024-03-03T02:01:27,946][WARN ][logstash.filters.grok    ][main] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2024-03-03T02:01:28,168][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/config/logstash.conf"], :thread=>"#<Thread:0x57cbd1ff@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-03-03T02:01:29,059][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.89}
[2024-03-03T02:01:29,182][INFO ][logstash.inputs.file     ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_1e3cf6fba627b1a5d046b00da35fa7cb", :path=>["/var/log/cron"]}
[2024-03-03T02:01:29,187][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-03-03T02:01:29,192][INFO ][logstash.inputs.tcp      ][main][b9712cca12503e9d0b900a489e88a6bb7b7969f2b3ed9326bc96927f84a77e78] Starting tcp input listener {:address=>"0.0.0.0:514", :ssl_enable=>false}
[2024-03-03T02:01:29,223][INFO ][filewatch.observingtail  ][main][294a4a88d1dd162d7ab104c9d9c0b5045272361d9d09910bde58e221ff6b6661] START, creating Discoverer, Watch with file and sincedb collections
[2024-03-03T02:01:29,234][INFO ][logstash.inputs.tcp      ][main][3781061f0d60f76406bb601dc94256b0917cc197036cfdf57209514741803dbf] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>false}
[2024-03-03T02:01:29,239][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-03-03T02:01:29,242][INFO ][org.logstash.beats.Server][main][b8952b8e7e68061fb9730fd1300e6e7dd04ab8dbc535488fec921bc175666f3a] Starting server on port: 5044
[2024-03-03T02:01:29,246][INFO ][logstash.inputs.udp      ][main][94ed349e0390fdfec9474182c601d66ebfe52ea44fc77aafb2a4aff6a59adc40] Starting UDP listener {:address=>"0.0.0.0:514"}
[2024-03-03T02:01:29,260][INFO ][logstash.inputs.udp      ][main][94ed349e0390fdfec9474182c601d66ebfe52ea44fc77aafb2a4aff6a59adc40] UDP listener started {:address=>"0.0.0.0:514", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[2024-03-03T02:01:29,270][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2024-03-03T02:01:31,071][INFO ][logstash.outputs.elasticsearch][main][6bc181a4d6d0ef448181c2f2c3c45949e00b34e50853423e462f532084b4a403] Retrying failed action {:status=>403, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"@timestamp"=>2024-03-03T02:01:30.800777918Z, "tags"=>["_grokparsefailure"], "message"=>"<166>2024-03-03T02:01:30.776Z esxi67.local Vpxa: info vpxa[2100035] [Originator@6876 sub=vpxaInvtHost] Increment master gen. no to (33720): Event:VpxaEventHostd::CheckQueuedEvents\n", "event"=>{"original"=>"<166>2024-03-03T02:01:30.776Z esxi67.local Vpxa: info vpxa[2100035] [Originator@6876 sub=vpxaInvtHost] Increment master gen. no to (33720): Event:VpxaEventHostd::CheckQueuedEvents\n"}, "type"=>"syslog", "@version"=>"1", "host"=>{"ip"=>"192.168.1.40"}, "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"security_exception", "reason"=>"action [indices:admin/auto_create] is unauthorized for user [logstash_internal] with effective roles [logstash_writer] on indices [logs-generic-default], this action is granted by the index privileges [auto_configure,create_index,manage,all]"}}
[2024-03-03T02:01:31,072][INFO ][logstash.outputs.elasticsearch][main][6bc181a4d6d0ef448181c2f2c3c45949e00b34e50853423e462f532084b4a403] Retrying failed action {:status=>403, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"@timestamp"=>2024-03-03T02:01:30.904085884Z, "tags"=>["_grokparsefailure"], "message"=>"<164>2024-03-03T02:01:30.882Z esxi67.local Vpxa: warning vpxa[2099609] [Originator@6876 sub=hostdstats] Host to vpxd translation is empty, dropping results\n", "event"=>{"original"=>"<164>2024-03-03T02:01:30.882Z esxi67.local Vpxa: warning vpxa[2099609] [Originator@6876 sub=hostdstats] Host to vpxd translation is empty, dropping results\n"}, "type"=>"syslog", "@version"=>"1", "host"=>{"ip"=>"192.168.1.40"}, "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"security_exception", "reason"=>"action [indices:admin/auto_create] is unauthorized for user [logstash_internal] with effective roles [logstash_writer] on indices [logs-generic-default], this action is granted by the index privileges [auto_configure,create_index,manage,all]"}}
[2024-03-03T02:01:31,073][INFO ][logstash.outputs.elasticsearch][main][6bc181a4d6d0ef448181c2f2c3c45949e00b34e50853423e462f532084b4a403] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>2}
[2024-03-03T02:01:32,848][INFO ][logstash.outputs.elasticsearch][main][6bc181a4d6d0ef448181c2f2c3c45949e00b34e50853423e462f532084b4a403] Retrying failed action {:status=>403, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"@timestamp"=>2024-03-03T02:01:32.687416552Z, "tags"=>["_grokparsefailure"], "message"=>"<163>2024-03-03T02:01:32.662Z esxi67.local Hostd: -->\n", "event"=>{"original"=>"<163>2024-03-03T02:01:32.662Z esxi67.local Hostd: -->\n"}, "type"=>"syslog", "@version"=>"1", "host"=>{"ip"=>"192.168.1.40"}, "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"security_exception", "reason"=>"action [indices:admin/auto_create] is unauthorized for user [logstash_internal] with effective roles [logstash_writer] on indices [logs-generic-default], this action is granted by the index privileges [auto_configure,create_index,manage,all]"}}
[2024-03-03T02:01:32,850][INFO ][logstash.outputs.elasticsearch][main][6bc181a4d6d0ef448181c2f2c3c45949e00b34e50853423e462f532084b4a403] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>1}
[2024-03-03T02:01:32,863][INFO ][logstash.outputs.elasticsearch][main][6bc181a4d6d0ef448181c2f2c3c45949e00b34e50853423e462f532084b4a403] Retrying failed action {:status=>403, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"@timestamp"=>2024-03-03T02:01:32.686886359Z, "tags"=>["_grokparsefailure"], "message"=>"<163>2024-03-03T02:01:32.662Z esxi67.local Hostd: error hostd[2114082] [Originator@6876 sub=Default] [LikewiseGetDomainJoinInfo:354] QueryInformation(): ERROR_FILE_NOT_FOUND (2/0):\n", "event"=>{"original"=>"<163>2024-03-03T02:01:32.662Z esxi67.local Hostd: error hostd[2114082] [Originator@6876 sub=Default] [LikewiseGetDomainJoinInfo:354] QueryInformation(): ERROR_FILE_NOT_FOUND (2/0):\n"}, "type"=>"syslog", "@version"=>"1", "host"=>{"ip"=>"192.168.1.40"}, "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"security_exception", "reason"=>"action [indices:admin/auto_create] is unauthorized for user [logstash_internal] with effective roles [logstash_writer] on indices [logs-generic-default], this action is granted by the index privileges [auto_configure,create_index,manage,all]"}}

Now it just looks like a permission/role issue.

Your pipeline is mounted in the wrong place; take a closer look at this, and that should solve that part.

Always make sure you look at the correct version of the docs.

As for the other permission issue, I am not sure how you set that up.

Perhaps look at
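
The 403 in your log spells out what is missing: logstash_internal lacks index privileges (auto_configure, create_index, etc.) on logs-generic-default. For illustration only, a minimal sketch of a role grant that would cover both your logstash-* index and that data stream, called the same way the setup container calls the security API (role name, index patterns, and credentials are placeholders, adjust to your setup):

# sketch: grant the writer role the index privileges named in the 403
curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} \
  -H "Content-Type: application/json" https://es01:9200/_security/role/logstash_writer \
  -d '{
    "cluster": ["manage_index_templates", "monitor", "manage_ilm"],
    "indices": [
      {
        "names": ["logstash-*", "logs-generic-*"],
        "privileges": ["write", "create", "create_index", "manage", "manage_ilm", "auto_configure"]
      }
    ]
  }'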


I was able to get it up and running by adding "command: logstash -f config/logstash.conf" to the docker compose file.

  logstash:
    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
    container_name: logstash
    ports:
      - 5044:5044
      - 5000:5000
      - 514:514/udp
    volumes:
      - certs:/usr/share/logstash/certs
      - ./logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash.conf:/usr/share/logstash/config/logstash.conf:rw
    user: root
    depends_on:
      kibana:
        condition: service_healthy
    command: logstash -f config/logstash.conf

The permission issue was fixed by setting up the logstash_internal user and using that to connect. Thanks for the assistance.
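
For anyone else who lands here, a minimal sketch of what setting that user up can look like via the security API, mirroring the curl style used by the setup container (role name and password are placeholders from the config above):

# sketch: create the user and attach the writer role to it
curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} \
  -H "Content-Type: application/json" https://es01:9200/_security/user/logstash_internal \
  -d '{"password": "x-pack-test-password", "roles": ["logstash_writer"]}'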


Right, but if you had just mounted it correctly it would all work, AND you could add more config, etc. But glad you got it working.

I guess I am confused about pipelines.yml vs. logstash.yml. The doc page you provided points to the same location where I had my logstash.yml file, which was not working.

docker run --rm -it -v ~/settings/logstash.yml:/usr/share/logstash/config/logstash.yml docker.elastic.co/logstash/logstash:8.8.2

By default, the container will look in /usr/share/logstash/pipeline/ for pipeline configuration files.

This is the default configuration for the image, defined in /usr/share/logstash/pipeline/logstash.conf

./pipeline, not ./config

docker run --rm -it -v ~/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:8.8.2

My point is that the volume mounts need to be specific, and you need to know where the default files are.
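
In other words, if the conf is mounted where the image expects pipeline files, the custom command is not needed at all. A minimal sketch of the relevant volumes, reusing the files from your compose:

  logstash:
    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
    volumes:
      - certs:/usr/share/logstash/certs
      # settings file stays under config/
      - ./logstash.yml:/usr/share/logstash/config/logstash.yml
      # pipeline files go under pipeline/, which the image loads by default
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf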
