Data is not coming through in Elastic

Hello,

I am running the Elastic Stack in Docker Desktop.
This is the flow my data goes through:

Filebeat -> Logstash -> Elasticsearch

However, when Logstash sends data to Elasticsearch, I don't see any indices being created.

When I manually tried creating an index with a POST request, that did work (however, logstash_system did not have enough permissions, so I used the elastic superuser).
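For reference, the manual test looked roughly like this (the index name is just an example; the credentials come from the compose file below):

# index name is hypothetical; POSTing a document to a non-existent index auto-creates it
curl -X POST -u elastic:test123 "http://localhost:9200/manual-test-index/_doc?pretty" \
  -H "Content-Type: application/json" -d '{"message": "test"}'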

I don't see any errors or warnings in the logs, and I can't really find anything that could cause the issue. Would anyone have any ideas?

This is my docker-compose.yml:

services:
  elasticsearch:
    image: elasticsearch:8.15.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=true
      - xpack.security.transport.ssl.enabled=false
      - xpack.security.http.ssl.enabled=false
      - ELASTIC_PASSWORD=test123
    networks:
      - elk-network
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - elkdata1:/usr/share/elasticsearch/data
      - elkconfig:/usr/share/elasticsearch/config

  kibana:
    image: kibana:8.15.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=test123
    networks:
      - elk-network
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

  logstash:
    image: logstash:8.15.0
    container_name: logstash
    environment:
      - xpack.monitoring.elasticsearch.hosts=http://elasticsearch:9200
      - xpack.monitoring.elasticsearch.username=elastic
      - xpack.monitoring.elasticsearch.password=test123
    networks:
      - elk-network
    ports:
      - "5044:5044"
      - "9600:9600"  # Monitoring API port
    depends_on:
      - elasticsearch
    volumes:
      - C:/ELK/logstash.conf:/usr/share/logstash/config/logstash.conf

  filebeat:
    image: docker.elastic.co/beats/filebeat:8.15.0
    entrypoint: ["sh", "-c", "chmod go-w /usr/share/filebeat/filebeat.yml && filebeat -e"]
    container_name: filebeat
    user: root
    volumes:
      - C:/ELK/TestDMTLogElastic:/usr/share/filebeat/data
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml
    networks:
      - elk-network
    depends_on:
      - logstash
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    

networks:
  elk-network:
    driver: bridge

volumes:
  elkdata1:
  elkconfig:

This is my filebeat.yml:

filebeat.inputs:
- type: filestream
  id: DMT-log-pipeline
  paths:
    - /usr/share/filebeat/data/*.txt
  ignore_older: "0s"

output.logstash:
  hosts: ["logstash:5044"]

logging.level: debug

This is my logstash.conf:

input {
  beats {
    port => 5044  # Filebeat port
    start_position => "beginning"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601} \[\w+\] Start"
      negate => "true"
      what => "previous"
    }
  }
}

filter {
  # to add
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]  # Elasticsearch URL
    user => "logstash_system"
    password => "test123"
    ssl => false
    index => "logs-%{+YYYY.MM.dd}"  # Index pattern 
    document_id => "%{[@metadata][_id]}"  # Optional: Unique ID for the document
  }
  stdout { codec => rubydebug }  # Optional: For debugging outputs to the console
}

There is no start_position option on a beats input, so Logstash will not start. Somewhere an error message is being logged.

[2024-08-21T12:09:29,246][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (ConfigurationError) Something is wrong with your configuration.", :backtrace=>[...]}
[2024-08-21T12:09:33,050][ERROR][logstash.inputs.beats ] Unknown setting 'start_position' for beats

Also, do not use a multiline codec on a beats input. Do the multiline processing in Filebeat itself.
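For a filestream input, that would look roughly like this in filebeat.yml (a sketch; the pattern assumes each log line starts with a timestamp, as in the samples further down):

filebeat.inputs:
- type: filestream
  id: DMT-log-pipeline
  paths:
    - /usr/share/filebeat/data/*.txt
  parsers:
    # negate: true + match: after mirrors the logstash negate => "true",
    # what => "previous" settings: lines not starting with a timestamp
    # are appended to the previous event
    - multiline:
        type: pattern
        pattern: '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}'
        negate: true
        match: after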

Thanks for your response.

I've done both:

- Removed the start_position
- Removed the multiline from logstash.conf and added it to filebeat.yml

As soon as I add a file, I can see in my logs that Logstash is processing the data, but for some reason it's not arriving in Elasticsearch.

2024-08-21 18:56:02 logstash       |          "input" => {
2024-08-21 18:56:02 logstash       |         "type" => "filestream"
2024-08-21 18:56:02 logstash       |     },
2024-08-21 18:56:02 logstash       |            "ecs" => {
2024-08-21 18:56:02 logstash       |         "version" => "8.0.0"
2024-08-21 18:56:02 logstash       |     },
2024-08-21 18:56:02 logstash       |           "tags" => [
2024-08-21 18:56:02 logstash       |         [0] "beats_input_codec_plain_applied"
2024-08-21 18:56:02 logstash       |     ],
2024-08-21 18:56:02 logstash       |       "@version" => "1",
2024-08-21 18:56:02 logstash       |          "event" => {
2024-08-21 18:56:02 logstash       |         "original" => "2024-08-08 23:51:07.137 +02:00 [INF] test"
2024-08-21 18:56:02 logstash       |     },
2024-08-21 18:56:02 logstash       |          "agent" => {
2024-08-21 18:56:02 logstash       |                 "name" => "926f560ac7bc",
2024-08-21 18:56:02 logstash       |              "version" => "8.15.0",
2024-08-21 18:56:02 logstash       |         "ephemeral_id" => "d65c235e-1149-4bf2-8d95-352eb0ec7db0",
2024-08-21 18:56:02 logstash       |                 "type" => "filebeat",
2024-08-21 18:56:02 logstash       |                   "id" => "eed035d7-5d57-41e7-a161-b2dc3f54e579"
2024-08-21 18:56:02 logstash       |     },
2024-08-21 18:56:02 logstash       |            "log" => {
2024-08-21 18:56:02 logstash       |         "offset" => 1178930,
2024-08-21 18:56:02 logstash       |           "file" => {
2024-08-21 18:56:02 logstash       |             "device_id" => "83",
2024-08-21 18:56:02 logstash       |                 "inode" => "106679016173366189",
2024-08-21 18:56:02 logstash       |                  "path" => "/usr/share/filebeat/data/log20240808.txt"
2024-08-21 18:56:02 logstash       |         }
2024-08-21 18:56:02 logstash       |     },
2024-08-21 18:56:02 logstash       |           "host" => {
2024-08-21 18:56:02 logstash       |         "name" => "926f560ac7bc"
2024-08-21 18:56:02 logstash       |     },
2024-08-21 18:56:02 logstash       |     "@timestamp" => 2024-08-21T16:55:51.716Z,
2024-08-21 18:56:02 logstash       |        "message" => "2024-08-08 23:51:07.137 +02:00 [INF] test"
2024-08-21 18:56:02 logstash       | }

If Logstash cannot index any events it will log an error; check whether you have any ERROR or WARN logs.
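For example, filtering the container's logs for those levels (container name taken from the compose file):

# docker logs writes to both streams, so merge stderr into stdout before filtering
docker logs logstash 2>&1 | grep -E "ERROR|WARN"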

Also, the index name you are using, index => "logs-%{+YYYY.MM.dd}", may be an issue, because on version 8.x there are built-in templates that match anything starting with logs-* and create a data stream, not a regular index.
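One way to confirm which template would apply is the simulate index API (the index name here is just an example; credentials from the compose file):

# shows the settings/mappings an index with this name would get, and from which templates
curl -X POST -u elastic:test123 "http://localhost:9200/_index_template/_simulate_index/logs-2024.08.21?pretty"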

Where, and how, did you check for new logs?

Is anything setting that [@metadata][_id] field? If not, then either Elasticsearch will keep overwriting the same document over and over again, or, if that is an invalid id, every attempt to index a document will log an error message.
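If a stable per-event id is actually wanted, one option (a sketch, not something this pipeline requires) is to derive it from the message with the fingerprint filter:

filter {
  fingerprint {
    # hash the message and write it into the metadata field
    # that the elasticsearch output's document_id already references
    source => ["message"]
    target => "[@metadata][_id]"
    method => "SHA256"
  }
}

Otherwise, just remove the document_id option and let Elasticsearch generate ids.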

If I add a file with some data, I can see Logstash processes what's in it. As far as I can see, it does not throw any errors at the end. I'm checking the logs of my containers in Docker Desktop.

I've simplified my index to just be "logs-". Nothing appears as of yet.

Thanks for the suggestion, I don't believe anything is setting it. I've commented it out for now. No success yet.

There are more Logstash logs; can you filter for only the Logstash logs and share everything?

The one you shared is from the stdout output; it does not help. You need to share the logs for the elasticsearch output; if there is any issue it will be shown in the logs.

It doesn't matter; anything that starts with logs- will match the built-in template.
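If events were being indexed at all, they might be going into a data stream rather than a regular index; one way to check (a sketch):

# lists any data streams matching the logs- prefix
curl -u elastic:test123 "http://localhost:9200/_data_stream/logs-*?pretty"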

Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2024-08-21T17:24:37,796][WARN ][deprecation.logstash.settings] The setting `http.host` is a deprecated alias for `api.http.host` and will be removed in a future release of Logstash. Please use api.http.host instead
[2024-08-21T17:24:37,820][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2024-08-21T17:24:37,823][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.15.0", "jruby.version"=>"jruby 9.4.8.0 (3.1.4) 2024-07-02 4d41e55a67 OpenJDK 64-Bit Server VM 21.0.4+7-LTS on 21.0.4+7-LTS +indy +jit [x86_64-linux]"}
[2024-08-21T17:24:37,829][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2024-08-21T17:24:37,834][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[2024-08-21T17:24:37,835][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[2024-08-21T17:24:37,843][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2024-08-21T17:24:37,849][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2024-08-21T17:24:38,243][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"ce8725b0-80ec-474e-951e-0e70c50532c8", :path=>"/usr/share/logstash/data/uuid"}
[2024-08-21T17:24:39,143][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2024-08-21T17:24:39,143][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
Please configure Elastic Agent to monitor Logstash. Documentation can be found at: 
https://www.elastic.co/guide/en/logstash/current/monitoring-with-elastic-agent.html
[2024-08-21T17:24:39,889][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch:9200/]}}
[2024-08-21T17:24:40,066][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.18.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.18.0.2] failed: Connection refused>}
[2024-08-21T17:24:40,069][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.18.0.2] failed: Connection refused"}
[2024-08-21T17:24:40,089][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.18.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.18.0.2] failed: Connection refused>}
[2024-08-21T17:24:40,091][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.18.0.2] failed: Connection refused {:url=>http://elastic:xxxxxx@elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.18.0.2] failed: Connection refused", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2024-08-21T17:24:40,097][WARN ][logstash.licensechecker.licensereader] Attempt to fetch Elasticsearch cluster info failed. Sleeping for 0.02 {:fail_count=>1, :exception=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.18.0.2] failed: Connection refused"}
[2024-08-21T17:24:40,120][ERROR][logstash.licensechecker.licensereader] Unable to retrieve Elasticsearch cluster info. {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError}
[2024-08-21T17:24:40,123][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2024-08-21T17:24:40,155][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
[2024-08-21T17:24:40,338][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-08-21T17:24:40,747][INFO ][org.reflections.Reflections] Reflections took 251 ms to scan 1 urls, producing 138 keys and 481 values
[2024-08-21T17:24:41,484][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-08-21T17:24:41,557][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x855f367 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-08-21T17:24:43,125][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.57}
[2024-08-21T17:24:43,142][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-08-21T17:24:43,169][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-08-21T17:24:43,200][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2024-08-21T17:24:43,381][INFO ][org.logstash.beats.Server][main][0710cad67e8f47667bc7612580d5b91f691dd8262a4187d9eca8cf87229d04aa] Starting server on port: 5044
[2024-08-21T17:25:10,146][ERROR][logstash.licensechecker.licensereader] Unable to retrieve Elasticsearch cluster info. {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError}
[2024-08-21T17:25:10,147][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2024-08-21T17:25:10,197][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch:9200/"}
[2024-08-21T17:25:10,217][INFO ][logstash.licensechecker.licensereader] Elasticsearch version determined (8.15.0) {:es_version=>8}
[2024-08-21T17:25:10,218][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2024-08-21T17:25:40,170][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2024-08-21T17:25:40,170][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2024-08-21T17:25:40,429][INFO ][logstash.javapipeline    ] Pipeline `.monitoring-logstash` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-08-21T17:25:40,439][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
[2024-08-21T17:25:40,444][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch:9200/]}}
[2024-08-21T17:25:40,462][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch:9200/"}
[2024-08-21T17:25:40,463][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch version determined (8.15.0) {:es_version=>8}
[2024-08-21T17:25:40,463][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2024-08-21T17:25:40,471][WARN ][logstash.javapipeline    ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2024-08-21T17:25:40,472][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x2565e3c6 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-08-21T17:25:40,484][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.01}
[2024-08-21T17:25:40,489][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2024-08-21T17:25:40,502][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
{
         "agent" => {
                "name" => "f7aac19a2e9b",
             "version" => "8.15.0",
        "ephemeral_id" => "d7628ec3-b00e-4ce8-8901-dc638801b4c5",
                "type" => "filebeat",
                  "id" => "eed035d7-5d57-41e7-a161-b2dc3f54e579"
    },
    "@timestamp" => 2024-08-21T17:29:53.615Z,
      "@version" => "1",
         "input" => {
        "type" => "filestream"
    },
           "ecs" => {
        "version" => "8.0.0"
    },
           "log" => {
        "offset" => 54,
          "file" => {
                "inode" => "6192449488337721",
            "device_id" => "83",
                 "path" => "/usr/share/filebeat/data/log20240801.txt"
        }
    },
          "host" => {
        "name" => "f7aac19a2e9b"
    },
         "event" => {
        "original" => "2024-08-08 00:18:57.808 +02:00 [INF] Start DMTService."
    },
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
       "message" => "2024-08-08 00:18:57.808 +02:00 [INF] Start DMTService."
}
{
         "agent" => {
                  "id" => "eed035d7-5d57-41e7-a161-b2dc3f54e579",
                "name" => "f7aac19a2e9b",
        "ephemeral_id" => "d7628ec3-b00e-4ce8-8901-dc638801b4c5",
                "type" => "filebeat",
             "version" => "8.15.0"
    },
    "@timestamp" => 2024-08-21T17:29:53.616Z,
      "@version" => "1",
         "input" => {
        "type" => "filestream"
    },
           "ecs" => {
        "version" => "8.0.0"
    },
           "log" => {
        "offset" => 946,
          "file" => {
                "inode" => "6192449488337721",
            "device_id" => "83",
                 "path" => "/usr/share/filebeat/data/log20240801.txt"
        }
    },
          "host" => {
        "name" => "f7aac19a2e9b"
    },
         "event" => {
        "original" => "2024-08-08 00:18:59.109 +02:00 [DBG] First script: UPDATE im_dmt0 SET  actieswitch = 1, [UPDATETIMESTAMP] = getDate() WHERE gguid = 0x6A1D3EE31EC64E59A124297E661CD3BC"
    },
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
       "message" => "2024-08-08 00:18:59.109 +02:00 [DBG] First script: UPDATE im_dmt0 SET  actieswitch = 1, [UPDATETIMESTAMP] = getDate() WHERE gguid = 0x6A1D3EE31EC64E59A124297E661CD3BC"
}
{
         "agent" => {
                "name" => "f7aac19a2e9b",
             "version" => "8.15.0",
        "ephemeral_id" => "d7628ec3-b00e-4ce8-8901-dc638801b4c5",
                "type" => "filebeat",
                  "id" => "eed035d7-5d57-41e7-a161-b2dc3f54e579"
    },
    "@timestamp" => 2024-08-21T17:29:53.616Z,
      "@version" => "1",
         "input" => {
        "type" => "filestream"
    },
           "ecs" => {
        "version" => "8.0.0"
    },
           "log" => {
        "offset" => 679,
          "file" => {
                "inode" => "6192449488337721",
            "device_id" => "83",
                 "path" => "/usr/share/filebeat/data/log20240801.txt"
        }
    },
          "host" => {
        "name" => "f7aac19a2e9b"
    },
         "event" => {
        "original" => "2024-08-08 00:18:59.105 +02:00 [DBG] First script: UPDATE im_dmt0 SET  actieswitch = 1, [UPDATETIMESTAMP] = getDate() WHERE gguid = 0x3A270AEE44DF49B793A8898DD32C99D8"
    },
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
       "message" => "2024-08-08 00:18:59.105 +02:00 [DBG] First script: UPDATE im_dmt0 SET  actieswitch = 1, [UPDATETIMESTAMP] = getDate() WHERE gguid = 0x3A270AEE44DF49B793A8898DD32C99D8"
}
{
         "agent" => {
                "name" => "f7aac19a2e9b",
             "version" => "8.15.0",
        "ephemeral_id" => "d7628ec3-b00e-4ce8-8901-dc638801b4c5",
                "type" => "filebeat",
                  "id" => "eed035d7-5d57-41e7-a161-b2dc3f54e579"
    },
    "@timestamp" => 2024-08-21T17:29:53.616Z,
      "@version" => "1",
         "input" => {
        "type" => "filestream"
    },
           "ecs" => {
        "version" => "8.0.0"
    },
           "log" => {
        "offset" => 2352,
          "file" => {
                "inode" => "6192449488337721",
            "device_id" => "83",
                 "path" => "/usr/share/filebeat/data/log20240801.txt"
        }
    },
          "host" => {
        "name" => "f7aac19a2e9b"
    },
         "event" => {
        "original" => "2024-08-08 00:18:59.118 +02:00 [DBG] First script: UPDATE im_dmt0 SET  actieswitch = 1, [UPDATETIMESTAMP] = getDate() WHERE gguid = 0x64A78B7D64CD444CA65AC87D81ABD5A8"
    },
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
       "message" => "2024-08-08 00:18:59.118 +02:00 [DBG] First script: UPDATE im_dmt0 SET  actieswitch = 1, [UPDATETIMESTAMP] = getDate() WHERE gguid = 0x64A78B7D64CD444CA65AC87D81ABD5A8"
}
{
         "agent" => {
                "name" => "f7aac19a2e9b",
             "version" => "8.15.0",
        "ephemeral_id" => "d7628ec3-b00e-4ce8-8901-dc638801b4c5",
                "type" => "filebeat",
                  "id" => "eed035d7-5d57-41e7-a161-b2dc3f54e579"
    },
    "@timestamp" => 2024-08-21T17:29:53.612Z,
      "@version" => "1",
         "input" => {
        "type" => "filestream"
    },
     

I extracted all the logs from my container to a text file and copied them here. As far as I could see, there was no critical error.

"It doesn't matter; anything that starts with logs- will match the built-in template."

Ah, OK. My filename is "log20240808", so I'm not sure that will automatically match.

@leandrojmp @Badger I have found the issue.

My logstash.conf was mounted in the config directory; it should have been in the pipeline directory.
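For anyone hitting the same thing, the corrected volume mount in the compose file looks like this (the Logstash Docker image loads pipeline configs from /usr/share/logstash/pipeline by default):

    volumes:
      - C:/ELK/logstash.conf:/usr/share/logstash/pipeline/logstash.conf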

Thank you for the help.

One more question:

My Elastic is not recognizing that the data is already coming from Filebeat and is asking for an install. Any way to solve this?