Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"logstash-%{+YYYY.MM.dd}"}

Hi, I am working on the ELK stack using Docker Compose. I am following this tutorial: Getting started with the Elastic Stack and Docker-Compose | Elastic Blog. Everything works except Logstash. When I run docker-compose, I get this error: [logstash.outputs.elasticsearch][main] Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"logstash-%{+YYYY.MM.dd}"}.

This is my logstash.conf file; it is almost the same as in the tutorial above.

input {
  file {
    # default mode is "tail", which assumes more data will come into the file.
    # change to mode => "read" if the file is a complete file. By default, the file
    # will be removed once reading is complete -- back up your files if you need them.
    mode => "tail"
    path => "/var/log/docker/docker-valheim.log"
  }
}

filter {
}

output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    hosts => "${ELASTIC_HOSTS}"
    user => "${ELASTIC_USER}"
    password => "${ELASTIC_PASSWORD}"
    cacert => "certs/ca/ca.crt"
  }
}

This is the logstash service in my docker-compose.yml file:

  logstash:
    depends_on:
      es01:
        condition: service_healthy
      kibana:
        condition: service_healthy
    container_name: logstash
    labels:
      co.elastic.logs/module: logstash
    user: root
    volumes:
      - certs:/usr/share/logstash/certs
      - logstashdata01:/var/log/docker
      - "./logstash_ingest_data/:/usr/share/logstash/ingest_data/"
      - "./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro"
    environment:
      - xpack.monitoring.enabled=false
      - ELASTIC_USER=elastic
      - ELASTIC_HOSTS=https://es01:9200

I also tried using something like this, but I still get the same error:

 index => "logstash"

Hi @Dinoo Welcome to the community!

What version of the stack?

Hi @stephenb, thank you for the warm welcome. My stack version is 8.7.1.


Logstash 8 will, by default, try to write to data streams. Since you have the index option in the output to write to normal indices, you need to disable that behavior.

Add the option data_stream => false to your elasticsearch output and it should work.
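With that change, the elasticsearch output from the config above would look something like this (a sketch based on the original config; the data_stream option is the only addition):

```conf
output {
  elasticsearch {
    # Explicitly opt out of data streams so the `index` option is honored.
    data_stream => false
    index => "logstash-%{+YYYY.MM.dd}"
    hosts => "${ELASTIC_HOSTS}"
    user => "${ELASTIC_USER}"
    password => "${ELASTIC_PASSWORD}"
    cacert => "certs/ca/ca.crt"
  }
}
```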

Hi @leandrojmp, Thank you for your quick response. I've added data_stream => false to my logstash.conf file, and the error disappeared. But when I try to create a data view and type logstash* in the "Index pattern" field, I get the following message:


Here are the logs:

logstash      | Sending Logstash logs to /usr/share/logstash/logs which is now configured via
logstash      | [2023-09-25T22:32:17,029][WARN ][deprecation.logstash.runner] NOTICE: Running Logstash as superuser is not recommended and won't be allowed in the future. Set 'allow_superuser' to 'false' to avoid startup errors in future releases.
logstash      | [2023-09-25T22:32:17,048][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/
logstash      | [2023-09-25T22:32:17,050][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.7.1", "jruby.version"=>"jruby (2.6.8) 2023-02-01 107b2e6697 OpenJDK 64-Bit Server VM 17.0.7+7 on 17.0.7+7 +indy +jit [x86_64-linux]"}
logstash      | [2023-09-25T22:32:17,056][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError,, -Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, -Djruby.regexp.interruptible=true,, --add-exports=jdk.compiler/, --add-exports=jdk.compiler/, --add-exports=jdk.compiler/, --add-exports=jdk.compiler/, --add-exports=jdk.compiler/, --add-opens=java.base/, --add-opens=java.base/, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/,]
logstash      | [2023-09-25T22:32:18,036][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
logstash      | [2023-09-25T22:32:18,586][INFO ][org.reflections.Reflections] Reflections took 206 ms to scan 1 urls, producing 132 keys and 462 values
logstash      | [2023-09-25T22:32:19,011][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
logstash      | [2023-09-25T22:32:19,029][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://es01:9200"]}
logstash      | [2023-09-25T22:32:19,204][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@es01:9200/]}}
logstash      | [2023-09-25T22:32:19,485][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@es01:9200/"}
logstash      | [2023-09-25T22:32:19,495][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.7.1) {:es_version=>8}
logstash      | [2023-09-25T22:32:19,495][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
logstash      | [2023-09-25T22:32:19,510][WARN ][logstash.outputs.elasticsearch][main] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
logstash      | [2023-09-25T22:32:19,524][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
logstash      | [2023-09-25T22:32:19,532][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x28662611@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
logstash      | [2023-09-25T22:32:20,423][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.89}
logstash      | [2023-09-25T22:32:20,470][INFO ][logstash.inputs.file     ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_58135e48542f6990bba1ac621e1c7fce", :path=>["/var/log/docker/docker-valheim.log"]}
logstash      | [2023-09-25T22:32:20,474][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
logstash      | [2023-09-25T22:32:20,480][INFO ][filewatch.observingtail  ][main][8a8a57189f722cb8c2a51e15426fc81e5b0903e7dbd0a0a3451ca75c14c78001] START, creating Discoverer, Watch with file and sincedb collections
logstash      | [2023-09-25T22:32:20,494][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

Go to Stack Management > Index Management and see if you indeed have an index starting with logstash in the name.

Also, is the file /var/log/docker/docker-valheim.log still being written, or is it already complete?

Under Stack Management > Index Management, I don't see an index starting with logstash in the name.

The file /var/log/docker/docker-valheim.log is still being written.
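For reference, since the file is still growing, tail mode is the right choice, but note that in tail mode the input starts at the end of the file unless told otherwise. A sketch of the file input using the same path as above (start_position and sincedb_path are standard file-input options; the sincedb path shown is hypothetical):

```conf
input {
  file {
    path => "/var/log/docker/docker-valheim.log"
    mode => "tail"                 # use "read" instead for files that are already complete
    start_position => "beginning"  # tail mode starts at the end of the file by default
    sincedb_path => "/usr/share/logstash/data/sincedb-valheim"  # hypothetical; pins the read position across restarts
  }
}
```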

Hi guys, just to let you know: I figured out why I wasn't seeing the logstash index. I had accidentally mistyped the path to my log file in logstash.conf. After that simple fix, everything works fine. Thank you for your help!


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.