Logstash doesn't send the logs to Elastic Cloud

I'm trying to send an index from a dedicated Elasticsearch cluster to Elastic Cloud. Here is the console output when I run Logstash:

 sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-sample.conf 
Using bundled JDK: /usr/share/logstash/jdk
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2024-11-12 21:03:31.790 [main] runner - NOTICE: Running Logstash as superuser is not recommended and won't be allowed in the future. Set 'allow_superuser' to 'false' to avoid startup errors in future releases.
[INFO ] 2024-11-12 21:03:31.800 [main] runner - Starting Logstash {"logstash.version"=>"8.15.3", "jruby.version"=>"jruby 9.4.8.0 (3.1.4) 2024-07-02 4d41e55a67 OpenJDK 64-Bit Server VM 21.0.4+7-LTS on 21.0.4+7-LTS +indy +jit [x86_64-linux]"}
[INFO ] 2024-11-12 21:03:31.803 [main] runner - JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[INFO ] 2024-11-12 21:03:31.807 [main] runner - Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[INFO ] 2024-11-12 21:03:31.807 [main] runner - Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[WARN ] 2024-11-12 21:03:31.957 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2024-11-12 21:03:32.358 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9601, :ssl_enabled=>false}
[INFO ] 2024-11-12 21:03:32.629 [Converge PipelineAction::Create<main>] Reflections - Reflections took 74 ms to scan 1 urls, producing 138 keys and 481 values
[INFO ] 2024-11-12 21:03:32.931 [Converge PipelineAction::Create<main>] javapipeline - Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[INFO ] 2024-11-12 21:03:32.951 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1"]}
[INFO ] 2024-11-12 21:03:33.048 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@dc7d2a736da94b29bd3fc5c1d954e392.us-central1.gcp.cloud.es.io:443/]}}
[WARN ] 2024-11-12 21:03:34.149 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"https://elastic:xxxxxx@dc7d2a736da94b29bd3fc5c1d954e392.us-central1.gcp.cloud.es.io:443/"}
[INFO ] 2024-11-12 21:03:34.149 [[main]-pipeline-manager] elasticsearch - Elasticsearch version determined (8.15.3) {:es_version=>8}
[WARN ] 2024-11-12 21:03:34.149 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[INFO ] 2024-11-12 21:03:34.316 [[main]-pipeline-manager] elasticsearch - Data streams auto configuration (`data_stream => auto` or unset) resolved to `true`
[INFO ] 2024-11-12 21:03:34.347 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, "pipeline.sources"=>["/etc/logstash/conf.d/logstash-sample.conf"], :thread=>"#<Thread:0x5fa847a8 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[INFO ] 2024-11-12 21:03:35.119 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>0.77}
[INFO ] 2024-11-12 21:03:35.352 [[main]-pipeline-manager] elasticsearch - `search_api => auto` resolved to `scroll` {:elasticsearch=>"7.17.23"}
[INFO ] 2024-11-12 21:03:35.355 [[main]-pipeline-manager] elasticsearch - ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[INFO ] 2024-11-12 21:03:35.356 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2024-11-12 21:03:35.367 [[main]<elasticsearch] scroll - Query start
[INFO ] 2024-11-12 21:03:35.381 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

Also, my conf file /etc/logstash/conf.d/logstash-sample.conf is here:

input {
  elasticsearch {
    hosts => ["http://myhost:9200"]
    user => "user"
    password => "pass"
    index => "catalog_products_new"
    query => '{"size": 100000, "sort": [{"_id": {"order": "desc"}}]}'  
    scroll => "5m"
    docinfo => true
  }
}

output {
  elasticsearch {
    cloud_id => "gardrops:dXMtY2VudHJhbDEuZ2NwLmNsb3VkLmVzLmlvOjQ0MyRkYzdkMmE3MzZkYTk0YjI5YmQzZmM1YzFk…
    cloud_auth => "username:password"
  }
}

And here is /var/log/logstash/logstash-plain.log:

[2024-11-12T21:03:12,726][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.15.3) {:es_version=>8}
[2024-11-12T21:03:12,726][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2024-11-12T21:03:12,893][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (`data_stream => auto` or unset) resolved to `true`
[2024-11-12T21:03:12,920][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, "pipeline.sources"=>["/etc/logstash/conf.d/logstash-sample.conf"], :thread=>"#<Thread:0x54533edd /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-11-12T21:03:13,756][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.83}
[2024-11-12T21:03:14,470][INFO ][logstash.inputs.elasticsearch][main] `search_api => auto` resolved to `scroll` {:elasticsearch=>"7.17.23"}
[2024-11-12T21:03:14,472][INFO ][logstash.inputs.elasticsearch][main] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2024-11-12T21:03:14,473][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-11-12T21:03:14,479][INFO ][logstash.inputs.elasticsearch.scroll][main][83bb52d6246590c5e86c1127485f8de3191b176a0e3bb0d64f1e51ba066d47f6] Query start
[2024-11-12T21:03:14,486][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

What is the problem?

This largely looks correct and the logs indicate everything is starting.

You could enable debug logging for more information.
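For example, you could re-run the same command with the log level raised via the standard --log.level flag:

sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-sample.conf --log.level debug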

Things to check would include making sure that your source index name is correct, manually specifying the target index on the output (see the sketch below), and verifying whether the index actually exists in your cloud cluster.
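One hint: your startup log shows `data_stream => auto` resolved to `true`, and with no `index` set the output typically routes events into the default logs-generic-default data stream, not into an index named like your source. So also check your cloud cluster for that data stream. Here is a minimal sketch of the output with an explicit target index, assuming you want a plain index mirroring the source name (cloud_id truncated as in your paste):

output {
  elasticsearch {
    cloud_id => "gardrops:dXMtY2VudHJhbDEuZ2NwLmNsb3VkLmVzLmlvOjQ0MyRkYzdkMmE3MzZkYTk0YjI5YmQzZmM1YzFk…"
    cloud_auth => "username:password"
    data_stream => false              # write to a regular index, not a data stream
    index => "catalog_products_new"   # explicit target index (same name as the source index)
  }
}

You can then verify from the cloud side whether the index exists and is receiving documents, e.g.:

curl -u elastic:xxxxxx "https://dc7d2a736da94b29bd3fc5c1d954e392.us-central1.gcp.cloud.es.io:443/_cat/indices/catalog_products_new?v"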
