8.7 stack here
After I set up a Logstash output in Fleet and pointed a policy's integrations at that Logstash output, basically no data reaches it.
When I switch the integrations' output back to Elasticsearch instead of Logstash, I get data.
I meticulously followed the steps here:
elastic-agent-pipeline-secure.conf
input {
  elastic_agent {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/certs/ca/ca.crt"]
    ssl_certificate => "/etc/logstash/certs/logstash.crt"
    ssl_key => "/etc/logstash/certs/logstash.pkcs8.key"
    ssl_verify_mode => "force_peer"
  }
}

output {
  elasticsearch {
    hosts => "https://172.0.0.1:9200"
    api_key => "xxxxxxxxxxxxxxxxxxxx:xxxxxxxxxxxxxxxxxxxx"
    data_stream => true
    ssl => true
    cacert => "/etc/logstash/certs/http_ca.crt"
  }
}
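For reference, the matching pipelines.yml entry looks like this (a minimal sketch; the id and path here are the ones reported in the startup log below, adjust them to your actual file):

```yaml
# /etc/logstash/pipelines.yml
- pipeline.id: elastic-agent-pipeline
  path.config: "/etc/logstash/elastic-agent-pipeline.conf"
```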
I made sure to reference this pipeline in pipelines.yml, and confirmed from the logs that it is indeed running without errors:
systemd[1]: Started logstash.
logstash[46688]: Using bundled JDK: /usr/share/logstash/jdk
logstash[46688]: Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[INFO ][logstash.runner ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"8.7.0", "jruby.version"=>"jruby 9.3.10.0 (2.6.8) 2023-02-01 107b2e6697 OpenJDK 64-Bit Server VM 17.0.6+10 on 17.0.6+10 +indy +jit [x86_64-linux]"}
Apr 30 11:36:31 ip-172.0.0.1 logstash[46688]: [2023-04-30T11:36:31,693][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[INFO ][org.reflections.Reflections] Reflections took 77 ms to scan 1 urls, producing 132 keys and 462 values
[INFO ][logstash.javapipeline ] Pipeline `elastic-agent-pipeline` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[INFO ][logstash.outputs.elasticsearch][elastic-agent-pipeline] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://172.0.0.1:9200"]}
[INFO ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://172.0.0.1:9200/]}}
[WARN ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Restored connection to ES instance {:url=>"https://172.0.0.1:9200/"}
[INFO ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Elasticsearch version determined (8.7.0) {:es_version=>8}
[WARN ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[WARN ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
[INFO ][logstash.javapipeline ][elastic-agent-pipeline] Starting pipeline {:pipeline_id=>"elastic-agent-pipeline", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/etc/logstash/elastic-agent-pipeline.conf"], :thread=>"#<Thread:0x22b594ab@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[INFO ][logstash.javapipeline ][elastic-agent-pipeline] Pipeline Java execution initialization time {"seconds"=>0.54}
[INFO ][logstash.inputs.beats ][elastic-agent-pipeline] Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ][logstash.javapipeline ][elastic-agent-pipeline] Pipeline started {"pipeline.id"=>"elastic-agent-pipeline"}
[logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:"elastic-agent-pipeline"], :non_running_pipelines=>[]}
[INFO ][org.logstash.beats.Server][elastic-agent-pipeline][cacd334d179fceaa1a9b9ebb723484a4a8781322e61ccf95432df67590c08c91] Starting server on port: 5044
I made sure the Elasticsearch / Fleet Server / Logstash machine is reachable from the Elastic Agent host and that the port is effectively open:
ubuntu@elastic-agent-host:/var/log$ telnet 172.0.0.1 5044
Trying 172.0.0.1...
Connected to 172.0.0.2.
Escape character is '^]'.
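One thing worth noting: because the input is configured with `ssl_verify_mode => "force_peer"`, the listener demands a client certificate during the TLS handshake, so a plain TCP connect succeeding (as telnet shows) does not prove the handshake succeeds. A hedged way to test the handshake itself from the agent host — the cert/key paths here are hypothetical placeholders, substitute the client certificate and key that the Fleet Logstash output hands to the agents:

```shell
# Attempt a mutual-TLS handshake against the Logstash listener.
# ca.crt / client.crt / client.key are placeholder names; use the CA and
# the agent-side client certificate/key configured in Fleet's Logstash output.
openssl s_client -connect 172.0.0.1:5044 \
  -CAfile ca.crt \
  -cert client.crt \
  -key client.key
```

If the handshake fails here, the agents' connections are being rejected before any event can flow, which would match Logstash seeing nothing at all.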
Now I can wait for hours and nothing happens.
No data stream, no input, nothing, as you can see below.
So this basically tells us the Agent is not sending anything to Logstash, or at least that Logstash is not seeing anything arrive on its end.
root@mainhost# curl -XGET localhost:9600/_node/stats/events?pretty
{
  "host" : "ip-172-0-0-1",
  "version" : "8.7.0",
  "http_address" : "127.0.0.1:9600",
  "id" : "eac3fd1c-a24a-4e51-8b52-e41aa73b4628",
  "name" : "ip-172-30-2-238",
  "ephemeral_id" : "e458c193-b795-43d9-b0b8-5cfa90288070",
  "status" : "green",
  "snapshot" : false,
  "pipeline" : {
    "workers" : 8,
    "batch_size" : 125,
    "batch_delay" : 50
  },
  "events" : {
    "in" : 0,
    "filtered" : 0,
    "out" : 0,
    "duration_in_millis" : 92,
    "queue_push_duration_in_millis" : 0
  }
}
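To watch just those counters instead of eyeballing the whole payload, the `events` object can be extracted with a one-liner; a small sketch (the sample JSON is inlined here for illustration — in practice, pipe `curl -s localhost:9600/_node/stats/events` into it instead):

```shell
# Extract the event counters from the node stats API response.
# Sample payload inlined for illustration; live check would be:
#   curl -s localhost:9600/_node/stats/events | python3 -c '...'
stats='{"events":{"in":0,"filtered":0,"out":0,"duration_in_millis":92,"queue_push_duration_in_millis":0}}'
echo "$stats" | python3 -c '
import json, sys
e = json.load(sys.stdin)["events"]
print("in=%(in)d filtered=%(filtered)d out=%(out)d" % e)
'
```

For a working pipeline these counters climb within seconds of an agent connecting; here they stay at zero no matter how long I wait.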
But as soon as I switch the integrations' output back to the Elasticsearch default, data streams pop up.
This is depressing; I am really out of ideas here.
Any help is welcome.