No incoming data to Logstash Output from Elastic Agents - Only Elasticsearch output works

8.7 stack here

After I set up a Logstash output in Fleet and pointed a policy's integrations at that Logstash output, basically no data comes through.

When I switch the integrations' output to Elasticsearch instead of Logstash, I get data.

I meticulously followed the steps here:

elastic-agent-pipeline-secure.conf

input {
  elastic_agent {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/certs/ca/ca.crt"]
    ssl_certificate => "/etc/logstash/certs/logstash.crt"
    ssl_key => "/etc/logstash/certs/logstash.pkcs8.key"
    ssl_verify_mode => "force_peer"
  }
}

output {
  elasticsearch {
    hosts => "https://172.0.0.1:9200"
    api_key => "xxxxxxxxxxxxxxxxxxxx:xxxxxxxxxxxxxxxxxxxx"
    data_stream => true
    ssl => true
    cacert => "/etc/logstash/certs/http_ca.crt"
  }
}

I made sure to reference this pipeline in pipelines.yml, and that it is indeed running without errors, by checking the logs:
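For reference, the pipelines.yml entry looks roughly like this (a sketch; the pipeline id and config path below are taken from the startup log further down, so adjust to your own layout):

```yaml
# /etc/logstash/pipelines.yml
- pipeline.id: elastic-agent-pipeline
  path.config: "/etc/logstash/elastic-agent-pipeline.conf"
```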

systemd[1]: Started logstash.
logstash[46688]: Using bundled JDK: /usr/share/logstash/jdk
logstash[46688]: Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.7.0", "jruby.version"=>"jruby 9.3.10.0 (2.6.8) 2023-02-01 107b2e6697 OpenJDK 64-Bit Server VM 17.0.6+10 on 17.0.6+10 +indy +jit [x86_64-linux]"}
Apr 30 11:36:31 ip-172.0.0.1 logstash[46688]: [2023-04-30T11:36:31,693][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[INFO ][org.reflections.Reflections] Reflections took 77 ms to scan 1 urls, producing 132 keys and 462 values
[INFO ][logstash.javapipeline    ] Pipeline `elastic-agent-pipeline` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[INFO ][logstash.outputs.elasticsearch][elastic-agent-pipeline] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://172.0.0.1:9200"]}
[INFO ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://172.0.0.1:9200/]}}
[WARN ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Restored connection to ES instance {:url=>"https://172.0.0.1:9200/"}
[INFO ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Elasticsearch version determined (8.7.0) {:es_version=>8}
[WARN ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[WARN ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
[INFO ][logstash.javapipeline    ][elastic-agent-pipeline] Starting pipeline {:pipeline_id=>"elastic-agent-pipeline", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/etc/logstash/elastic-agent-pipeline.conf"], :thread=>"#<Thread:0x22b594ab@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[INFO ][logstash.javapipeline    ][elastic-agent-pipeline] Pipeline Java execution initialization time {"seconds"=>0.54}
[INFO ][logstash.inputs.beats    ][elastic-agent-pipeline] Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ][logstash.javapipeline    ][elastic-agent-pipeline] Pipeline started {"pipeline.id"=>"elastic-agent-pipeline"}
[logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:"elastic-agent-pipeline"], :non_running_pipelines=>[]}
[INFO ][org.logstash.beats.Server][elastic-agent-pipeline][cacd334d179fceaa1a9b9ebb723484a4a8781322e61ccf95432df67590c08c91] Starting server on port: 5044

I made sure the Elasticsearch / Fleet Server / Logstash machine is accessible from the Elastic Agent host, and that the port is effectively open:

ubuntu@elastic-agent-host:/var/log$ telnet 172.0.0.1 5044
Trying 172.0.0.1...
Connected to 172.0.0.1.
Escape character is '^]'.
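Since the input enforces `ssl_verify_mode => "force_peer"`, plain TCP reachability isn't the whole story; something like this checks the TLS handshake itself (a sketch: the client cert/key paths here are hypothetical placeholders for whatever cert the agent side presents):

```shell
# Test the TLS handshake on the Beats port, presenting a client certificate
# as force_peer requires (the -cert/-key paths below are made-up examples)
openssl s_client -connect 172.0.0.1:5044 \
  -CAfile /etc/logstash/certs/ca/ca.crt \
  -cert /path/to/client.crt \
  -key /path/to/client.key
```

A successful handshake ends with a session summary; a `force_peer` rejection shows up as an alert during the handshake.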

Now I can wait for hours and nothing happens.
No data stream, no input, nothing.

As you can see below, this basically tells us the Agent is not sending anything to Logstash, or at least that Logstash is not seeing anything arrive on its end.

root@mainhost# curl -XGET localhost:9600/_node/stats/events?pretty
{
  "host" : "ip-172-0-0-1",
  "version" : "8.7.0",
  "http_address" : "127.0.0.1:9600",
  "id" : "eac3fd1c-a24a-4e51-8b52-e41aa73b4628",
  "name" : "ip-172-30-2-238",
  "ephemeral_id" : "e458c193-b795-43d9-b0b8-5cfa90288070",
  "status" : "green",
  "snapshot" : false,
  "pipeline" : {
    "workers" : 8,
    "batch_size" : 125,
    "batch_delay" : 50
  },
  "events" : {
    "in" : 0,
    "filtered" : 0,
    "out" : 0,
    "duration_in_millis" : 92,
    "queue_push_duration_in_millis" : 0
  }
}

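The check I'm doing by eye on that output can be sketched like this (a toy script, not part of my setup): if `events.in` stays at zero, nothing has even reached the input plugin, so the problem is upstream of Logstash.

```python
import json

# Sample payload in the shape returned by GET localhost:9600/_node/stats/events
sample = json.loads('{"events": {"in": 0, "filtered": 0, "out": 0}}')

def pipeline_receiving(stats: dict) -> bool:
    """True once Logstash has accepted at least one event on its inputs."""
    return stats.get("events", {}).get("in", 0) > 0

print(pipeline_receiving(sample))  # False: nothing has arrived yet
```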
But as soon as I switch the integrations' output back to the Elasticsearch default, data streams pop up.

This is depressing.

I am really out of ideas here.
Any help is welcome.

A couple of things worth noting (or not):

I skipped a couple of fields during the cert process (I don't have an FQDN):

./bin/elasticsearch-certutil cert \
  --name logstash \
  --ca-cert /path/to/ca/ca.crt \
  --ca-key /path/to/ca/ca.key \
  --dns your.host.name.here \ <<<skipped this
  --ip 192.0.2.1 \            <<<and this
  --pem

Also, I installed the Agent with the --insecure switch.

sudo ./elastic-agent install --url=https://172.0.0.1:8220 --enrollment-token=nobodyreallycaresaboutthisbutyouknow== --insecure

Honestly, I don't think this has any impact, but I'm stating it for completeness, as I went rigorously by the documentation for each and every step.

Hi @mehdi-lamrani

Typo? 5044 vs 5440

Logstash input

input {
  elastic_agent {
    port => 5044

Vs agent output in your screenshot

5440

Just for reference 5044 is the normal beats port

The agent logs / filebeat logs should be showing the bad connection

Are you implying that I spent hours going through each and every corner of the documentation, carefully redoing each and every step, while overlooking a simple port typo? Because if that's what you are implying, you would be dead right.

I suppose that's why weekends are made to rest :relieved:

Just to confirm how braindead I was while configuring this: I added an integration with a new dataset name and started getting data right away.

I apologize to the community and for wasting your precious time.
I need to get some time away from the computer now lol.

Thanks a bunch.

PS :

I beg your pardon? I tried to scan for this like everywhere. Problem is, I was being "blind" and not seeing logs that could point in this direction.
Where do I find those agent logs / filebeat logs exactly? (Sorry if the question sounds stupid.)

No worries. We call it pair programming :slight_smile:

You are not the first nor the last!

The agent logs are a bit buried.... I'm assuming nothing showed up in the normal agent logs?

So then you have to drill down under the filebeat portion under the agent and find the logs there.

You also may have been able to run the agent diagnostics. Did you look at any of that?
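For anyone else landing here, this is roughly where I look on a Linux host (paths vary by install method and version, so treat these as examples rather than gospel):

```shell
# Collect a diagnostics bundle (8.x agents); produces a zip archive
sudo elastic-agent diagnostics

# Or browse the logs directly; the filebeat logs live under the agent's
# data directory (the hashed directory name varies per install)
sudo ls /opt/Elastic/Agent/data/elastic-agent-*/logs/
```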

I sympathize, as I still find the agent a bit hard to debug, and the dev/test cycle a bit hard as well.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.