Logstash not receiving messages when started via docker-compose (Docker is ok)

I've encountered some strange behaviour when running Logstash via docker-compose that I haven't been able to figure out. Logstash starts (seemingly) fine, but it does not receive any events from Beats or other sources. Events coming from Beats only show up when the connection is terminated (e.g., when Filebeat is closed).

I created a different topic for this here initially, but have since narrowed it down to something in docker-compose: Filebeat not sending to Logstash until it’s terminated

What I've found so far is that starting Logstash via plain docker works fine: Beats events come in and show up in the console as they arrive.

There's nothing of note in the Logstash configuration; I have a very simple pipeline for testing.

Contents of pipeline/logstash.conf:

input {
  beats {
    port => 5044
  }
  tcp {
    port => 5045
  }
}

output {
  stdout {
    codec => json
  }
}

Contents of config/logstash.yml:

http.host: "0.0.0.0"
config.reload.automatic: true

Contents of config/pipelines.yml:

- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"

My docker command line:

docker run --rm -it \
    -v ~/docker/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro \
    -v ~/docker/logstash/config/pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro \
    -v ~/docker/logstash/pipeline/:/usr/share/logstash/pipeline/:ro \
    -p 5044:5044 -p 5045:5045 \
    docker.elastic.co/logstash/logstash:8.1.3

Starting it this way and sending a test via telnet to port 5045 on the host:

tester@taklu20tester:~$ docker run --rm -it \
>     -v ~/docker/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro \
>     -v ~/docker/logstash/config/pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro \
>     -v ~/docker/logstash/pipeline/:/usr/share/logstash/pipeline/:ro \
>     -p 5044:5044 -p 5045:5045 \
>     docker.elastic.co/logstash/logstash:8.1.3
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2022-04-30T18:09:11,261][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2022-04-30T18:09:11,266][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.1.3", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.14.1+1 on 11.0.14.1+1 +indy +jit [linux-x86_64]"}
[2022-04-30T18:09:11,267][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/]
[2022-04-30T18:09:11,281][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2022-04-30T18:09:11,286][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2022-04-30T18:09:11,435][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"34c8b5dc-6802-4929-8ee8-3ec77fd5ae5f", :path=>"/usr/share/logstash/data/uuid"}
[2022-04-30T18:09:11,954][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-04-30T18:09:12,204][INFO ][org.reflections.Reflections] Reflections took 33 ms to scan 1 urls, producing 120 keys and 419 values
[2022-04-30T18:09:12,511][INFO ][logstash.codecs.json     ] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2022-04-30T18:09:12,532][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2022-04-30T18:09:12,592][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x7bad7df0 run>"}
[2022-04-30T18:09:13,022][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.43}
[2022-04-30T18:09:13,039][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2022-04-30T18:09:13,099][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-04-30T18:09:13,104][INFO ][logstash.inputs.tcp      ][main][2c8957accb6b975a85b579aeb3cb599d7acb674ec6abbef5de93c0e32a24affa] Starting tcp input listener {:address=>"0.0.0.0:5045", :ssl_enable=>false}
[2022-04-30T18:09:13,151][INFO ][org.logstash.beats.Server][main][615ed699d809d9ae88cac98f51ed4d45f76d84f1287fdbfa6556a827df3784e6] Starting server on port: 5044
[2022-04-30T18:09:13,166][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
{"@timestamp":"2022-04-30T18:09:55.843543Z","@version":"1","message":"hello!\r","event":{"original":"hello!\r"}}{"@timestamp":"2022-04-30T18:10:08.540233Z","@version":"1","message":"hello again...\r","event":{"original":"hello again...\r"}}
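For reference, the round trip the telnet test performs can be sketched in plain Python (a stand-in listener plays the role of Logstash's tcp input, on an ephemeral port rather than 5045, so this is self-contained and not Logstash itself). It also shows where the trailing \r in the events above likely comes from: telnet sends CRLF line endings, and splitting on \n alone leaves the \r attached.

```python
import socket
import threading

# A stand-in listener playing the role of Logstash's tcp input.
received = []
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def accept_one_line():
    conn, _ = server.accept()
    with conn, conn.makefile() as f:
        # readline() splits on \n; note a \r would remain without the rstrip
        received.append(f.readline().rstrip("\r\n"))

t = threading.Thread(target=accept_one_line)
t.start()

# The client side is what `telnet <host> 5045` does: open a TCP connection
# and send a newline-terminated line of text (telnet uses CRLF).
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello!\r\n")

t.join()
server.close()
print(received)  # ['hello!']
```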

Now, I wanted to actually start Logstash from inside my docker-compose file so I can run it alongside Elasticsearch and Kibana for quick evaluation/testing purposes. Also, I wanted to build my own custom Logstash image (via Dockerfile) so I can include some plugins I wanted to try out.

The docker-compose.yml equivalent of the above is, I think:

version: "3.3"

services:
  logstash:
    image: docker.elastic.co/logstash/logstash:8.1.3
    ports:
      - 5044:5044
      - 5045:5045
    volumes:
      - ~/docker/logstash/pipeline/:/usr/share/logstash/pipeline/:ro
      - ~/docker/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ~/docker/logstash/config/pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro

When I start it via "docker-compose up", the logs show that everything seems OK:

logstash_1  | Using bundled JDK: /usr/share/logstash/jdk
logstash_1  | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
logstash_1  | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1  | [2022-04-30T18:03:23,273][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
logstash_1  | [2022-04-30T18:03:23,279][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.1.3", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.14.1+1 on 11.0.14.1+1 +indy +jit [linux-x86_64]"}
logstash_1  | [2022-04-30T18:03:23,280][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/]
logstash_1  | [2022-04-30T18:03:23,948][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
logstash_1  | [2022-04-30T18:03:24,209][INFO ][org.reflections.Reflections] Reflections took 34 ms to scan 1 urls, producing 120 keys and 419 values
logstash_1  | [2022-04-30T18:03:24,512][INFO ][logstash.codecs.json     ] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
logstash_1  | [2022-04-30T18:03:24,530][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
logstash_1  | [2022-04-30T18:03:24,599][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x7b6e7f7a run>"}
logstash_1  | [2022-04-30T18:03:24,995][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.39}
logstash_1  | [2022-04-30T18:03:25,015][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
logstash_1  | [2022-04-30T18:03:25,094][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
logstash_1  | [2022-04-30T18:03:25,122][INFO ][logstash.inputs.tcp      ][main][2c8957accb6b975a85b579aeb3cb599d7acb674ec6abbef5de93c0e32a24affa] Starting tcp input listener {:address=>"0.0.0.0:5045", :ssl_enable=>false}
logstash_1  | [2022-04-30T18:03:25,143][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
logstash_1  | [2022-04-30T18:03:25,156][INFO ][org.logstash.beats.Server][main][615ed699d809d9ae88cac98f51ed4d45f76d84f1287fdbfa6556a827df3784e6] Starting server on port: 5044

But when started this way, the behaviour I mentioned in my previous post (Filebeat not sending to Logstash until it’s terminated) occurs: nothing comes in from Beats until the Beats process is killed. I also have a simple TCP input set up (port 5045) to test via telnet, and I see nothing when entering text that way either.

Starting via the "docker" command line, I see events coming into Logstash on the console immediately.

I have also tried this on a clean Ubuntu 20.04 VM, and I have noticed the same behaviour (I'm trying to eliminate as many variables as possible here, including something messed up with my docker-compose installation).

Is this a legitimate bug, or have I missed something obvious in my docker-compose file/setup?

Thanks!
-Alex

Hi @alex_london

I just reproduced this... it's your output codec.

Put either of these in and you will get the desired results:

output {
  stdout {
    codec => json_lines
  }
}

or

output {
  stdout {
    codec => rubydebug
  }
}

Note from the docs...

json
This codec may be used to decode (via inputs) and encode (via outputs) full JSON messages. If the data being sent is a JSON array at its root multiple events will be created (one per element).

If you are streaming JSON messages delimited by \n then see the json_lines codec.

Encoding will result in a compact JSON representation (no line terminators or indentation)
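To make that difference concrete, here is a small Python sketch (using the standard json module, not Logstash itself) of roughly what the two codecs emit for two events:

```python
import json

events = [{"message": "hello!"}, {"message": "hello again..."}]

# codec => json: compact encoding with no line terminators, so consecutive
# events run together into one unbroken stream
as_json = "".join(json.dumps(e, separators=(",", ":")) for e in events)

# codec => json_lines: newline-delimited, one event per line
as_json_lines = "".join(json.dumps(e, separators=(",", ":")) + "\n"
                        for e in events)

print(as_json)        # {"message":"hello!"}{"message":"hello again..."}
print(as_json_lines)
```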

I suspect that since this output is not line-oriented, Logstash treats everything as one giant stream and buffers it all... until you hit some buffer limit, or close the connection, or perhaps something else triggers a flush. Either way, you want a line- or message-oriented codec here.
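As a rough analogy for that buffering (plain Python, nothing to do with Logstash's actual JRuby I/O stack): whether a stream is line-buffered, as stdout typically is on an interactive terminal like `docker run -it`, or block-buffered, as it often is when attached to a pipe, determines whether output waits around for a flush. This is purely illustrative, not a claim about what Logstash does internally:

```python
import io

def sink_contents(wrapper):
    """Bytes that have actually reached the underlying sink so far."""
    return wrapper.buffer.getvalue()

# Line-buffered, like stdout on an interactive terminal:
# a newline in the output triggers an immediate flush.
tty_like = io.TextIOWrapper(io.BytesIO(), line_buffering=True)
tty_like.write('{"message":"hello!"}\n')
assert sink_contents(tty_like) == b'{"message":"hello!"}\n'

# Block-buffered, like stdout attached to a pipe:
# output sits in the buffer until it fills, is flushed, or the stream closes.
pipe_like = io.TextIOWrapper(io.BytesIO())
pipe_like.write('{"message":"hello!"}\n')
assert sink_contents(pipe_like) == b''   # nothing has reached the sink yet
pipe_like.flush()
assert sink_contents(pipe_like) == b'{"message":"hello!"}\n'
```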

Why it works from the command line but not docker-compose, I'm not sure... but if your goal is to move forward, I would use one of the two codecs I suggested and keep moving.

Hey @stephenb

I don't think I would have figured this out in a million years. Thanks, that does indeed work! It's a strange one for sure...

-Alex

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.