Logstash UDP input losing messages

I'm using the official Logstash container (7.17.0) to ingest logs over UDP (~8k messages a minute, with spikes of around 30k a minute). It runs in AWS Fargate (currently 2 containers behind a load balancer), and I've experimented with CPU/memory (2 vCPU, 4 GB each) but still seem to be losing messages. I've looked at increasing the read buffer, but unfortunately you can't change kernel parameters in Fargate. Everything else is at defaults, and this is my input:

input {

  udp {
    port => 1415
    codec => json
    receive_buffer_bytes => 1048576 # Unable to set receive_buffer_bytes to desired size. Requested 1048576 but obtained 212992 bytes.
    queue_size => 100000
    tags => ["udp"]
    id => "udp-http-input"
    ecs_compatibility => "v1"
    add_field => {
      "[logstash][tags]" => ["tag"]
    }
  }
}
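The warning in the comment on `receive_buffer_bytes` is the kernel capping the socket buffer, not a Logstash bug. You can reproduce the same behaviour outside Logstash with a minimal Python sketch (assuming Linux, where the cap is the `net.core.rmem_max` sysctl, 212992 bytes by default, and where the kernel reports back double the value it actually set):

```python
import socket

# Ask the kernel for a 1 MiB UDP receive buffer, then read back what we
# actually got. On Linux the grant is capped by net.core.rmem_max
# (212992 bytes by default), which is exactly why Logstash logs
# "Requested 1048576 but obtained 212992 bytes" -- and Fargate does not
# let you raise that sysctl.
requested = 1048576
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)
obtained = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"requested={requested} obtained={obtained}")
sock.close()
```

If `obtained` comes back smaller than `requested`, the only fixes are raising `rmem_max` on the host (not possible in Fargate) or draining the socket faster so the small buffer never fills.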

Here is the output of the node stats events API (`GET /_node/stats/events`) from one of the containers:

{
  "host" : "redacted",
  "version" : "7.17.0",
  "http_address" : "0.0.0.0:9600",
  "id" : "redacted",
  "name" : "redacted",
  "ephemeral_id" : "redacted",
  "status" : "green",
  "snapshot" : false,
  "pipeline" : {
    "workers" : 2,
    "batch_size" : 125,
    "batch_delay" : 50
  },
  "events" : {
    "in" : 3384281,
    "filtered" : 3383779,
    "out" : 3022670,
    "duration_in_millis" : 120930626,
    "queue_push_duration_in_millis" : 120620117
  }
}
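For what it's worth, here is one way to read those numbers (a sketch using the values from the post; the usual heuristic is that `queue_push_duration_in_millis` close to `duration_in_millis` means the inputs spend nearly all their time blocked pushing onto the internal queue, i.e. the filter/output side is applying backpressure):

```python
# Values copied from the /_node/stats/events output above.
stats = {
    "in": 3384281,
    "filtered": 3383779,
    "out": 3022670,
    "duration_in_millis": 120930626,
    "queue_push_duration_in_millis": 120620117,
}

# Events that entered the pipeline but never made it out. Some may simply
# be in flight, but a gap this large over a long window suggests loss or
# a growing backlog.
backlog = stats["in"] - stats["out"]

# Fraction of pipeline time spent just pushing events onto the internal
# queue: a ratio near 1.0 means the inputs are blocked waiting on the
# workers/outputs (backpressure), so UDP datagrams pile up in the small
# socket buffer and get dropped.
push_ratio = stats["queue_push_duration_in_millis"] / stats["duration_in_millis"]

print(f"backlog={backlog} ({backlog / stats['in']:.1%} of input)")
print(f"push_ratio={push_ratio:.3f}")
```

With your figures that works out to roughly 360k events (about 10% of input) unaccounted for, and a push ratio of ~0.997, which points at the downstream side of the pipeline rather than the UDP input itself.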

I guess what I'm asking is: what should I try next? Are there particular metrics in the API output I should be looking at? The documentation doesn't explain how to actually interpret them. Any help greatly appreciated.

One other note: when I test using TCP I receive all the messages, although I can only test with a small subset (~7k messages over about 10 minutes).
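To turn that into a measurable UDP test, a small load generator that stamps each event with a sequence number makes the loss visible on the far side (a sketch; the hostname, port, and `seq` field name are placeholders, not anything from your setup):

```python
import json
import socket
import time

def send_test_events(host, port, count, rate_per_sec):
    """Fire `count` JSON events at a UDP input, each carrying a sequence
    number, so the receiving end can search for gaps in `seq` and
    quantify exactly how many datagrams were dropped and when."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = 1.0 / rate_per_sec
    for seq in range(count):
        payload = json.dumps({"seq": seq, "ts": time.time()}).encode()
        sock.sendto(payload, (host, port))
        time.sleep(interval)
    sock.close()

# Example: ~500 events/sec for one minute against one task
# (hypothetical hostname):
# send_test_events("logstash.example.internal", 1415, 30000, 500)
```

Ramping `rate_per_sec` up until gaps appear would tell you the per-task throughput ceiling, which is more useful than the small TCP sample for comparing against your ~30k/min spikes.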

