Hi All,
We are using Logstash 6.3.1 and have recently been facing issues with logs flowing in from the Filebeat agents. I am attaching the configuration and log samples below.
We have increased the heap from 4 GB to 6 GB, then 8 GB, and finally 12 GB, but the problem persists: Logstash stops receiving logs about 15 minutes after every restart.
The issue is with a pipeline that uses the tcp input. We also have a pipeline with a syslog input, and it is parsed just fine.
Logstash log:
[2021-01-04T07:33:35,865][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 12766298444, max: 12771524608)
at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:640) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:594) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:764) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:740) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:226) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:146) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:324) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:176) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:137) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:125) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) ~[logstash-input-tcp-5.0.9.jar:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) [logstash-input-tcp-5.0.9.jar:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [logstash-input-tcp-5.0.9.jar:?]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [logstash-input-tcp-5.0.9.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
(The same WARN / OutOfDirectMemoryError entry repeats throughout the log.)
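In case it helps, this is roughly how we confirm that events have stopped flowing after a restart (run from inside the Logstash pod, assuming the monitoring API is on its default port 9600):

# Per-pipeline event counters; the in/out counts stop increasing once the errors above start.
curl -s http://localhost:9600/_node/stats/pipelines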
Logstash pipeline configuration and StatefulSet:
lle-uat.conf:
----
input {
  beats {
    port => 5045
    ssl => true
    ssl_certificate_authorities => ["/mnt/ca.cer"]
    ssl_certificate => "/mnt/server.cer"
    ssl_key => "/mnt/server.key"
    ssl_verify_mode => "force_peer"
  }
}
filter {
  if "majescopas" in [tags] {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:log_timestamp}%{WORD:ErrorCode}:\s%{GREEDYDATA}\s%{GREEDYDATA}%{NUMBER:Threshold}\s%{GREEDYDATA}" }
    }
  }
  if "IIBTransactions" in [tags] {
    json {
      source => "message"
    }
  }
  if "boundarytimepas" in [tags] {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:log_timestamp}\s%{LOGLEVEL}\s\[%{GREEDYDATA}]\s\(%{GREEDYDATA}\)\s%{GREEDYDATA}\s%{GREEDYDATA:Hostname}\s%{GREEDYDATA}\s%{GREEDYDATA}\s%{GREEDYDATA}\s%{GREEDYDATA}\s%{GREEDYDATA:RequestFormat}\s%{GREEDYDATA}=%{WORD:Component}\s%{GREEDYDATA}=%{GREEDYDATA:Process}\s%{GREEDYDATA}\s%{GREEDYDATA}=%{GREEDYDATA:Threshold}\s%{GREEDYDATA}=%{GREEDYDATA}" }
    }
  }
}
output {
  if "UAT" in [tags] {
    elasticsearch {
      user => "logstash"
      password => "logstash"
      hosts => "elasticsearch:9200"
      index => "logstash-lle-uat-%{+YYYY.MM.dd}"
    }
  }
}
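For reference, the Filebeat agents point at this beats input roughly as below. Since ssl_verify_mode is force_peer, each agent also presents a client certificate signed by the same CA. The hostname and file paths here are placeholders, not our real values:

filebeat.yml (output section only):
----
output.logstash:
  hosts: ["logstash.example.com:5045"]
  ssl.certificate_authorities: ["/etc/filebeat/ca.cer"]
  ssl.certificate: "/etc/filebeat/client.cer"
  ssl.key: "/etc/filebeat/client.key"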
lle-dpuat.conf:
----
input {
  tcp {
    port => 5046
    type => syslog
  }
  udp {
    port => 5046
    type => syslog
  }
}
filter {
  grok {
    match => { "message" => "\<%{GREEDYDATA}\>%{TIMESTAMP_ISO8601}\s%{DATA:Tag}\s%{GREEDYDATA}" }
  }
  grok {
    match => { "message" => "\<%{NUMBER}\>%{TIMESTAMP_ISO8601}\s%{DATA:Tag}\s\[%{GREEDYDATA}\]\[%{GREEDYDATA}\]\[%{GREEDYDATA}\]\s%{DATA}\(%{DATA:mpgw}\):\s%{DATA}\(%{GREEDYDATA}\)\[%{IP:IncomingIP}\]\s%{DATA}\(%{DATA:gtid}\):\s%{GREEDYDATA},\s\[%{DATA:URL}\]" }
  }
}
output {
  if "UATDPLOGS" in [Tag] {
    elasticsearch {
      user => "logstash"
      password => "logstash"
      hosts => "elasticsearch:9200"
      index => "logstash-lle-uatdp-%{+YYYY.MM.dd}"
    }
  }
}
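Both files run as separate pipelines via the mounted pipelines.yml, roughly like this (the pipeline ids are paraphrased):

pipelines.yml:
----
- pipeline.id: lle-uat
  path.config: "/usr/share/logstash/pipeline/lle-uat.conf"
- pipeline.id: lle-dpuat
  path.config: "/usr/share/logstash/pipeline/lle-dpuat.conf"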
Logstash StatefulSet:
Name:               logstash
Namespace:          elk
CreationTimestamp:  Fri, 08 Feb 2019 20:24:44 +0530
Selector:           app=logstash
Labels:             app=logstash
Annotations:
Replicas:           1 desired | 1 total
Update Strategy:    OnDelete
Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=logstash
  Containers:
    logstash:
      Image:      docker.elastic.co/logstash/logstash:6.3.1
      Port:       5044/TCP
      Host Port:  0/TCP
      Limits:
        cpu:     8
        memory:  16Gi
      Environment:
        LS_JAVA_OPTS:  -Xmx12g -Xms12g
      Mounts:
        /mnt from logstash-certs (rw)
        /usr/share/logstash/config/logstash.yml from logstash-config (rw,path="logstash.yml")
        /usr/share/logstash/config/pipelines.yml from pipeline-config (rw,path="pipelines.yml")
        /usr/share/logstash/pipeline from pipelines (rw)
  Volumes:
    logstash-config:
      Type:      ConfigMap (a volume populated by a ConfigMap)
      Name:      logstash-config
      Optional:  false
    logstash-certs:
      Type:        Secret (a volume populated by a Secret)
      SecretName:  logstash-certs
      Optional:    false
    pipelines:
      Type:      ConfigMap (a volume populated by a ConfigMap)
      Name:      pipelines
      Optional:  false
    pipeline-config:
      Type:      ConfigMap (a volume populated by a ConfigMap)
      Name:      pipeline-config
      Optional:  false
Volume Claims:
Events:
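As far as I understand, Netty's direct memory limit defaults to the JVM max heap when it is not set explicitly, which matches the roughly 12 GB "max" value shown in the error above. We have not tried capping it separately yet; if we did, I believe the environment line would look something like the sketch below (untested, and I am not sure it addresses the root cause rather than just moving where the error appears):

Environment (sketch, not applied yet):
  LS_JAVA_OPTS:  -Xmx12g -Xms12g -XX:MaxDirectMemorySize=4g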
I would appreciate any help troubleshooting this issue. Please let me know if more information is required from me.
Regards,
Pavan