Hi,
We are running Logstash 8.3.0 and using the syslog output with persistent queues. Please see the configuration below.
logstash.yml:
----
http.host: "0.0.0.0"
http.port: 9600
log.level: "info"
pipeline.workers: 2
pipeline.batch.size: 2048
pipeline.batch.delay: 50
path.logs: /opt/logstash/resource
pipeline.ecs_compatibility: disabled
pipelines.yml:
----
- pipeline.id: logstash
queue.type: persisted
queue.max_bytes: 1024mb
path.config: "/opt/logstash/config/logstash.conf"
- pipeline.id: opensearch
queue.type: persisted
queue.max_bytes: 1024mb
path.config: "/opt/logstash/config/searchengine.conf"
- pipeline.id: syslog
queue.type: persisted
queue.max_bytes: 1024mb
path.config: "/opt/logstash/config/syslog_output.conf"
syslog_output.conf:
----
input { pipeline { address => "syslog_pipeline" } }
filter {
  # Keep only audit, security/authorization, and privacy-related events.
  if [facility] == "log audit"
     or [facility] == "security/authorization messages"
     or "-privacy-" in [metadata][category] {
    # Within those, drop events from the alarm log plane.
    if [extra_data][asi][log_plane] == "alarm" {
      drop {}
    }
  } else {
    drop {}
  }
}
output {
  syslog {
    host       => "rsyslog"
    port       => 514
    protocol   => "tcp"
    rfc        => "rfc5424"
    use_labels => false
    appname    => "%{appname}"
    priority   => "%{priority}"
    message    => "%{message}"
    sourcehost => "%{sourcehost}"
    procid     => "%{[metadata][proc_id]}"
    msgid      => "%{[metadata][category]}"
  }
}
When the syslog server is down, log events are not stored in the syslog persistent queue. Could you please check and advise?
system
(system)
January 6, 2023, 2:28pm
OpenSearch/OpenDistro are AWS run products and differ from the original Elasticsearch and Kibana products that Elastic builds and maintains. You may need to contact them directly for further assistance.
(This is an automated response from your friendly Elastic bot. Please report this post if you have any suggestions or concerns.)
leandrojmp
(Leandro Pereira)
January 6, 2023, 3:13pm
What do you have in the Logstash logs? Please share anything that indicates the problem.
Also, did you check the size of the queue path? The queue should store events until it is full; in your case you set the maximum to 1 GB.
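For example, something like this should show it (path and port taken from your configuration above; running the commands on the Logstash host is assumed):

# On-disk size of the syslog pipeline's persistent queue
du -sh /opt/logstash/data/queue/syslog

# Queue statistics for the syslog pipeline via the monitoring API
curl -s 'http://localhost:9600/_node/stats/pipelines/syslog?pretty'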
We do not see any relevant logs from Logstash while the issue occurs.
Normally, when the syslog server is down, Logstash keeps trying to reconnect to it and we see the following log:
{"version": "1.1.0", "timestamp": "2023-01-10T06:03:17.823Z", "severity": "warning", "service_id": "eric-log-transformer", "metadata" : {"namespace": "xferdsh", "pod_name": "eric-log-transformer-75c84c8864-fbd8z", "container_name": "logtransformer"}, "message": "[logstash.outputs.syslog] syslog tcp output exception: closing, reconnecting and resending event {:host=>'rsyslog', :port=>514, :exception=>#<SocketError: initialize: name or service not known>,
But in this case we do not see any such errors.
The pipeline statistics show there is enough free space in the syslog queue, and the stored events count is 0:
"syslog" : {
--
"path" : "/opt/logstash/data/queue/syslog",
"storage_type" : "ext4"
},
"events_count" : 0,
"queue_size_in_bytes" : 4929,
"max_queue_size_in_bytes" : 1073741824
},
When we send more log events (i.e., more than 5000), some of them are stored in the queue. But when we send fewer (i.e., fewer than 2000), no events are stored in the queue.
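For these tests we push a controlled number of events into the upstream pipeline; a minimal sketch of such a test input (the generator plugin here is only an illustration, not our production input) is:

input {
  # Emit a fixed number of synthetic events for one test run
  generator {
    count   => 2000
    message => "test event"
  }
}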
system
(system)
Closed
February 7, 2023, 7:17am
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.