No logs after enabling TLS

Hi, I have an ELK cluster that collects logs from a firewall. That worked well until I enabled TLS/SSL. Since then, Kibana does not show any new logs.

Here is one of the config files:

input {
  udp {
    port => 514
    type => firewall
  }
}

filter {
  if [type] == "firewall" {
    mutate {
      add_tag => ["fortigate"]
    }
    grok {
      break_on_match => false
      match => [ "message", "%{SYSLOG5424PRI:syslog_index}%{GREEDYDATA:message}" ]
      overwrite => [ "message" ]
      tag_on_failure => [ "failure_grok_fortigate" ]
    }
    kv { }
    if [msg] {
      mutate {
        replace => [ "message", "%{msg}" ]
      }
    }
    mutate {
      convert => { "duration" => "integer" }
      convert => { "rcvdbyte" => "integer" }
      convert => { "rcvdpkt" => "integer" }
      convert => { "sentbyte" => "integer" }
      convert => { "sentpkt" => "integer" }
      convert => { "cpu" => "integer" }
      convert => { "disk" => "integer" }
      convert => { "disklograte" => "integer" }
      convert => { "fazlograte" => "integer" }
      convert => { "mem" => "integer" }
      convert => { "totalsession" => "integer" }
    }
    mutate {
      add_field => [ "fgtdatetime", "%{date} %{time}" ]
      add_field => [ "loglevel", "%{level}" ]
      replace => [ "fortigate_type", "%{type}" ]
      replace => [ "fortigate_subtype", "%{subtype}" ]
      remove_field => [ "msg", "message", "date", "time", "eventtime" ]
    }
    date {
      match => [ "fgtdatetime", "YYYY-MM-dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["https://x.x.x:9200", "https://x.x.x:9200", "https://x.x.x:9200"]
    cacert => '/etc/logstash/config/certs/ca.crt'
    #user => 'logstash_writer'
    user => 'elastic'
    password => 'xxx'
    index => "fortinet-%{+YYYY.MM.dd}"
    manage_template => false
  }
}
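
To rule out a certificate problem between Logstash and Elasticsearch, I assume I can test the connection from the Logstash host with the same CA file (hosts redacted as x.x.x), something like:

curl --cacert /etc/logstash/config/certs/ca.crt -u elastic https://x.x.x:9200

If that returns the cluster info, the CA file and the hostnames in the certificates should at least match.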

If I start the Logstash service with:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/fortigate_nocomment.conf

this is the output:

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: Unknown module: org.jruby.dist specified to --add-opens
WARNING: Unknown module: org.jruby.dist specified to --add-opens
WARNING: Unknown module: org.jruby.dist specified to --add-opens
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2020-10-26T15:43:14,444][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.9.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-post-Ubuntu-0ubuntu120.04 on 11.0.8+10-post-Ubuntu-0ubuntu120.04 +indy +jit [linux-x86_64]"}
[2020-10-26T15:43:15,189][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-10-26T15:43:17,690][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2020-10-26T15:43:17,695][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2020-10-26T15:43:19,460][INFO ][org.reflections.Reflections] Reflections took 70 ms to scan 1 urls, producing 22 keys and 45 values
[2020-10-26T15:43:19,834][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] ** WARNING ** Detected UNSAFE options in elasticsearch output configuration!
** WARNING ** You have enabled encryption but DISABLED certificate verification.
** WARNING ** To make sure your data is secure change :ssl_certificate_verification to true
[2020-10-26T15:43:19,963][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://logstash_system:xxxxxx@x.x.x:9200/, https://logstash_system:xxxxxx@x.x.x:9200/]}}
[2020-10-26T15:43:20,506][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"https://logstash_system:xxxxxx@x.x.x:9200/"}
[2020-10-26T15:43:20,518][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] ES Output version determined {:es_version=>7}
[2020-10-26T15:43:20,521][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-10-26T15:43:20,608][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"https://logstash_system:xxxxxx@x.x.x:9200/"}
[2020-10-26T15:43:20,669][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["https://x.x.x:9200", "https://x.x.x:9200"]}
[2020-10-26T15:43:20,701][WARN ][logstash.javapipeline    ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2020-10-26T15:43:20,880][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x91a5b34 run>"}
[2020-10-26T15:43:20,948][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@x.x.x:9200/, https://elastic:xxxxxx@x.x.x:9200/, https://elastic:xxxxxx@x.x.x:9200/]}}
[2020-10-26T15:43:21,023][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@x.x.x.:9200/"}
[2020-10-26T15:43:21,035][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-10-26T15:43:21,038][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-10-26T15:43:21,121][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@x.x.x:9200/"}
[2020-10-26T15:43:21,331][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@x.x.x:9200/"}
[2020-10-26T15:43:21,380][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://x.x.x:9200", "https://x.x.x.:9200", "https://x.x.x.:9200"]}
[2020-10-26T15:43:21,620][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/fortigate_nocomment.conf"], :thread=>"#<Thread:0x4f3d7a2d@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:54 run>"}
[2020-10-26T15:43:21,837][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.95}
[2020-10-26T15:43:21,955][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2020-10-26T15:43:22,259][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.64}
[2020-10-26T15:43:22,287][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-10-26T15:43:22,323][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
[2020-10-26T15:43:22,349][INFO ][logstash.inputs.udp      ][main][48ef0552a3fb3842b8737780564e4dfb17fff2b1031897971a81c0160247c5f5] Starting UDP listener {:address=>"0.0.0.0:514"}
[2020-10-26T15:43:22,453][INFO ][logstash.inputs.udp      ][main][48ef0552a3fb3842b8737780564e4dfb17fff2b1031897971a81c0160247c5f5] UDP listener started {:address=>"0.0.0.0:514", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[2020-10-26T15:43:22,647][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
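
Apart from the certificate verification warning for the monitoring output, I don't see any obvious error in this output. I guess I could rerun Logstash with more verbose logging, for example:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/fortigate_nocomment.conf --log.level debug

but I'm not sure what to look for in that output.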

Can someone please help me with this problem?
I don't really know how to start debugging this.
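My only idea so far is to check whether today's fortinet index is still being written to at all, to see if the problem is between Logstash and Elasticsearch or between Elasticsearch and Kibana, maybe with something like:

curl --cacert /etc/logstash/config/certs/ca.crt -u elastic "https://x.x.x:9200/_cat/indices/fortinet-*?v"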
