No network syslogs in logstash

Hi all,

I need to build something similar to this blog post, but I'm having a hard time making it work.

I've installed Logstash and configured it following the guide on the official website. I've tested loading the "shakespeare" sample data and visualizing it in Kibana, and everything worked.

My configuration file looks like this:

input {
    udp {
        port => 514
        type => syslog
    }
}

output {
    elasticsearch { hosts => [""] }
    stdout { }
}

When I run it with bin/logstash -f /etc/logstash/conf.d/sample.conf, I get this output:

root@gabriele:/usr/share/logstash#  bin/logstash -f /etc/logstash/conf.d/sample.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/ Using default config which logs to console
16:27:10.257 [[main]<udp] INFO  logstash.inputs.udp - Starting UDP listener {:address=>""}
16:27:10.636 [[main]<udp] INFO  logstash.inputs.udp - UDP listener started {:address=>"", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
16:27:11.653 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[]}}
16:27:11.656 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>, :path=>"/"}
16:27:12.093 [[main]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<URI::HTTP:0x2e3d8271 URL:>}
16:27:12.108 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
16:27:13.063 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
16:27:13.093 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x5d073a3e URL://>]}
16:27:13.107 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
16:27:13.157 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
16:27:13.917 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9602}

However, nothing seems to happen when I generate and send syslogs from my network devices.

Can anyone help me to troubleshoot this?
Also, when Logstash receives the syslogs, where does it store them? I have several files in /var/log/logstash, but I cannot register them as index patterns in the Kibana UI.

Thanks in advance,


Can you try using the syslog input and see if that works?
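A minimal sketch of what that could look like (port 514 is carried over from your original config; the rest of your pipeline stays the same):

```
input {
    syslog {
        # The syslog input listens on both TCP and UDP on this port.
        port => 514
        type => syslog
    }
}
```

Note that binding to port 514 (below 1024) requires Logstash to run with sufficient privileges.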

Logstash doesn't store any of the messages it receives on disk; they are kept in memory. If you're not seeing any data show up, I'd check whether you can reach port 514 over TCP or UDP from your network devices. The syslog input binds both TCP and UDP.
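One quick way to check reachability is to send a test message yourself from another machine on the network, e.g. with the util-linux logger utility (the address below is a placeholder; substitute your Logstash host's IP):

```shell
# Send a test syslog message over UDP to the Logstash host.
# 192.0.2.10 is a placeholder address; replace it with your own.
logger --server 192.0.2.10 --port 514 --udp "logstash reachability test"
```

If the message shows up on Logstash's stdout output, the network path is fine and the problem is in the pipeline config; if not, look at firewalls or routing between the devices and the Logstash host.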

It worked, thanks!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.