Logstash is not accepting/receiving syslog sent by a Linux server

I have configured syslog-ng to send syslog to server1 on port 514, but the Logstash instance installed on server1 is not receiving the syslog messages on port 514.

I have also defined an input for port 514. Please assist.
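For reference, the input definition I am talking about is along these lines (a minimal sketch, not my exact file; the "syslog" type label is only illustrative):

input {
  tcp {
    port => 514
    type => "syslog"
  }
  udp {
    port => 514
    type => "syslog"
  }
}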

@magnusbaeck

I have Logstash on server1 and syslog-ng on server2. I have configured syslog-ng.conf to send logs to server1 on port 514 over TCP.

But how do I check whether Logstash on server1 is receiving those logs? In my case it seems it is not receiving them. I tried with a blank input as well. Can you let me know what exactly I am missing here?
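For context, the syslog-ng side is set up roughly like this (a sketch, not my literal config; the destination name is a placeholder and the default source name depends on the distribution):

destination d_logstash {
  tcp("server1" port(514));
};

log {
  source(s_src);
  destination(d_logstash);
};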

Check if Logstash is running with this command:

sudo systemctl status logstash.service

If it's not running, start it with:

sudo systemctl start logstash.service

You didn't specify your exact error, so I'm assuming you have set up the servers correctly.
If the servers are on the same network and can reach each other, this should work.

In addition, make sure your configuration files are correct, because, you know... YAML is whitespace-sensitive.
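If in doubt, you can also have Logstash validate the pipeline configuration before starting it (a sketch; adjust the path to wherever your config file actually lives):

bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/syslog.conf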

Thanks. One more piece of information: my Logstash is running on Windows and syslog-ng is on Linux.

Please find the Logstash log:

[2016-11-18T00:03:55,472][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:2514"}
[2016-11-18T00:03:55,642][INFO ][logstash.inputs.udp ] Starting UDP listener {:address=>"0.0.0.0:2515"}
[2016-11-18T00:03:56,115][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://localhost:9200"]}}
[2016-11-18T00:03:56,115][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2016-11-18T00:03:56,971][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2016-11-18T00:03:57,001][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["localhost:9200"]}
[2016-11-18T00:03:57,031][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2016-11-18T00:03:57,046][INFO ][logstash.pipeline ] Pipeline main started
[2016-11-18T00:03:57,285][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

Please find the Elasticsearch log:

[2016-11-17T20:50:45,474][INFO ][o.e.n.Node ] [sgtummpre815] initializing ...
[2016-11-17T20:50:45,900][INFO ][o.e.e.NodeEnvironment ] [sgtummpre815] using [1] data paths, mounts [[OSDisk (C:)]], net usable_space [140.3gb], net total_space [465.2gb], spins? [unknown], types [NTFS]
[2016-11-17T20:50:45,900][INFO ][o.e.e.NodeEnvironment ] [sgtummpre815] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-11-17T20:50:46,025][INFO ][o.e.n.Node ] [sgtummpre815] version[5.0.0], pid[12900], build[253032b/2016-10-26T04:37:51.531Z], OS[Windows 7/6.1/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_111/25.111-b14]
[2016-11-17T20:50:48,611][INFO ][o.e.p.PluginsService ] [sgtummpre815] loaded module [aggs-matrix-stats]
[2016-11-17T20:50:48,611][INFO ][o.e.p.PluginsService ] [sgtummpre815] loaded module [ingest-common]
[2016-11-17T20:50:48,611][INFO ][o.e.p.PluginsService ] [sgtummpre815] loaded module [lang-expression]
[2016-11-17T20:50:48,611][INFO ][o.e.p.PluginsService ] [sgtummpre815] loaded module [lang-groovy]
[2016-11-17T20:50:48,627][INFO ][o.e.p.PluginsService ] [sgtummpre815] loaded module [lang-mustache]
[2016-11-17T20:50:48,627][INFO ][o.e.p.PluginsService ] [sgtummpre815] loaded module [lang-painless]
[2016-11-17T20:50:48,627][INFO ][o.e.p.PluginsService ] [sgtummpre815] loaded module [percolator]
[2016-11-17T20:50:48,627][INFO ][o.e.p.PluginsService ] [sgtummpre815] loaded module [reindex]
[2016-11-17T20:50:48,627][INFO ][o.e.p.PluginsService ] [sgtummpre815] loaded module [transport-netty3]
[2016-11-17T20:50:48,627][INFO ][o.e.p.PluginsService ] [sgtummpre815] loaded module [transport-netty4]
[2016-11-17T20:50:48,643][INFO ][o.e.p.PluginsService ] [sgtummpre815] no plugins loaded
[2016-11-17T20:50:55,289][INFO ][o.e.n.Node ] [sgtummpre815] initialized
[2016-11-17T20:50:55,289][INFO ][o.e.n.Node ] [sgtummpre815] starting ...
[2016-11-17T20:51:04,957][INFO ][o.e.t.TransportService ] [sgtummpre815] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2016-11-17T20:51:09,174][INFO ][o.e.c.s.ClusterService ] [sgtummpre815] new_master {sgtummpre815}{pFO_sShuR8S5UnQ9UV_pCg}{l8f2VfTUTAmI-JuQJ8CBLA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2016-11-17T20:51:10,197][INFO ][o.e.g.GatewayService ] [sgtummpre815] recovered [1] indices into cluster_state
[2016-11-17T20:51:11,169][INFO ][o.e.c.r.a.AllocationService] [sgtummpre815] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
[2016-11-17T20:51:15,269][INFO ][o.e.h.HttpServer ] [sgtummpre815] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2016-11-17T20:51:15,272][INFO ][o.e.n.Node ] [sgtummpre815] started

Please find the Kibana log:

C:\Elastic\kibana-5.0.0-windows-x86\kibana-5.0.0-windows-x86\bin>kibana.bat
log [16:02:49.699] [info][status][plugin:kibana@5.0.0] Status changed from uninitialized to green - Ready
log [16:02:49.839] [info][status][plugin:elasticsearch@5.0.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [16:02:49.899] [info][status][plugin:console@5.0.0] Status changed from uninitialized to green - Ready
log [16:02:55.153] [error][status][plugin:elasticsearch@5.0.0] Status changed from yellow to red - Request Timeout after 3000ms
log [16:02:55.163] [info][status][plugin:timelion@5.0.0] Status changed from uninitialized to green - Ready
log [16:02:55.183] [info][listening] Server running at http://localhost:5601
log [16:02:55.193] [error][status][ui settings] Status changed from uninitialized to red - Elasticsearch plugin is red
log [16:02:57.748] [info][status][plugin:elasticsearch@5.0.0] Status changed from red to green - Kibana index ready
log [16:02:57.758] [info][status][ui settings] Status changed from red to green - Ready

You are listening on port 2514 (TCP) and 2515 (UDP), not 514.

You can double-check your listeners from a shell; the output should show which protocols/ports you are actually listening on, so you can confirm they match what syslog-ng is sending to.
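For example, something like this (adjust the port filter to whatever your input uses, 2514/2515 in this setup):

On the Windows machine running Logstash:
netstat -an | findstr 2514

On the Linux machine running syslog-ng:
netstat -tulnp | grep 2514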

Yes, I am listening on 2514 and syslog-ng is also sending to 2514.

I changed the port on both ends in case the old port 514 was being used by some other application, so the sender and receiver now both use the new port.

Are you able to telnet to the port from the syslog machine to the Logstash server to test connectivity?
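For example, from the syslog-ng box (hostname and port here are taken from this thread; substitute your real values):

telnet server1 2514

If the connection opens, type a line and press Enter; with the tcp input running you should see an event come through in Logstash (an output like stdout { codec => rubydebug } makes this easy to spot). If telnet cannot connect at all, the problem sits between the two hosts rather than inside Logstash.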

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.