I have set up a log server using ELK on an AWS Ubuntu instance, and I need to send syslogs from my production server to it. For that, Filebeat is installed on the production server.
Did you allow connections to this instance from your server/network? Also, turn on logging in Filebeat and set it to debug so you can see what happens. Here's an example logging configuration for Filebeat:
logging:
  to_files: true
  level: debug
You need to create a rule that allows the external IP address(es) of your production server/network to connect to port 5040 on this instance, because none of the protocols you listed is used in this case.
2016-02-23T22:23:18-05:00 DBG Disable stderr logging
2016-02-23T22:23:18-05:00 DBG Initializing output plugins
2016-02-23T22:23:18-05:00 INFO GeoIP disabled: No paths were set under output.geoip.paths
2016-02-23T22:23:18-05:00 INFO Max Retries set to: 3
2016-02-23T22:23:18-05:00 DBG connect
2016-02-23T22:23:18-05:00 INFO Activated logstash as output plugin.
2016-02-23T22:23:18-05:00 DBG Create output worker
2016-02-23T22:23:18-05:00 DBG No output is defined to store the topology. The server fields might not be filled.
2016-02-23T22:23:18-05:00 INFO Publisher name: xdrgprod.ontash.local
2016-02-23T22:23:18-05:00 INFO Flush Interval set to: 1s
2016-02-23T22:23:18-05:00 INFO Max Bulk Size set to: 2048
2016-02-23T22:23:18-05:00 DBG create bulk processing worker (interval=1s, bulk size=2048)
2016-02-23T22:23:18-05:00 INFO Init Beat: filebeat; Version: 1.1.1
2016-02-23T22:23:18-05:00 INFO filebeat sucessfully setup. Start running.
2016-02-23T22:23:18-05:00 INFO Registry file set to: /var/lib/filebeat/registry
2016-02-23T22:23:18-05:00 INFO Loading registrar data from /var/lib/filebeat/registry
2016-02-23T22:23:19-05:00 DBG Set idleTimeoutDuration to 5s
2016-02-23T22:23:19-05:00 DBG File Configs: [/var/log/syslog]
2016-02-23T22:23:19-05:00 INFO Set ignore_older duration to 24h0m0s
2016-02-23T22:23:19-05:00 INFO Set scan_frequency duration to 10s
2016-02-23T22:23:19-05:00 INFO Input type set to: log
2016-02-23T22:23:19-05:00 INFO Set backoff duration to 1s
2016-02-23T22:23:19-05:00 INFO Set max_backoff duration to 10s
2016-02-23T22:23:19-05:00 INFO force_close_file is disabled
2016-02-23T22:23:19-05:00 DBG Waiting for 1 prospectors to initialise
2016-02-23T22:23:19-05:00 INFO Starting prospector of type: log
2016-02-23T22:23:19-05:00 DBG exclude_files: []
2016-02-23T22:23:19-05:00 DBG scan path /var/log/syslog
2016-02-23T22:23:19-05:00 DBG Check file for harvesting: /var/log/syslog
2016-02-23T22:23:19-05:00 DBG Start harvesting unknown file: /var/log/syslog
2016-02-23T22:23:19-05:00 DBG Launching harvester on new file: /var/log/syslog
2016-02-23T22:23:19-05:00 DBG scan path /var/log/syslog
2016-02-23T22:23:19-05:00 DBG Check file for harvesting: /var/log/syslog
2016-02-23T22:23:19-05:00 DBG Update existing file for harvesting: /var/log/syslog
2016-02-23T22:23:19-05:00 DBG Not harvesting, file didn't change: /var/log/syslog
2016-02-23T22:23:19-05:00 DBG No pending prospectors. Finishing setup
2016-02-23T22:23:19-05:00 INFO All prospectors initialised with 0 states to persist
2016-02-23T22:23:19-05:00 INFO Starting Registrar
2016-02-23T22:23:19-05:00 INFO Start sending events to output
2016-02-23T22:23:19-05:00 DBG harvest: "/var/log/syslog" (offset snapshot:0)
2016-02-23T22:23:19-05:00 INFO Harvester started for file: /var/log/syslog
2016-02-23T22:23:19-05:00 DBG full line read
2016-02-23T22:23:19-05:00 DBG full line read
Can you try to telnet to the ELK server on port 5040 and see if you can connect from the server you are sending logs from? Also, I've just noticed in your config file that you are NOT using beats as the input:
input {
  tcp {
    port => 5040
    type => 'syslog'
  }
}
If you are sending logs via Filebeat, it should look like this:
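Presumably something along these lines — a beats input instead of tcp (port 5044 is the conventional Beats port; the exact port here is an assumption, adjust it to your setup):

```
input {
  beats {
    port => 5044
    type => 'syslog'
  }
}
```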
Sorry, I've run into a new issue with Kibana.
Kibana is not responding now. I can start Kibana and it shows 'started', but when I check the status, it says Kibana is not running.
I checked the Kibana log, but there is nothing relevant to this case.
@ruflin I already posted the Kibana issue in the Kibana forum.
That was a disk-full error on the AWS EC2 instance. I reinstalled Elasticsearch and now Kibana is running.
But the issue in this thread still exists: I can't see any logs in the Kibana view. I made some modifications to the Logstash configuration.
input {
  beats {
    type => 'xdrgprod'
    port => 5044
    codec => json {
      charset => 'UTF-8'
    }
  }
}
filter {
  if [type] == 'xdrgprod' {
    grok {
      match => ['TimeCreated', "Date\(%{NUMBER:timestamp}\)"]
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  if [type] == 'xdrgprod' {
    elasticsearch {
      hosts => ["172.30.0.28:9200"]
    }
    stdout { codec => rubydebug }
  }
}
And logstash.log
{:timestamp=>"2016-02-29T09:12:39.156000+0000", :message=>"JSON parse failure. Falling back to plain-text", :error=>#<LogStash::Json::ParserError: Unrecognized token 'Feb': was expecting ('true', 'false' or 'null')
at [Source: [B@6d2b888a; line: 1, column: 5]>, :data=>"Feb 28 16:47:21 xdrgprod /usr/bin/filebeat[19210]: reader.go:138: End of file reached: /var/log/mysql.log; Backoff now.", :level=>:error}
{:timestamp=>"2016-02-29T09:12:39.156000+0000", :message=>"JSON parse failure. Falling back to plain-text", :error=>#<LogStash::Json::ParserError: Unrecognized token 'Feb': was expecting ('true', 'false' or 'null')
at [Source: [B@21bb5db7; line: 1, column: 5]>, :data=>"Feb 28 16:47:24 xdrgprod /usr/bin/filebeat[19210]: prospector.go:179: Start next scan", :level=>:error}
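These failures are consistent with a JSON codec being applied to plain syslog text: the parser rejects the very first token of the line. A quick illustration in Python (Logstash uses its own JSON parser internally, so this is only an analogy):

```python
import json

# A plain syslog line, like the ones in the error output above.
line = "Feb 28 16:47:24 xdrgprod /usr/bin/filebeat[19210]: prospector.go:179: Start next scan"

try:
    json.loads(line)
except json.JSONDecodeError as e:
    # Fails immediately: 'Feb' is not a valid JSON token,
    # mirroring the "Unrecognized token 'Feb'" error above.
    print(f"JSON parse failure: {e}")
```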
This last message is about Logstash being shut down due to a SIGTERM. Is the server being shut down/rebooted? Is anyone killing or restarting Logstash?
Let's start with a very minimal config: Filebeat forwarding to Logstash, and Logstash printing to the console. We reconfigure the Filebeat registry file for testing purposes. The registry_test file should be deleted after every single run of Filebeat to keep the tests reproducible.
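A sketch of such a minimal pair of configs, in the Filebeat 1.x config layout used in this thread (the paths, host, and port are placeholders to adapt):

```
# filebeat.yml -- minimal test configuration
filebeat:
  prospectors:
    - paths:
        - /var/log/syslog
      input_type: log
  registry_file: /var/lib/filebeat/registry_test  # delete between runs
output:
  logstash:
    hosts: ["<logstash-host>:5044"]
```

```
# logstash config -- beats in, console out, no filters
input { beats { port => 5044 } }
output { stdout { codec => rubydebug } }
```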
I tried your way with a different port number. The registry file is
registry_file: /var/lib/filebeat/registry
But still no luck.
logstash log files...
logstash.log
{:timestamp=>"2016-03-01T07:25:11.879000+0000", :message=>"Beats Input: Remote connection closed", :peer=>"68.236.192.171:35429", :exception=>#<Lumberjack::Beats::Connection::ConnectionClosed: Lumberjack::Beats::Connection::ConnectionClosed wrapping: EOFError, End of file reached>, :level=>:warn}
logstash.log.1
{:timestamp=>"2016-03-01T05:14:38.264000+0000", :message=>"Beats input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.", :exception=>LogStash::Inputs::BeatsSupport::CircuitBreaker::HalfOpenBreaker, :level=>:warn}
{:timestamp=>"2016-03-01T05:14:44.721000+0000", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}