Hi,
I am trying to start topbeat on my machine, but I am getting this error:
Manage-test:/home/admin/beats/topbeat-1.3.1-x86_64# ./topbeat -e topbeat.yml
2016/11/01 03:09:41.819791 transport.go:125: ERR SSL client failed to connect with: read tcp 10.0.0.69:56836->54.214.224.161:5044: i/o timeout
2016/11/01 03:10:23.005931 transport.go:125: ERR SSL client failed to connect with: read tcp 10.0.0.69:56857->54.214.224.161:5044: i/o timeout
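Since the error is an i/o timeout rather than a TLS failure, it may help to first rule out plain TCP reachability of the Logstash port from the Beats host. Here is a minimal Python sketch of that check (the local listener is only a stand-in target; in practice the tuple would be `("kibana.xx.com", 5044)`):

```python
import socket

def probe(host, port, timeout=5.0):
    """Attempt a plain TCP connect. A timeout here points at
    network/firewall reachability, not TLS or the Beats config."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Local listener as a stand-in target; replace with ("kibana.xx.com", 5044)
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
print(probe("127.0.0.1", srv.getsockname()[1]))  # prints True
srv.close()
```

If this probe times out against the real host, the problem is upstream of Beats and Logstash (security group, firewall, or routing), since 54.214.224.161 looks like a cloud-hosted endpoint.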
more topbeat.yml
output:
  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["kibana.xx.com:5044"]

    # Number of workers per Logstash host.
    #worker: 1

    # The maximum number of events to bulk into a single batch window. The
    # default is 2048.
    #bulk_max_size: 2048

    # Set gzip compression level.
    #compression_level: 3

    # Optional load balance the events between the Logstash hosts
    #loadbalance: true

    # For Packetbeat, the default is set to packetbeat, for Topbeat
    # to topbeat and for Filebeat to filebeat.
    #index: topbeat

    # Optional TLS. By default is off.
    tls:
      # List of root certificates for HTTPS server verifications
      #certificate_authorities: ["/etc/pki/root/ca.pem"]
      # Certificate for TLS client authentication
      #certificate: "/etc/pki/client/cert.pem"
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
      # Client Certificate Key
      #certificate_key: "/etc/pki/client/cert.key"
I also started topbeat in debug mode:
# ./topbeat -e -c topbeat.yml -d "*"
2016/11/01 04:40:59.373959 beat.go:156: DBG Initializing output plugins
2016/11/01 04:40:59.374059 geolite.go:24: INFO GeoIP disabled: No paths were set under output.geoip.paths
2016/11/01 04:40:59.375241 logstash.go:106: INFO Max Retries set to: 3
2016/11/01 04:40:59.375280 client.go:100: DBG connect
2016/11/01 04:41:29.560745 transport.go:125: ERR SSL client failed to connect with: read tcp 10.0.0.69:59152->54.214.224.161:5044: i/o timeout
2016/11/01 04:41:29.560786 outputs.go:126: INFO Activated logstash as output plugin.
2016/11/01 04:41:29.560823 publish.go:232: DBG Create output worker
2016/11/01 04:41:29.560883 publish.go:274: DBG No output is defined to store the topology. The server fields might not be filled.
2016/11/01 04:41:29.560927 publish.go:288: INFO Publisher name: vManage-test-PNC-1
2016/11/01 04:41:29.561521 beat.go:168: INFO Init Beat: topbeat; Version: 1.3.1
2016/11/01 04:41:29.561932 topbeat.go:88: DBG Init topbeat
2016/11/01 04:41:29.561974 topbeat.go:89: DBG Follow processes [".*"]
2016/11/01 04:41:29.561986 topbeat.go:90: DBG Period 10s
2016/11/01 04:41:29.561995 topbeat.go:91: DBG System statistics true
2016/11/01 04:41:29.562004 topbeat.go:92: DBG Process statistics true
2016/11/01 04:41:29.562013 topbeat.go:93: DBG File system statistics true
2016/11/01 04:41:29.562022 topbeat.go:94: DBG Cpu usage per core true
2016/11/01 04:41:29.562151 beat.go:194: INFO topbeat sucessfully setup. Start running.
2016/11/01 04:41:39.669662 publish.go:109: DBG Publish: {
"@timestamp": "2016-11-01T04:41:39.667Z",
"beat": {
"hostname": "vManage-test-PNC-1",
"name": "vManage-test-PNC-1"
In the Logstash log file I saw this:
ester@elk:/var/log/logstash$ tail -f logstash.log
{:timestamp=>"2016-10-31T18:42:15.574000+0000", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
{:timestamp=>"2016-10-31T18:43:20.643000+0000", :message=>"CircuitBreaker::rescuing exceptions", :name=>"Beats input", :exception=>LogStash::Inputs::Beats::InsertingToQueueTakeTooLong, :level=>:warn}
{:timestamp=>"2016-10-31T18:43:20.645000+0000", :message=>"Beats input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.",
My Logstash conf file is:
tester@elk:/etc/logstash$ more syslog-elasticsearch.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
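For what it's worth, I sanity-checked the grok pattern in the filter against a sample syslog line using a rough Python-regex approximation, and it matches (this is only a sketch: Python's `re` is not Logstash's Oniguruma engine, and `%{DATA}` is approximated with a character class):

```python
import re

# Rough Python equivalents of the grok patterns used in the filter above
SYSLOGTIMESTAMP = r"[A-Z][a-z]{2} [ \d]\d \d{2}:\d{2}:\d{2}"
pattern = re.compile(
    rf"(?P<syslog_timestamp>{SYSLOGTIMESTAMP}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[^\[:]+)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

# Hypothetical sample line, just to exercise the pattern
line = "Nov  1 04:41:39 vManage-test-PNC-1 sshd[1234]: Accepted publickey for admin"
m = pattern.match(line)
print(m.group("syslog_program"), m.group("syslog_pid"))  # prints: sshd 1234
```

So the grok itself looks fine for syslog-type events; topbeat events would not have `[type] == "syslog"` and should pass through this filter untouched.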
Any help would be appreciated.
Thanks!