Filebeat (latest alpha version): ERR Failed to publish events caused by: read tcp


(Ravi) #1

Filebeat cannot publish events. Can someone advise me on how to proceed?

filebeat.yml

# Deleted actual paths in post 
filebeat:
  prospectors:
    -
      paths:
      - /var/log/t.log
      - /var/log/puppet/puppet.log
      - /var/log/messages
      - /var/log.log
      - /var/.log
      - /opt/error.log
      - /opt/.log
      - /opt/t.log
      - /opt/log

      input_type: log

      document_type: syslog




registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["10.251.33.130:5044"]

logging:
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat
  level: error

Logstash input configuration

input {
  beats {
    port => 5044
    type => "syslog"
    ssl => true                                           # Commented
    ssl_certificate => "/etc/pki/tls/certs/filebeat.crt"  # Commented
    ssl_key => "/etc/pki/tls/private/filebeat.key"        # Commented
  }
}

Error output from the client server, which is running CentOS 6.5:

 tail -f /var/log/filebeat/filebeat
2016-10-25T19:22:49Z INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_errors=1 libbeat.logstash.published_but_not_acked_events=2043
2016-10-25T19:23:19Z INFO No non-zero metrics in the last 30s
2016-10-25T19:23:49Z INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.write_bytes=361 libbeat.logstash.call_count.PublishEvents=1
2016-10-25T19:23:53Z ERR Failed to publish events caused by: read tcp 10.251.171.238:51568->10.251.33.130:5044: i/o timeout
2016-10-25T19:23:53Z INFO Error publishing events (retrying): read tcp 10.251.171.238:51568->10.251.33.130:5044: i/o timeout
2016-10-25T19:24:19Z INFO Non-zero metrics in the last 30s: libbeat.logstash.published_but_not_acked_events=2043 libbeat.logstash.publish.read_errors=1
2016-10-25T19:24:49Z INFO No non-zero metrics in the last 30s
2016-10-25T19:25:19Z INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.write_bytes=366 libbeat.logstash.call_count.PublishEvents=1
2016-10-25T19:25:23Z ERR Failed to publish events caused by: read tcp 10.251.171.238:51783->10.251.33.130:5044: i/o timeout
2016-10-25T19:25:23Z INFO Error publishing events (retrying): read tcp 10.251.171.238:51783->10.251.33.130:5044: i/o timeout
2016-10-25T19:25:49Z INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_errors=1 libbeat.logstash.published_but_not_acked_events=2043
2016-10-25T19:26:19Z INFO No non-zero metrics in the last 30s
2016-10-25T19:26:49Z INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=1 libbeat.logstash.publish.write_bytes=367
2016-10-25T19:26:53Z ERR Failed to publish events caused by: read tcp 10.251.171.238:52008->10.251.33.130:5044: i/o timeout
2016-10-25T19:26:53Z INFO Error publishing events (retrying): read tcp 10.251.171.238:52008->10.251.33.130:5044: i/o timeout
2016-10-25T19:27:19Z INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_errors=1 libbeat.logstash.published_but_not_acked_events=2043
2016-10-25T19:27:49Z INFO No non-zero metrics in the last 30s
2016-10-25T19:28:19Z INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.write_bytes=366 libbeat.logstash.call_count.PublishEvents=1
2016-10-25T19:28:23Z ERR Failed to publish events caused by: read tcp 10.251.171.238:52226->10.251.33.130:5044: i/o timeout
2016-10-25T19:28:23Z INFO Error publishing events (retrying): read tcp 10.251.171.238:52226->10.251.33.130:5044: i/o timeout
2016-10-25T19:28:49Z INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_errors=1 libbeat.logstash.published_but_not_acked_events=2043
2016-10-25T19:29:19Z INFO No non-zero metrics in the last 30s
2016-10-25T19:29:49Z INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=1 libbeat.logstash.publish.write_bytes=367
2016-10-25T19:29:53Z ERR Failed to publish events caused by: read tcp 10.251.171.238:52443->10.251.33.130:5044: i/o timeout
2016-10-25T19:29:53Z INFO Error publishing events (retrying): read tcp 10.251.171.238:52443->10.251.33.130:5044: i/o timeout
2016-10-25T19:30:19Z INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_errors=1 libbeat.logstash.published_but_not_acked_events=2043
2016-10-25T19:30:49Z INFO No non-zero metrics in the last 30s
2016-10-25T19:31:19Z INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=1 libbeat.logstash.publish.write_bytes=416
2016-10-25T19:31:23Z ERR Failed to publish events caused by: read tcp 10.251.171.238:52659->10.251.33.130:5044: i/o timeout
2016-10-25T19:31:23Z INFO Error publishing events (retrying): read tcp 10.251.171.238:52659->10.251.33.130:5044: i/o timeout

(ruflin) #2

Can you check manually if you can reach your LS host from the filebeat machine?


(Ravi) #3

Thanks for reply

[root@localhost]# tracepath 10.251.33.130/5044
1: localhost (10.251.171.238) 0.045ms pmtu 9000
1: 10.251.171.237 (10.251.171.237) 0.942ms
1: 10.251.171.237 (10.251.171.237) 0.911ms
2: 10.252.14.57 (10.252.14.57) 0.896ms
3: 10.252.15.201 (10.252.15.201) 0.884ms
4: 10.252.19.77 (10.252.19.77) 0.923ms
5: 10.252.19.82 (10.252.19.82) 1.253ms
6: 10.252.19.86 (10.252.19.86) 2.005ms
7: te-5-2-cr01.xxxxxxxx (10.252.19.122) 3.576ms
8: te-5-3-cr01.xxxxxxx (10.252.19.29) 3.800ms pmtu 8192
8: te-5-4-cc38ula-01.cc38.xxxxxxxt (10.252.19.30) 3.739ms
9: te-5-4-cc38ula-01.cc38.xxxxxx (10.252.19.30) 3.713ms pmtu 1500
9: 10.252.17.74 (10.252.17.74) 3.232ms
10: logstashhost (10.251.33.130) 2.969ms reached
Resume: pmtu 1500 hops 10 back 55
[root@localhost]#


(ruflin) #4

You defined some certificates on the Logstash side, but not on the beats side?


(Ravi) #5

Actually, those lines are commented out on the Logstash side. Pasting the config with all the # characters makes the post huge, so I just marked those three lines with "# Commented".


(ruflin) #6

Not sure I get your answer. You posted on the Logstash forum? If yes, please link it here so people know about it.


(Steffen Siering) #7

Please disable SSL on the Logstash side and try to connect via telnet.
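
A quick way to check that from the Filebeat machine (using the host and port from the configs above) is a plain TCP connection test, for example:

telnet 10.251.33.130 5044
# or, if telnet is not installed:
nc -vz 10.251.33.130 5044

"Connected to 10.251.33.130" means the port is reachable; "Connection refused" means nothing is listening there or a firewall is rejecting the connection.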


(Ravi) #8

input {
beats {
port => 5044
type => "syslog"

}
}

output {
elasticsearch {
hosts => ["10.251.33.130:9200"]
sniffing => true
manage_template => false
index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[@metadata][type]}"
}
}


(Steffen Siering) #9

What's this config supposed to do?


(Ravi) #10

Thanks for getting back to me. I'm trying to collect a couple of log paths from the client server through Filebeat 5, as mentioned above. I made sure there is a proper connection between the two machines; tracepath and telnet were both successful.

filebeat.yml

filebeat:
  prospectors:
    -
      paths:
      - /var/log/teakd.log
      - /var/log/puppet/puppet.log
      - /var/log/messages
      - /var/log/xmt.log
      - /var/log/watchdog_teak.log
      - /opt/trafficserver/var/log/trafficserver/error.log
      - /opt/trafficserver/var/log/trafficserver/manager.log
      - /opt/trafficserver/var/log/trafficserver/custom_ats_2.log
      - /opt/trafficserver/var/log/trafficserver/diags.log

      input_type: log

      document_type: syslog




registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["10.251.33.130:5044"]

logging:
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat
  level: error

Logstash:

input {
  beats {
    port => 5044
    type => "syslog"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
    grok {
        match => {
          "hostname" => "%{DATA}-%{DATA}-%{DATA:site}-%{DATA}"
        }
    }

  }
}
output {
  elasticsearch {
    hosts => ["10.251.33.130:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
} 

Logs from client server

Starting filebeat: 2016/10/31 16:12:18.745787 beat.go:264: INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2016/10/31 16:12:18.745806 beat.go:174: INFO Setup Beat: filebeat; Version: 5.0.0
2016/10/31 16:12:18.745855 logstash.go:90: INFO Max Retries set to: 3
2016/10/31 16:12:18.745887 logp.go:219: INFO Metrics logging every 30s
2016/10/31 16:12:18.745900 outputs.go:106: INFO Activated logstash as output plugin.
2016/10/31 16:12:18.745958 publish.go:291: INFO Publisher name: filebeat
2016/10/31 16:12:18.746801 async.go:63: INFO Flush Interval set to: 1s
2016/10/31 16:12:18.746812 async.go:64: INFO Max Bulk Size set to: 2048
Config OK
                                                           [  OK  ]
[root@ccdn-ats-tk-40406-01 rguttu001c]# tail -f /var/log/filebeat/filebeat
2016-10-31T16:12:18Z INFO Starting prospector of type: log
2016-10-31T16:12:18Z INFO Harvester started for file: /var/log/teakd.log
2016-10-31T16:12:18Z INFO Harvester started for file: /var/log/puppet/puppet.log
2016-10-31T16:12:18Z INFO Harvester started for file: /var/log/messages
2016-10-31T16:12:18Z INFO Harvester started for file: /opt/trafficserver/var/log/trafficserver/custom_ats_2.log
2016-10-31T16:12:18Z INFO Harvester started for file: /var/log/xmt.log
2016-10-31T16:12:18Z ERR Connecting error publishing events (retrying): dial tcp 10.251.33.130:5044: getsockopt: connection refused
2016-10-31T16:12:19Z ERR Connecting error publishing events (retrying): dial tcp 10.251.33.130:5044: getsockopt: connection refused
2016-10-31T16:12:21Z ERR Connecting error publishing events (retrying): dial tcp 10.251.33.130:5044: getsockopt: connection refused
2016-10-31T16:12:25Z ERR Connecting error publishing events (retrying): dial tcp 10.251.33.130:5044: getsockopt: connection refused
2016-10-31T16:12:33Z ERR Connecting error publishing events (retrying): dial tcp 10.251.33.130:5044: getsockopt: connection refused
2016-10-31T16:12:48Z INFO Non-zero metrics in the last 30s: filebeat.harvester.started=5 filebeat.harvester.open_files=5 libbeat.publisher.published_events=2034 filebeat.harvester.running=5
2016-10-31T16:12:49Z ERR Connecting error publishing events (retrying): dial tcp 10.251.33.130:5044: getsockopt: connection refused

(Steffen Siering) #11

From which machine did you run telnet and traceroute?

"Connection refused" happens when the server is not accepting a TCP connection from the client, e.g. if the service is not running or a firewall is blocking the connection.
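
On the Logstash host you can check both of those directly (a rough sketch; commands assume CentOS 6-era tooling and a standard Logstash service install):

netstat -tlnp | grep 5044    # is anything listening on the beats port?
service logstash status      # is the Logstash service actually running?
iptables -L -n               # any firewall rules that could reject port 5044?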

Also check the syntax of your config files. At least the filebeat one looks a little off.
E.g. it should be filebeat.registry_file, not registry_file. The logging config seems out of place too.
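
For reference, a minimal Filebeat skeleton along those lines might look like this (a sketch using two of the paths you listed, assuming Filebeat 5.x option names):

filebeat:
  prospectors:
    - input_type: log
      document_type: syslog
      paths:
        - /var/log/messages
        - /opt/trafficserver/var/log/trafficserver/error.log
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["10.251.33.130:5044"]

logging:
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat
  level: error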

For logstash maybe start with a very simple config to verify data are pushed:

input {
  beats {
    port => 5044
  }
}
output {
  stdout {}
}
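
Running Logstash in the foreground with just that file makes it easy to see whether events arrive (the binary and config paths below are only examples; they depend on how Logstash was installed):

bin/logstash -f /etc/logstash/conf.d/simple-beats.conf

If Filebeat can connect, the raw events should be printed to the console.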

(Ravi) #12

From the client server I tried to telnet to the Logstash server, tried the simple Logstash configuration, and flushed all the firewall rules on both servers.


(Steffen Siering) #13

Any results? Can you give detailed information about what exactly you tried, plus the outcome of each experiment?

To be honest, with the information you've posted so far, and without explanations, I have almost no idea what exactly you're doing. That makes it very hard to help.


(Ravi) #14

Hi steffens, thanks so much for following up. I was able to resolve the issue by changing the Logstash output.

output {
  elasticsearch {
    hosts => ["10.251.33.130:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

to

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["10.251.33.130:9200"]
  }
}

Then I selected logstash as the index in Kibana, but I'm not sure what exactly happened with the changes above.

Thanks again..


(Steffen Siering) #15

That's weird. So it fails when you add the index and document_type options to your elasticsearch output? Maybe there is a problem with the mapping in Elasticsearch rejecting documents due to type mismatches (check the Elasticsearch and Logstash logs).
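
One way to check that (the index pattern below assumes the default filebeat-* naming from your first output) is to list the indices and look at the mapping Elasticsearch created:

curl 'http://10.251.33.130:9200/_cat/indices?v'
curl 'http://10.251.33.130:9200/filebeat-*/_mapping?pretty'

and to grep the Logstash and Elasticsearch logs for mapping or bulk-rejection errors.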


(system) #16

This topic was automatically closed after 21 days. New replies are no longer allowed.