Filebeat log sending problem

Server Side
```
#java -version
java version "1.8.0_101"

#elasticsearch version

#bin/kibana --version

#bin/logstash --version
logstash 5.0.0-beta1
```


Logstash config:

```
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```

```
#bin/logstash -f /etc/logstash/conf.d/ --config.test_and_exit
Configuration OK

#bin/logstash-plugin install logstash-input-beats
Installation successful

#bin/logstash-plugin update logstash-input-beats
Updated logstash-input-beats 3.1.4 to 3.1.6

#bin/logstash-plugin install logstash-output-elasticsearch
Installation successful

#bin/logstash-plugin update logstash-output-elasticsearch
Updated logstash-output-elasticsearch 5.1.1 to 5.1.2
```


Logstash log:

```
[2016-10-05T14:19:50,717][WARN ][logstash.outputs.elasticsearch] Elasticsearch output attempted to sniff for new connections but cannot. No living connections are detected. Pool contains the following current URLs {:url_info=>{}}
[2016-10-05T14:19:51,952][ERROR][] Exception: not an SSL/TLS record: 325700000001324300000..................f7000000ffffbf7b794e
[2016-10-05T14:19:55,718][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
```

Client Side

```
#bin/filebeat --version
filebeat version 5.0.0-beta1 (amd64), libbeat 5.0.0-beta1
```

filebeat.yml:

```
filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
      #  - /var/log/*.log
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["my_elk_server_ip:5044"]
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
```

```
#bin/filebeat -c /etc/filebeat/filebeat.yml -e -v

2016/10/05 08:24:23.710760 beat.go:204: INFO filebeat start running.
2016/10/05 08:24:23.710783 registrar.go:66: INFO Registry file set to: /var/lib/filebeat/registry
2016/10/05 08:24:23.710824 registrar.go:99: INFO Loading registrar data from /var/lib/filebeat/registry
2016/10/05 08:24:23.711077 prospector.go:106: INFO Starting prospector of type: log
2016/10/05 08:24:23.711312 log.go:60: INFO Harvester started for file: /var/log/syslog
2016/10/05 08:24:23.711406 spooler.go:64: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2016/10/05 08:24:23.711432 registrar.go:178: INFO Starting Registrar
2016/10/05 08:24:23.711473 sync.go:41: INFO Start sending events to output
2016/10/05 08:24:23.716637 log.go:60: INFO Harvester started for file: /var/log/auth.log
2016/10/05 08:24:23.747492 sync.go:85: ERR Failed to publish events caused by: EOF
2016/10/05 08:24:23.747737 single.go:91: INFO Error publishing events (retrying): EOF
2016/10/05 08:24:53.710713 logp.go:230: INFO Non-zero metrics in the last 30s: filebeat.harvester.running=2 filebeat.harvester.open_files=2 filebeat.harvester.started=2 libbeat.logstash.published_but_not_acked_events=5120 libbeat.logstash.call_count.PublishEvents=5 libbeat.logstash.publish.write_bytes=2305 libbeat.publisher.published_events=2046 libbeat.logstash.publish.read_errors=5
```

Can anybody help me with this problem?
Thanks in advance.

The post is quite hard to read. Can you format it properly? Beats config files need proper indentation; without formatting it's hard to see whether something is wrong there.

Nothing to see in the filebeat logs besides EOF. But check your Logstash logs: it's also complaining about the elasticsearch output not working. Could it be the Logstash pipeline is blocked due to a blocking output?

```
[WARN ][logstash.outputs.elasticsearch] Elasticsearch output attempted to sniff for new connections but cannot. No living connections are detected. Pool contains the following current URLs {:url_info=>{}}
[WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}

[ERROR][] Exception: not an SSL/TLS record: 32570000000132430000010a785e5c90c14ac3401086d33759fe739aec26da968542bd7a160f1a916dbb491792ddb0333196d27797d428e871be19be99f98bc724491649b2385db0eb2c9ba361037d019f7b0b0d3a531b1aa4d85bc3d0a85d6bf7d630ae29425d93656855aa14ffe79def077e9f697b537496c83493f5a18fa2d808b5d1b2d0b210c37ef03c88f8bdeda8c56b88ae715e50a87934d16e2bfc342b081a9f6d2417fcb6c22653ab4c56109fcbde1d27a0ee6e95f375d8563831f73acfc771cc664376085d8537416c22ff7e768137dd74dba175d64ffc1488ffb26b8a1dbbce129bae874621d56aa9e452de3fc9b52ea52ecb6cb52e5e9082c2100f932eff30316f4393cf495ebf020000ffffc7b57586

[INFO ][o.e.n.Node ] [rTbwjik] started
[INFO ][o.e.g.GatewayService ] [rTbwjik] recovered [1] indices into cluster_state
[INFO ][o.e.c.r.a.AllocationService] [rTbwjik] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][3]] ...]).
```

Looks like a TLS/SSL issue. The setting is no longer called tls in the 5.0 release; it's now ssl. You can use the migration script in your download package under scripts to migrate the config.
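For what it's worth, the hex dump in that error decodes as a plaintext Beats (lumberjack v2) frame rather than a TLS record, which fits this diagnosis: Filebeat was sending unencrypted data to a port where Logstash expected TLS. A quick sketch, assuming lumberjack v2 framing (an ASCII protocol version byte followed by a frame-type byte):

```python
# First 8 bytes of the hex dump from the "not an SSL/TLS record" error.
payload = bytes.fromhex("3257000000013243")

# Lumberjack v2 frames start with the protocol version and a frame type.
print(chr(payload[0]), chr(payload[1]))     # -> 2 W  (window-size frame)
print(int.from_bytes(payload[2:6], "big"))  # -> 1    (window size)
print(chr(payload[6]), chr(payload[7]))     # -> 2 C  (compressed frame follows)
```

A genuine TLS ClientHello would instead begin with byte 0x16 (handshake record), so Logstash rejects this immediately.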

Can you please format your posts with three backticks (```) before and after to make them more readable, as @steffens requested before?

With 5.0.0-beta1 the tls section has been renamed to ssl for consistency with the other projects in the Elastic Stack.
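Concretely, for the logstash output this means the old 1.x-style `tls` block becomes `ssl` in 5.0. A sketch using the host and certificate path from this thread (not a complete config):

```yaml
# filebeat.yml, 5.0 syntax
output.logstash:
  hosts: ["my_elk_server_ip:5044"]
  bulk_max_size: 1024
  ssl:   # was `tls:` in 1.x-era configs; an unrecognized `tls:` block is ignored,
         # so Filebeat connects in plaintext and Logstash sees "not an SSL/TLS record"
    certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
```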

How can I use the migration script?

```
#cd /usr/share/filebeat/scripts

usage: [-h] [--dry] file
error: too few arguments
```

Thanks, all, for replying.

You just printed out the usage docs.

I can only repeat: Please format your posts.
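The usage line above says the script takes the config file as a positional argument, plus an optional `--dry` flag. Assuming the migration script shipped under `/usr/share/filebeat/scripts` (its exact filename isn't shown in this thread, so the name below is a placeholder), the invocation would look something like:

```
cd /usr/share/filebeat/scripts
# --dry prints the migrated config instead of modifying the file in place
python <migration-script>.py --dry /etc/filebeat/filebeat.yml
```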

This topic was automatically closed after 21 days. New replies are no longer allowed.