Error- read: connection reset by peer


(Amit) #1

Hi,

I am using filebeat with Logstash output. Please find below config files:

filebeat.yml

```
#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.

# Below are the prospector specific configurations.

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /abc/xyz/authenticateb_metrics.log*

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  hosts: ["10.xx.xx.xx:96xx"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
```


Logstash config file:

```
input {
  beats {
    type => abc
    port => 96xx
    client_connectivity_timeout => 120
  }
}

filter {
  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{GREEDYDATA}%{SPACE}TrxId:%{UUID:id}%{SPACE}%{GREEDYDATA}%{SPACE}%{WORD}.%{WORD:method}.%{WORD:class} %{WORD}: %{INT:milliseconds}"]
  }
  mutate {
    convert => {"milliseconds" => "integer"}
  }
  date {
    locale => "en"
    match  => [ "timestamp", "yyyy-MM-dd mm:ss:SS,ZZZ", "ISO8601" ]
    target => "timestamp"
  }
}

output {
  elasticsearch {
    hosts => ["10.xx.xx.xx:92xx", "10.xx.xx.xx:92xx"]
    index => "xyz-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
```
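As a side note, the grok pattern in the filter can be sanity-checked outside Logstash with an equivalent regular expression. Below is a minimal Python sketch; the sample log line and every field value in it are invented for illustration, and the regex only approximates the named grok patterns (`TIMESTAMP_ISO8601`, `UUID`, etc.):

```python
import re

# Hypothetical log line shaped the way the grok pattern expects:
# timestamp, free text, "TrxId:<uuid>", more text, "pkg.method.Class took: <int>"
line = ("2017-10-05 13:45:02,123 request TrxId:"
        "123e4567-e89b-12d3-a456-426655440000 served by "
        "auth.login.Handler took: 42")

# Rough regex equivalent of the grok expression in the filter block
pattern = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2},\d+)\s+"
    r".*?TrxId:(?P<id>[0-9a-fA-F-]{36})\s+"
    r".*?\s(?P<pkg>\w+)\.(?P<method>\w+)\.(?P<cls>\w+) \w+: "
    r"(?P<milliseconds>\d+)"
)

m = pattern.match(line)
fields = m.groupdict()
fields["milliseconds"] = int(fields["milliseconds"])  # mirrors the mutate/convert step
print(fields)
```

A real line from `authenticateb_metrics.log` that fails to match here would also fail the grok filter and reach Elasticsearch tagged `_grokparsefailure`, which is worth checking once events start flowing.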

I am getting error (/var/log/filebeat) :

```
2017-10-05T13:45:02-07:00 ERR Connecting error publishing events (retrying): Get http://10.xx.xx.xx:96xx: read tcp 10.xx.xx.xx:51576->10.xx.xx.xx:xxxx: read: connection reset by peer
2017-10-05T13:45:31-07:00 INFO Non-zero metrics in the last 30s: libbeat.es.publish.read_errors=1 libbeat.es.publish.write_bytes=124
2017-10-05T13:45:34-07:00 ERR Connecting error publishing events (retrying): Get http://10.xx.xx.xx:96xx: read tcp 10.xx.xx.xx:51588->10.xx.xx.xx:96xx: read: connection reset by peer
2017-10-05T13:46:01-07:00 INFO Non-zero metrics in the last 30s: libbeat.es.publish.read_errors=1 libbeat.es.publish.write_bytes=124
```

I have configured the same port in both the Filebeat and Logstash config files.

Please help me to solve this...

Thanks!
-Amit


(Mark Walkom) #2

Can you please edit your post and use the </> button to properly format the various code sections? It's really hard to read as it is :slight_smile:


(Amit) #3


filebeat.yml

```
#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.

# Below are the prospector specific configurations.

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /abc/xyz/authenticateb_metrics.log*

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  hosts: ["10.xx.xx.xx:96xx"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
```


Logstash config file:

```
input {
  beats {
    type => abc
    port => 96xx
    client_connectivity_timeout => 120
  }
}

filter {
  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{GREEDYDATA}%{SPACE}TrxId:%{UUID:id}%{SPACE}%{GREEDYDATA}%{SPACE}%{WORD}.%{WORD:method}.%{WORD:class} %{WORD}: %{INT:milliseconds}"]
  }
  mutate {
    convert => {"milliseconds" => "integer"}
  }
  date {
    locale => "en"
    match  => [ "timestamp", "yyyy-MM-dd mm:ss:SS,ZZZ", "ISO8601" ]
    target => "timestamp"
  }
}

output {
  elasticsearch {
    hosts => ["10.xx.xx.xx:92xx", "10.xx.xx.xx:92xx"]
    index => "xyz-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
```


I am getting error (/var/log/filebeat) :

```
2017-10-05T13:45:02-07:00 ERR Connecting error publishing events (retrying): Get http://10.xx.xx.xx:96xx: read tcp 10.xx.xx.xx:51576->10.xx.xx.xx:xxxx: read: connection reset by peer
2017-10-05T13:45:31-07:00 INFO Non-zero metrics in the last 30s: libbeat.es.publish.read_errors=1 libbeat.es.publish.write_bytes=124
2017-10-05T13:45:34-07:00 ERR Connecting error publishing events (retrying): Get http://10.xx.xx.xx:96xx: read tcp 10.xx.xx.xx:51588->10.xx.xx.xx:96xx: read: connection reset by peer
2017-10-05T13:46:01-07:00 INFO Non-zero metrics in the last 30s: libbeat.es.publish.read_errors=1 libbeat.es.publish.write_bytes=124
```

I have configured the same port in both the Filebeat and Logstash config files.

Please help me to solve this...

Thanks!
-Amit


(Amit) #4

Thanks for reply!


(Amit) #5

Please help me, I am still not able to solve it.

Thanks!


(ruflin) #6
  • Do you see any errors in the Logstash log?
  • Are some of the events published to Logstash or none?
  • What does your setup look like? Do you have a Load balancer or similar between Filebeat and Logstash?
  • Can you share your FB/LS/ES version?

(Amit) #7

Hi Ruflin,

Thanks for reply!

No events are published to Logstash, and I didn't find any errors in the Logstash log.

I am not using a load balancer; the setup is Filebeat -> Logstash -> Elasticsearch.

Logstash version: 5.4.3, Elasticsearch version: 5.5.1, Filebeat version: 5.6.2 (amd64)


(ruflin) #8

Could you simplify your LS config to just the beats input writing to disk, leaving out the filter and elasticsearch parts, and then check whether any events come in? This is to exclude other potential problems.
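A minimal sketch of such a stripped-down pipeline might look like this (the port placeholder is from your config, the output path is illustrative, and the standard `file` output plugin is assumed to be available):

```
input {
  beats {
    port => 96xx
  }
}

output {
  file {
    path => "/tmp/beats_debug.log"
  }
  stdout { codec => rubydebug }
}
```

If events show up in the file with this config, the problem would lie in the filter/output half of your original pipeline rather than in the beats input itself.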

Could you share the full filebeat log?


(system) #9

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.