kvaga
January 18, 2019, 11:53am
Client: filebeat-6.5.4-1.x86_64
Logstash server: logstash-6.5.4-1.noarch
Logs from the client:
2019-01-18T13:56:20.033+0300 ERROR logstash/async.go:256 Failed to publish events caused by: write tcp 10.129.10.8:34714->10.129.10.7:5044: write: connection reset by peer
2019-01-18T13:56:20.033+0300 DEBUG [logstash] logstash/async.go:116 close connection
2019-01-18T13:56:21.033+0300 ERROR pipeline/output.go:121 Failed to publish events: write tcp 10.129.10.8:34714->10.129.10.7:5044: write: connection reset by peer
2019-01-18T13:56:21.033+0300 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://10.129.10.7:5044))
2019-01-18T13:56:21.033+0300 DEBUG [logstash] logstash/async.go:111 connect
2019-01-18T13:56:21.035+0300 INFO pipeline/output.go:105 Connection to backoff(async(tcp://10.129.10.7:5044)) established
2019-01-18T13:56:21.035+0300 DEBUG [logstash] logstash/async.go:159 3 events out of 3 events sent to logstash host 10.129.10.7:5044. Continue sending
Logs from the Logstash server:
[2019-01-18T13:56:21,035][DEBUG][org.logstash.beats.ConnectionHandler] 1c400199: batches pending: true
[2019-01-18T13:56:21,038][DEBUG][logstash.codecs.plain ] config LogStash::Codecs::Plain/@id = "plain_f6bf0dc8-6c79-448d-9431-02e244c9471a"
[2019-01-18T13:56:21,038][DEBUG][logstash.codecs.plain ] config LogStash::Codecs::Plain/@enable_metric = true
[2019-01-18T13:56:21,039][DEBUG][logstash.codecs.plain ] config LogStash::Codecs::Plain/@charset = "UTF-8"
[2019-01-18T13:56:21,047][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.129.10.7:5044, remote: 10.129.10.8:34723] Received a new payload
[2019-01-18T13:56:21,047][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.129.10.7:5044, remote: 10.129.10.8:34723] Sending a new message for the listener, sequence: 1
[2019-01-18T13:56:21,050][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.129.10.7:5044, remote: 10.129.10.8:34723] Sending a new message for the listener, sequence: 2
[2019-01-18T13:56:21,051][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.129.10.7:5044, remote: 10.129.10.8:34723] Sending a new message for the listener, sequence: 3
[2019-01-18T13:56:21,053][DEBUG][org.logstash.beats.BeatsHandler] 1c400199: batches pending: false
No errors appear when Logstash starts.
Tek_Chand
(Tek Chand)
January 18, 2019, 12:18pm
@kvaga,
Can you please share your filebeat.yml configuration?
Thanks.
kvaga
January 18, 2019, 12:23pm
filebeat.yml
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/app/log/tomcat/access*.log
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.129.10.7:5044"]
#================================ Processors =====================================
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
Tek_Chand
January 18, 2019, 12:29pm
@kvaga,
Your configuration seems fine. Please check the connectivity from your Filebeat system to the Logstash server using the command below:
telnet your_logstash_server_ip 5044
Also try increasing client_inactivity_timeout in your Logstash beats input.
Thanks.
kvaga
January 18, 2019, 12:34pm
# telnet kibana 5044
Trying 10.20.44.23...
Connected to kibana.
Escape character is '^]'.
logstash.conf
input {
  beats {
    client_inactivity_timeout => 3000
    port => 5044
    ssl => false
  }
}
Watching...
kvaga
January 18, 2019, 12:54pm
Now the error is gone (I hope).
How can I migrate my Filebeat 5.5 config to version 6.5?
prospectors:
  -
    paths:
      - /opt/app/log/tomcat/access*.log
    input_type: log
    document_type: access_log_stage
  -
    paths:
      - /opt/app/log/tomcat/console*.log
    input_type: log
    document_type: console_log_stage
    multiline:
      pattern: '^*\|[[:space:]]*at|^*\|[[:space:]]Caused by:|^*\|[[:space:]]*\.\.\.[[:space:]]'
      negate: false
      match: after
      max_lines: 500
      timeout: 5s
I know input_type: log becomes - type: log.
What about document_type: and multiline?
Tek_Chand
January 21, 2019, 4:55am
@kvaga,
You can uninstall Filebeat 5.5 and install Filebeat 6.5; that is the safer route, since there are some configuration changes between 5.5 and 6.5.
Multiline is used when your logs span multiple lines and you want to treat each multi-line entry as a single event.
I think document_type is deprecated in Filebeat 6.5. Please refer to the link below for an example.
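For example, the second 5.5 prospector could become a 6.5 input along these lines (a sketch, keeping your paths and multiline values; the service label under fields is an arbitrary name of your choosing, used in place of the removed document_type):

```yaml
filebeat.inputs:
- type: log
  paths:
    - /opt/app/log/tomcat/console*.log
  # document_type is gone in 6.x; carry the label in a custom field
  fields:
    service: console_log_stage
  # same multiline settings as the 5.5 config, as flat dotted keys
  multiline.pattern: '^*\|[[:space:]]*at|^*\|[[:space:]]Caused by:|^*\|[[:space:]]*\.\.\.[[:space:]]'
  multiline.negate: false
  multiline.match: after
  multiline.max_lines: 500
  multiline.timeout: 5s
```

In Logstash you would then route on [fields][service] instead of the old type field.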
Thanks.
kvaga
January 21, 2019, 10:54am
Now my filebeat.yml inputs:
#=========================== Filebeat inputs =============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: false
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/app/log/tomcat/access*.log
    #- c:\programdata\elasticsearch\logs\*
  fields:
    service: access_log_prestage
And now when I'm trying to start Filebeat I get this error:
2019-01-21T13:53:37.991+0300 ERROR instance/beat.go:800 Exiting: error initializing publisher: missing required field accessing 'output.logstash.hosts'
Exiting: error initializing publisher: missing required field accessing 'output.logstash.hosts'
But in filebeat.yml I have:
#----------------------------- Logstash output --------------------------------
output.logstash:
The Logstash hosts
hosts: ["10.129.10.7:5044"]
Tek_Chand
January 21, 2019, 10:57am
@kvaga,
kvaga:
enabled: false
You need to set it to true; your configuration above shows false:
enabled: true
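With that change, the input section from the earlier post would read (same path and fields as above):

```yaml
filebeat.inputs:
- type: log
  enabled: true   # was false, which disables the input entirely
  paths:
    - /opt/app/log/tomcat/access*.log
  fields:
    service: access_log_prestage
```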
Tek_Chand
January 21, 2019, 11:07am
@kvaga,
This error is due to an indentation issue in your output.logstash section.
Please verify it again. It should look like this:
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["your_logstash_server_ip:5044"]
  bulk_max_size: 1024
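To see why the indentation matters (a general YAML note, not specific to Filebeat): keys left at column 0 are parsed as separate top-level settings, so output.logstash itself ends up empty, which is exactly what the "missing required field" error reports.

```yaml
# Broken: hosts sits at column 0, so it becomes a top-level key and
# output.logstash is left empty -> "missing required field accessing
# 'output.logstash.hosts'"
output.logstash:
hosts: ["your_logstash_server_ip:5044"]
---
# Fixed: a two-space indent nests the settings under output.logstash
output.logstash:
  hosts: ["your_logstash_server_ip:5044"]
  bulk_max_size: 1024
```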
Thanks.
kvaga
January 21, 2019, 11:12am
#----------------------------- Logstash output --------------------------------
output.logstash:
# The Logstash hosts
hosts: ["10.129.10.7:5044"]
bulk_max_size: 1024
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Processors =====================================
start log:
2019-01-21T14:11:22.006+0300 INFO instance/beat.go:278 Setup Beat: filebeat; Version: 6.5.4
2019-01-21T14:11:22.006+0300 DEBUG [beat] instance/beat.go:299 Initializing output plugins
2019-01-21T14:11:22.008+0300 DEBUG [filters] add_cloud_metadata/add_cloud_metadata.go:160 add_cloud_metadata: starting to fetch metadata, timeout=3s
2019-01-21T14:11:22.382+0300 DEBUG [filters] add_cloud_metadata/add_cloud_metadata.go:192 add_cloud_metadata: received disposition for qcloud after 374.667586ms. result=[provider:qcloud, error=failed requesting qcloud metadata: Get http://metadata.tencentyun.com/meta-data/instance-id: dial tcp: lookup metadata.tencentyun.com on 8.8.8.8:53: no such host, metadata={}]
2019-01-21T14:11:25.008+0300 DEBUG [filters] add_cloud_metadata/add_cloud_metadata.go:192 add_cloud_metadata: received disposition for az after 3.000282498s. result=[provider:az, error=failed requesting az metadata: Get http://169.254.169.254/metadata/instance/compute?api-version=2017-04-02: dial tcp 169.254.169.254:80: i/o timeout, metadata={}]
2019-01-21T14:11:25.008+0300 DEBUG [filters] add_cloud_metadata/add_cloud_metadata.go:199 add_cloud_metadata: timed-out waiting for all responses
2019-01-21T14:11:25.009+0300 DEBUG [filters] add_cloud_metadata/add_cloud_metadata.go:163 add_cloud_metadata: fetchMetadata ran for 3.000816623s
2019-01-21T14:11:25.009+0300 INFO add_cloud_metadata/add_cloud_metadata.go:319 add_cloud_metadata: hosting provider type not detected.
2019-01-21T14:11:25.009+0300 DEBUG [processors] processors/processor.go:66 Processors: add_host_metadata=[netinfo.enabled=[false]], add_cloud_metadata=null
2019-01-21T14:11:25.010+0300 ERROR instance/beat.go:800 Exiting: error initializing publisher: missing required field accessing 'output.logstash.hosts'
Exiting: error initializing publisher: missing required field accessing 'output.logstash.hosts'
Tek_Chand
January 21, 2019, 11:22am
Your 2nd and 3rd lines are also starting from the same column as the first one.
But it should look like this:
output.logstash:
  hosts: ["elk.promobitech.com:5044"]
  bulk_max_size: 1024
Indent the 2nd and 3rd lines by two spaces. I have tested it at my end.
That should fix your issue.
Thanks.
Tek_Chand
January 21, 2019, 11:26am
@kvaga,
kvaga:
yes! Filebeat started!
Glad to hear that your problem is fixed. Please mark it as the solution if it resolved your issue.
Thanks.
system
(system)
Closed
February 18, 2019, 11:26am
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.