Filebeat cannot publish events to Elasticsearch

Hi All,

I am a newbie to the ELK stack and working on a POC. I am trying to ingest some log data using Filebeat from a RHEL machine into Elasticsearch on a remote Windows machine. I keep seeing an error message saying that it cannot publish events to Elasticsearch. Any help is greatly appreciated.

Below is the error message:

"single.go:140: ERR Connecting error publishing events (retrying): Get http://10.10.6.180:9200: net/http: request canceled (Client.Timeout exceeded while awaiting headers)"

The yml config is as below:

input_type: log
paths:
  - /opt/IBM/tivoli/netcool/omnibus/log/*.log

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.10.6.180:9200"]

Thank you,
Sam

Have you tried to increase the HTTP timeout in output.elasticsearch?

Hi Steffen,

Thank you for your response. Can you shed some light on where this configuration has to be done? I do not see any config file with the entry you mentioned.

Thanks,
Sam

See https://www.elastic.co/guide/en/beats/filebeat/5.2/elasticsearch-output.html#_timeout

Thanks for the response. Can you please review the below and advise?

I see the same error again. Here is the yml config file where I made the modification, and below are the log entries for your reference.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.10.6.180:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
  timeout: 180

2017/01/28 16:59:44.182193 prospector_log.go:245: DBG Update existing file for harvesting: /opt/IBM/tivoli/netcool/omnibus/log/SPI.log, offset: 565308
2017/01/28 16:59:44.182234 prospector_log.go:297: DBG Harvester for file is still running: /opt/IBM/tivoli/netcool/omnibus/log/SPI.log
2017/01/28 16:59:44.182256 prospector_log.go:83: DBG Prospector states cleaned up. Before: 1, After: 1
2017/01/28 16:59:45.596578 client.go:632: DBG Ping request failed with: Get http://10.10.6.180:9200: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2017/01/28 16:59:45.596633 single.go:140: ERR Connecting error publishing events (retrying): Get http://10.10.6.180:9200: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2017/01/28 16:59:45.596645 single.go:156: DBG send fail
2017/01/28 16:59:47.596967 client.go:627: DBG ES Ping(url=http://10.10.6.180:9200, timeout=3m0s)
2017/01/28 16:59:54.182469 prospector.go:155: DBG Run prospector
2017/01/28 16:59:54.182515 prospector_log.go:62: DBG Start next scan
2017/01/28 16:59:54.182620 prospector_log.go:212: DBG Check file for harvesting: /opt/IBM/tivoli/netcool/omnibus/log/SPI.log
2017/01/28 16:59:54.182639 prospector_log.go:245: DBG Update existing file for harvesting: /opt/IBM/tivoli/netcool/omnibus/log/SPI.log, offset: 565308

I see the same error message after it tried multiple times.

Have you checked that Elasticsearch is operating correctly?

Yes, another Filebeat is sending data to Elasticsearch. The only difference between the two Filebeats is that the one which works fine is running on the same Windows host as Elasticsearch.

The issue reported here is on a Linux machine. I suspected some connection issue between ES and Filebeat, but I see that the connection is getting established from the Linux machine to the Windows machine.

Filebeat host to ES host connection status.


Well, it's not a connection issue. It's a timeout issue, with Filebeat waiting for a response from Elasticsearch. That is, the request has already been sent, which is only possible if Filebeat can connect. Are you using multiline, or do you have some particularly big events being sent to Elasticsearch? Try setting output.elasticsearch.bulk_max_size: 2; I wonder if we still get the timeout in this case.

You can also try to capture the HTTP request via tcpdump and check whether a response is sent (do so on both machines to verify the response is not being dropped at the network level).

That’s a good idea, let me check that and get back to you.

I performed the below change on the .yml file, and I can see packets being sent from Filebeat to ES, as well as an active connection on the ES host. The screenshots are below.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.10.6.180:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
  timeout: 180
  bulk_max_size: 2

[screenshots omitted]

Regards,
Shyam Sunka

Also, I have noticed that the Elasticsearch port is only listening on the loopback interface. Can we set it to accept connections from remote machines?

Thanks.

Have you set network.host in elasticsearch.yml? By the way, if ES is accessible from outside (or in general), please don't leave an unprotected ES instance running.
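For reference, a minimal elasticsearch.yml sketch of that setting (0.0.0.0 binds every interface; binding the host's specific address instead is the safer choice when it is sufficient):

```yaml
# elasticsearch.yml -- listen beyond loopback so remote Beats can connect.
# 0.0.0.0 binds all interfaces; a specific address such as 10.10.6.180
# (the ES host in this thread) limits exposure.
network.host: 0.0.0.0
```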

Changing network.host to 0.0.0.0 in elasticsearch.yml and restarting ES did the trick. The Filebeat log now shows entries like the below, indicating successful publishing of events to ES.

2017/02/02 17:09:08.626250 single.go:150: DBG send completed
2017/02/02 17:09:08.626267 output.go:109: DBG output worker: publish 50 events
2017/02/02 17:09:08.642381 client.go:250: DBG PublishEvents: 50 events have been published to elasticsearch in 16.075924ms.

Now I have another issue: I am not able to search the same data in the Kibana UI. Am I missing anything else here?

Have you checked that the indexes are available, via the URL http://es_host:9200/_cat/indices?pretty ? Have you checked that Kibana is using (and has) the right index pattern? Any errors on the Kibana side?

I am not sure how this really works. Do you have some kind of checkpoints for a successful log integration cycle?

Hi Steffen,

I see a lot of entries similar to the below at the URL you shared, and I think the filebeat entries I am interested in are present. Can you please help me create the indexes for my custom log file data? Thank you for your help.

yellow open packetbeat-2017.01.20 5 1 777 0 599.1kb 599.1kb
yellow open packetbeat-2017.01.21 5 1 1550 0 1015.1kb 1015.1kb
yellow open .kibana 1 1 7 0 37.8kb 37.8kb
yellow open filebeat-2017.02.03 5 1 514899 0 109.6mb 109.6mb
yellow open filebeat-2017.02.02 5 1 220557 0 42.4mb 42.4mb
yellow open winlogbeat-2016.11.19 5 1 574 0 511kb 511kb
yellow open winlogbeat-2016.11.18 5 1 64 0 95.7kb 95.7kb
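(Side note: the index name is the third whitespace-separated column of that _cat/indices output. A quick Python sketch, using sample lines copied from the listing above, shows how to pick out the Filebeat indices; in practice the text would come from http://10.10.6.180:9200/_cat/indices.)

```python
# Filter filebeat-* index names out of _cat/indices output.
# Sample lines are taken from the listing above.
cat_indices = """\
yellow open packetbeat-2017.01.20 5 1 777 0 599.1kb 599.1kb
yellow open .kibana 1 1 7 0 37.8kb 37.8kb
yellow open filebeat-2017.02.03 5 1 514899 0 109.6mb 109.6mb
yellow open filebeat-2017.02.02 5 1 220557 0 42.4mb 42.4mb
"""

filebeat_indices = [
    line.split()[2]                 # columns: health, status, index, ...
    for line in cat_indices.splitlines()
    if line.split()[2].startswith("filebeat-")
]
print(filebeat_indices)  # ['filebeat-2017.02.03', 'filebeat-2017.02.02']
```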

In Kibana you need to configure (and select an active) index pattern, e.g. filebeat-* for the indices listed above: https://www.elastic.co/guide/en/kibana/current/index-patterns.html

The first time you start Kibana, it will ask for an index pattern. In case you already have one, you can configure another one under 'Management'.

This topic was automatically closed after 21 days. New replies are no longer allowed.