I am new to the ELK stack and working on a POC. I am trying to ingest some log data using Filebeat from a RHEL machine into Elasticsearch on a remote Windows machine. I keep seeing an error message saying that it cannot publish events to Elasticsearch. Any help is greatly appreciated.
Below is the error message:
"single.go:140: ERR Connecting error publishing events (retrying): Get http://10.10.6.180:9200: net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Thank you for your response. Can you shed some light on where this configuration has to be done? I do not see any config file with the entry you mentioned.
Yes, another Filebeat is sending data to Elasticsearch. The only difference between the two Filebeat instances is that the one working fine runs on the same host as Elasticsearch, which is a Windows box.
The issue reported above is on a Linux machine. I suspect some connection issue between ES and Filebeat, but I can see the connection being established from the Linux machine to the Windows machine.
Well, it's not a connection issue. It's a timeout issue: Filebeat is waiting for a response from Elasticsearch. That is, the request has already been sent, which is only possible if Filebeat can connect. Are you using multiline, or do you have some particularly big events being sent to Elasticsearch? Try setting output.elasticsearch.bulk_max_size: 2; I wonder if you still get the timeout in that case.
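For reference, the setting goes in the output section of filebeat.yml. A minimal sketch, with the host IP taken from the error message above (your paths and values may differ):

```yaml
# filebeat.yml -- Elasticsearch output section (sketch)
output.elasticsearch:
  hosts: ["10.10.6.180:9200"]
  # For debugging the timeout: send at most 2 events per bulk request.
  bulk_max_size: 2
```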
You can also try to capture the HTTP request via tcpdump and check whether a response is sent (do so from both machines to verify the response is not being dropped at the network level).
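A sketch of that capture; the interface choice and IP are assumptions, so adjust them for your hosts (requires root on the RHEL side):

```shell
# On the RHEL (Filebeat) host: capture 50 packets to/from the ES host
# on port 9200 and print them in ASCII so HTTP headers are readable.
tcpdump -i any -nn -c 50 -A 'host 10.10.6.180 and tcp port 9200'
# On the Windows (ES) side, Wireshark with the display filter
#   tcp.port == 9200
# shows whether the HTTP response actually leaves the server.
```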
I made the change below in the .yml file, and I can see packets being sent from Filebeat to ES, as well as an active connection on the ES host. Below are the screenshots.
Have you set network.host in elasticsearch.yml? Btw., if ES is accessible from outside (or in general), please don't run an unprotected ES instance.
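For reference, the binding is configured in elasticsearch.yml; a sketch (values are examples -- 0.0.0.0 binds all interfaces, so secure the instance before exposing it):

```yaml
# elasticsearch.yml (sketch)
# Bind to all interfaces so remote Beats can reach port 9200.
# Warning: 0.0.0.0 exposes ES to the whole network.
network.host: 0.0.0.0
http.port: 9200
```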
Changing network.host to 0.0.0.0 in elasticsearch.yml and restarting ES did the trick. The Filebeat log now shows the following, which indicates events are being published to ES successfully:
2017/02/02 17:09:08.626250 single.go:150: DBG send completed
2017/02/02 17:09:08.626267 output.go:109: DBG output worker: publish 50 events
2017/02/02 17:09:08.642381 client.go:250: DBG PublishEvents: 50 events have been published to elasticsearch in 16.075924ms.
Now I have another issue: I am not able to search the same data in the Kibana search UI. Am I missing anything else here?
Have you checked that the indexes are available at http://es_host:9200/_cat/indices?pretty? Have you checked that Kibana is using (and has) the right index pattern? Any errors on the Kibana side?
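A quick way to run those checks from the RHEL box; the host IP is an example from earlier in this thread, so substitute your own:

```shell
# List all indices with a header row (?v adds column names):
curl -s 'http://10.10.6.180:9200/_cat/indices?v'
# Narrow it down to the indices the default Kibana pattern
# (filebeat-*) expects to find:
curl -s 'http://10.10.6.180:9200/_cat/indices/filebeat-*?v'
```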
I see a lot of entries similar to the ones below at the URL you shared, and I think the Filebeat entries I am interested in are present. Can you please help me create the index pattern for my custom log file data? Thank you for your help.
yellow open packetbeat-2017.01.20 5 1 777 0 599.1kb 599.1kb
yellow open packetbeat-2017.01.21 5 1 1550 0 1015.1kb 1015.1kb
yellow open .kibana 1 1 7 0 37.8kb 37.8kb
yellow open filebeat-2017.02.03 5 1 514899 0 109.6mb 109.6mb
yellow open filebeat-2017.02.02 5 1 220557 0 42.4mb 42.4mb
yellow open winlogbeat-2016.11.19 5 1 574 0 511kb 511kb
yellow open winlogbeat-2016.11.18 5 1 64 0 95.7kb 95.7kb