Please find the error below; the ES ping is not happening. Please suggest a fix.
2017-03-01T13:14:44+05:30 DBG ES Ping(url=http://10.209.68.81:9201, timeout=1m30s)
2017-03-01T13:14:45+05:30 DBG Ping request failed with: Get http://10.209.68.81:9201: dial tcp 10.209.68.81:9201: connectex: No connection could be made because the target machine actively refused it.
2017-03-01T13:14:45+05:30 ERR Connecting error publishing events (retrying): Get http://10.209.68.81:9201: dial tcp 10.209.68.81:9201: connectex: No connection could be made because the target machine actively refused it.
2017-03-01T13:14:45+05:30 DBG send fail
2017-03-01T13:14:46+05:30 DBG Flushing spooler because of timeout. Events flushed: 0
Are you using X-Pack or some proxy asking for authentication? Can you share your config (redact the password, please)? Which Filebeat version are you using?
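For what it's worth, "connection refused" on the ping usually means nothing is listening on that host/port at all (or a firewall is rejecting it), rather than an authentication failure. A quick check you can run from the Filebeat machine, using the URL from the log above:

curl -v http://10.209.68.81:9201
# "Connection refused" -> Elasticsearch is not reachable on this port at all
# HTTP 401             -> it is reachable, but secured (X-Pack or a proxy asking for auth)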
Please be patient as this forum is manned by volunteers. As you have secured your cluster with X-Pack, you will need to configure Beats to take this into account as well.
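For example, with X-Pack security enabled the Elasticsearch output in filebeat.yml needs credentials. A minimal sketch; beat_user and changeme are placeholders, not values from this thread:

output.elasticsearch:
  hosts: ["http://10.209.68.81:9201"]
  username: "beat_user"   # placeholder: the user you created for Beats
  password: "changeme"    # placeholder: redact this when posting your config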
Yes, we have configured Beats with security, but since then we are still facing the same issue.
For now I have set xpack.security.enabled: false in the Elasticsearch config and tried to index from Filebeat into Elasticsearch, but Filebeat is still not indexing.
Please find the Filebeat log below, where no error appears as far as I can see.
2017-03-03T18:18:55+05:30 DBG Prospector states cleaned up. Before: 18, After: 18
2017-03-03T18:18:56+05:30 DBG Flushing spooler because of timeout. Events flushed: 0
2017-03-03T18:19:01+05:30 DBG Flushing spooler because of timeout. Events flushed: 0
2017-03-03T18:19:05+05:30 DBG Run prospector
2017-03-03T18:19:05+05:30 DBG Start next scan
2017-03-03T18:19:05+05:30 DBG Check file for harvesting: D:\Team\logs\SystemOut_17.03.03_12.49.43.log
2017-03-03T18:19:05+05:30 DBG Update existing file for harvesting: D:\Team\logs\SystemOut_17.03.03_12.49.43.log, offset: 903525
2017-03-03T18:19:05+05:30 DBG File didn't change: D:\Team\logs\SystemOut_17.03.03_12.49.43.log
2017-03-03T18:19:05+05:30 DBG Check file for harvesting: D:\Team\logs\SystemOut_17.03.03_13.05.33.log
2017-03-03T18:19:05+05:30 DBG Update existing file for harvesting: D:\Team\logs\SystemOut_17.03.03_13.05.33.log, offset: 1045061
2017-03-03T18:19:05+05:30 DBG File didn't change: D:\Team\logs\SystemOut_17.03.03_13.05.33.log
2017-03-03T18:19:05+05:30 DBG Check file for harvesting: D:\Team\logs\SystemOut_17.03.03_13.16.09.log
2017-03-03T18:19:05+05:30 DBG Update existing file for harvesting: D:\Team\logs\SystemOut_17.03.03_13.16.09.log, offset: 931302
2017-03-03T18:19:05+05:30 DBG File didn't change: D:\Team\logs\SystemOut_17.03.03_13.16.09.log
2017-03-03T18:19:05+05:30 DBG Check file for harvesting: D:\Team\logs\SystemOut_17.03.03_13.47.27.log
2017-03-03T18:19:05+05:30 DBG Update existing file for harvesting: D:\Team\logs\SystemOut_17.03.03_13.47.27.log, offset: 1039071
2017-03-03T18:19:05+05:30 DBG File didn't change: D:\Team\logs\SystemOut_17.03.03_13.47.27.log
Also, please find the Elasticsearch log below for reference.
[2017-03-03T17:54:20,639][ERROR][o.e.x.m.AgentService ] [bngwidap107.aonnet.aon.net] exception when exporting documents
org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulks
at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.doFlush(ExportBulk.java:148) ~[x-pack-5.1.1.jar:5.1.1]
at org.elasticsearch.xpack.monitoring.exporter.ExportBulk.close(ExportBulk.java:77) ~[x-pack-5.1.1.jar:5.1.1]
at org.elasticsearch.xpack.monitoring.exporter.Exporters.export(Exporters.java:194) ~[x-pack-5.1.1.jar:5.1.1]
at org.elasticsearch.xpack.monitoring.AgentService$ExportingWorker.run(AgentService.java:208) [x-pack-5.1.1.jar:5.1.1]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
Caused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulk [default_local]
at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.doFlush(LocalBulk.java:114) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.ExportBulk.flush(ExportBulk.java:62) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.doFlush(ExportBulk.java:145) ~[?:?]
... 4 more
Caused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: bulk [default_local] reports failures when exporting documents
at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:121) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.doFlush(LocalBulk.java:111) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.ExportBulk.flush(ExportBulk.java:62) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.doFlush(ExportBulk.java:145) ~[?:?]
... 4 more
[2017-03-03T17:54:20,658][INFO ][o.e.c.m.MetaDataMappingService] [bngwidap107.aonnet.aon.net] [winlogbeat-2017.02.10/FgUngVG-Q0C-HeRUh19QBQ] update_mapping [wineventlog]
[2017-03-03T17:54:21,921][INFO ][o.e.c.m.MetaDataMappingService] [bngwidap107.aonnet.aon.net] [winlogbeat-2017.02.10/FgUngVG-Q0C-HeRUh19QBQ] update_mapping [wineventlog]
[2017-03-03T17:54:21,926][INFO ][o.e.c.m.MetaDataMappingService] [bngwidap107.aonnet.aon.net] [winlogbeat-2017.02.10/FgUngVG-Q0C-HeRUh19QBQ] update_mapping [wineventlog]
[2017-03-03T17:54:24,334][INFO ][o.e.c.m.MetaDataMappingService] [bngwidap107.aonnet.aon.net] [winlogbeat-2017.02.10/FgUngVG-Q0C-HeRUh19QBQ] update_mapping [wineventlog]
[2017-03-03T17:54:44,172][ERROR][o.e.x.m.c.c.ClusterStateCollector] [bngwidap107.aonnet.aon.net] collector [cluster-state-collector] timed out when collecting data
[2017-03-03T17:54:45,479][INFO ][o.e.c.r.a.AllocationService] [bngwidap107.aonnet.aon.net] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
[2017-03-03T17:54:45,636][INFO ][o.e.c.m.MetaDataMappingService] [bngwidap107.aonnet.aon.net] [winlogbeat-2017.02.10/FgUngVG-Q0C-HeRUh19QBQ] update_mapping [wineventlog]
Please format logs and config files with the </> button.
Your config is kind of incomplete... by redacting the username/password I didn't mean for you to drop them from the config entirely.
Plus, how did you create your user for writing to ES?
No idea if/how logs from ES are related to your problem.
When posting logs, please read them first. The Filebeat log says nothing about failed send attempts, but the files are not being updated... did you change any logs? Have you tried deleting the registry file?
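For reference, the registry file is where Filebeat stores the read offset per file, which is why files that have not grown show up as "File didn't change". A sketch of resetting it on Windows, assuming the default 5.x service install (check the registry_file setting in your filebeat.yml if you installed differently); note that deleting it makes Filebeat re-read all files from the beginning:

Stop-Service filebeat
Remove-Item "C:\ProgramData\filebeat\registry"   # assumed default data path for the Windows service install
Start-Service filebeat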
I adapted the configuration from Christian's link a little to create a beat_user for writing to the filebeat-*, metricbeat-*, and packetbeat-* indices (if you just copied the samples as-is, you would have no credentials for Filebeat):
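The snippet itself did not survive in this thread, so here is a rough reconstruction of the idea against the X-Pack 5.x security API (role name, user name, and password are illustrative, not the original values):

curl -u elastic -XPOST 'http://10.209.68.81:9201/_xpack/security/role/beats_writer' -H 'Content-Type: application/json' -d '
{
  "indices": [
    {
      "names": [ "filebeat-*", "metricbeat-*", "packetbeat-*" ],
      "privileges": [ "create_index", "manage", "write" ]
    }
  ]
}'

curl -u elastic -XPOST 'http://10.209.68.81:9201/_xpack/security/user/beat_user' -H 'Content-Type: application/json' -d '
{
  "password": "changeme",
  "roles": [ "beats_writer" ]
}'

The beat_user credentials then go into the output.elasticsearch section of each Beat's config, as in the filebeat.yml sketch earlier in this thread.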
I am continuously getting the below error in the Filebeat logs.
2017-03-09T18:43:24+05:30 DBG send completed
2017-03-09T18:43:24+05:30 DBG output worker: publish 50 events
2017-03-09T18:43:24+05:30 DBG PublishEvents: 50 events have been published to elasticsearch in 1.9989ms.
2017-03-09T18:43:24+05:30 WARN Can not index event (status=404): {"type":"index_not_found_exception","reason":"no such index","resource.type":"index_expression","resource.id":"filebeat-2017.03.09","index_uuid":"na","index":"filebeat-2017.03.09"}
2017-03-09T18:43:24+05:30 WARN Can not index event (status=404): {"type":"index_not_found_exception","reason":"no such index","resource.type":"index_expression","resource.id":"filebeat-2017.03.09","index_uuid":"na","index":"filebeat-2017.03.09"}
2017-03-09T18:43:24+05:30 WARN Can not index event (status=404): {"type":"index_not_found_exception","reason":"no such index","resource.type":"index_expression","resource.id":"filebeat-2017.03.09","index_uuid":"na","index":"filebeat-2017.03.09"}
Please let us know how we can index Filebeat data into Elasticsearch.
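A note on those 404s: Elasticsearch normally auto-creates an index like filebeat-2017.03.09 on the first write, so index_not_found_exception during indexing usually points at restricted auto index creation (the action.auto_create_index setting) or at a user that is not allowed to create indices. A hedged way to narrow it down: create today's index by hand and see whether events start flowing:

curl -XPUT 'http://10.209.68.81:9201/filebeat-2017.03.09'

# If action.auto_create_index is restricted in elasticsearch.yml, include the Beats
# patterns in it (illustrative value; keep whatever entries you already have):
#   action.auto_create_index: "+filebeat-*,+winlogbeat-*,+.monitoring*"

If security is re-enabled later, the Beats user's role also needs the create_index privilege (as in the role sketch above), otherwise the daily index cannot be auto-created on its behalf.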