Filebeat is not able to index into Elasticsearch

I'm using X-Pack 5.1.1 and Filebeat 5.1.1.

May I know which config file I need to share, please?

Please find the Filebeat config file below:

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
    - D:\Team\logs\*.log
  document_type: log

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["bngwidap107.aonnet.aon.net:9200"]

logging.level: debug
logging.selectors: ["*"]
```

Please let us know if anything else is needed.

Hi steffens, please let us know what might be causing the error.

Can someone please help me with this issue? We are awaiting your response.

Hi steffens, please let us know what might be causing this issue for us.

Please be patient as this forum is manned by volunteers. As you have secured your cluster with X-Pack, you will need to configure Beats to take this into account as well.

Hi Christian,

Yes, we have configured Beats with security, but since then we have been facing the same issue.

For now I have set xpack.security.enabled: false in the Elasticsearch config and tried again, but Filebeat is still not indexing.

Please find the Filebeat log below; no error appears in it, as far as I can see.

```
2017-03-03T18:18:55+05:30 DBG Prospector states cleaned up. Before: 18, After: 18
2017-03-03T18:18:56+05:30 DBG Flushing spooler because of timeout. Events flushed: 0
2017-03-03T18:19:01+05:30 DBG Flushing spooler because of timeout. Events flushed: 0
2017-03-03T18:19:05+05:30 DBG Run prospector
2017-03-03T18:19:05+05:30 DBG Start next scan
2017-03-03T18:19:05+05:30 DBG Check file for harvesting: D:\Team\logs\SystemOut_17.03.03_12.49.43.log
2017-03-03T18:19:05+05:30 DBG Update existing file for harvesting: D:\Team\logs\SystemOut_17.03.03_12.49.43.log, offset: 903525
2017-03-03T18:19:05+05:30 DBG File didn't change: D:\Team\logs\SystemOut_17.03.03_12.49.43.log
2017-03-03T18:19:05+05:30 DBG Check file for harvesting: D:\Team\logs\SystemOut_17.03.03_13.05.33.log
2017-03-03T18:19:05+05:30 DBG Update existing file for harvesting: D:\Team\logs\SystemOut_17.03.03_13.05.33.log, offset: 1045061
2017-03-03T18:19:05+05:30 DBG File didn't change: D:\Team\logs\SystemOut_17.03.03_13.05.33.log
2017-03-03T18:19:05+05:30 DBG Check file for harvesting: D:\Team\logs\SystemOut_17.03.03_13.16.09.log
2017-03-03T18:19:05+05:30 DBG Update existing file for harvesting: D:\Team\logs\SystemOut_17.03.03_13.16.09.log, offset: 931302
2017-03-03T18:19:05+05:30 DBG File didn't change: D:\Team\logs\SystemOut_17.03.03_13.16.09.log
2017-03-03T18:19:05+05:30 DBG Check file for harvesting: D:\Team\logs\SystemOut_17.03.03_13.47.27.log
2017-03-03T18:19:05+05:30 DBG Update existing file for harvesting: D:\Team\logs\SystemOut_17.03.03_13.47.27.log, offset: 1039071
2017-03-03T18:19:05+05:30 DBG File didn't change: D:\Team\logs\SystemOut_17.03.03_13.47.27.log
```

Also, please find the Elasticsearch log below for reference:

```
[2017-03-03T17:54:20,639][ERROR][o.e.x.m.AgentService ] [bngwidap107.aonnet.aon.net] exception when exporting documents
org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulks
at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.doFlush(ExportBulk.java:148) ~[x-pack-5.1.1.jar:5.1.1]
at org.elasticsearch.xpack.monitoring.exporter.ExportBulk.close(ExportBulk.java:77) ~[x-pack-5.1.1.jar:5.1.1]
at org.elasticsearch.xpack.monitoring.exporter.Exporters.export(Exporters.java:194) ~[x-pack-5.1.1.jar:5.1.1]
at org.elasticsearch.xpack.monitoring.AgentService$ExportingWorker.run(AgentService.java:208) [x-pack-5.1.1.jar:5.1.1]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
Caused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulk [default_local]
at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.doFlush(LocalBulk.java:114) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.ExportBulk.flush(ExportBulk.java:62) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.doFlush(ExportBulk.java:145) ~[?:?]
... 4 more
Caused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: bulk [default_local] reports failures when exporting documents
at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.throwExportException(LocalBulk.java:121) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.doFlush(LocalBulk.java:111) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.ExportBulk.flush(ExportBulk.java:62) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.doFlush(ExportBulk.java:145) ~[?:?]
... 4 more
[2017-03-03T17:54:20,658][INFO ][o.e.c.m.MetaDataMappingService] [bngwidap107.aonnet.aon.net] [winlogbeat-2017.02.10/FgUngVG-Q0C-HeRUh19QBQ] update_mapping [wineventlog]
[2017-03-03T17:54:21,921][INFO ][o.e.c.m.MetaDataMappingService] [bngwidap107.aonnet.aon.net] [winlogbeat-2017.02.10/FgUngVG-Q0C-HeRUh19QBQ] update_mapping [wineventlog]
[2017-03-03T17:54:21,926][INFO ][o.e.c.m.MetaDataMappingService] [bngwidap107.aonnet.aon.net] [winlogbeat-2017.02.10/FgUngVG-Q0C-HeRUh19QBQ] update_mapping [wineventlog]
[2017-03-03T17:54:24,334][INFO ][o.e.c.m.MetaDataMappingService] [bngwidap107.aonnet.aon.net] [winlogbeat-2017.02.10/FgUngVG-Q0C-HeRUh19QBQ] update_mapping [wineventlog]
[2017-03-03T17:54:44,172][ERROR][o.e.x.m.c.c.ClusterStateCollector] [bngwidap107.aonnet.aon.net] collector [cluster-state-collector] timed out when collecting data
[2017-03-03T17:54:45,479][INFO ][o.e.c.r.a.AllocationService] [bngwidap107.aonnet.aon.net] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
[2017-03-03T17:54:45,636][INFO ][o.e.c.m.MetaDataMappingService] [bngwidap107.aonnet.aon.net] [winlogbeat-2017.02.10/FgUngVG-Q0C-HeRUh19QBQ] update_mapping [wineventlog]
```

Please format logs and config files with the </> button.

Your config is kind of incomplete... by asking you to redact the username/password I didn't mean for you to drop those settings entirely.

Plus, how did you create your user for writing to ES?

No idea if/how logs from ES are related to your problem.

When posting logs, read them first. The Filebeat log says nothing about failed send attempts, but it does say the files are not being updated... did you change any logs? Have you tried deleting the registry file?
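For reference, the registry file's location is controlled by this setting (a sketch showing the Filebeat 5.x default, not part of the original reply; adjust if your installation customizes it):

```yaml
# filebeat.yml (Filebeat 5.x): path of the registry file,
# relative to the Filebeat data path if not absolute.
# Stop Filebeat before deleting this file, or it will be rewritten.
filebeat.registry_file: registry
```

Deleting the registry makes Filebeat re-read all matching files from the beginning on the next start, which is useful for testing but will re-send previously shipped events.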

I adapted the configuration from Christian's link a little to create a beat_user for writing to the filebeat-*, metricbeat-*, and packetbeat-* indices (if you just copied the samples as-is, you would have no credentials for Filebeat):

```
POST _xpack/security/role/beat_writer
{
  "cluster": ["manage_index_templates", "monitor"],
  "indices": [
    {
      "names": [ "filebeat-*", "metricbeat-*", "packetbeat-*" ],
      "privileges": ["read", "write", "create_index"]
    }
  ]
}

POST /_xpack/security/user/beat_user
{
  "password" : "changeme",
  "roles" : [ "beat_writer" ],
  "full_name" : "Internal Beat User"
}
```

And the Beats output configuration:

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "beat_user"
  password: "changeme"
```
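To rule out a credentials problem independently of Beats, you can check the user and test authentication directly (a quick sketch; both APIs are part of X-Pack security in 5.x):

```
# Confirm the user exists and has the beat_writer role
GET /_xpack/security/user/beat_user

# Run this one as beat_user (e.g. curl -u beat_user:changeme ...)
# to confirm the credentials are accepted
GET /_xpack/security/_authenticate
```

If the second call returns 401, the problem is the credentials themselves rather than the Filebeat configuration.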

Hi Steffens,

Thanks for the above input.

Can you please let me know how to format logs and config files with the </> button?

Also, I have configured the elastic user in the Filebeat output.elasticsearch section. Please let me know: is that the proper configuration for the user writing to ES?

Following the samples you posted above, I have created a beat_writer role and a beat_user, but I am still not able to index Filebeat data into ES.

Please do help.

Hi steffens,

I am continuously getting the error below in the Filebeat logs.

```
2017-03-09T18:43:24+05:30 DBG send completed
2017-03-09T18:43:24+05:30 DBG output worker: publish 50 events
2017-03-09T18:43:24+05:30 DBG PublishEvents: 50 events have been published to elasticsearch in 1.9989ms.
2017-03-09T18:43:24+05:30 WARN Can not index event (status=404): {"type":"index_not_found_exception","reason":"no such index","resource.type":"index_expression","resource.id":"filebeat-2017.03.09","index_uuid":"na","index":"filebeat-2017.03.09"}
2017-03-09T18:43:24+05:30 WARN Can not index event (status=404): {"type":"index_not_found_exception","reason":"no such index","resource.type":"index_expression","resource.id":"filebeat-2017.03.09","index_uuid":"na","index":"filebeat-2017.03.09"}
2017-03-09T18:43:24+05:30 WARN Can not index event (status=404): {"type":"index_not_found_exception","reason":"no such index","resource.type":"index_expression","resource.id":"filebeat-2017.03.09","index_uuid":"na","index":"filebeat-2017.03.09"}
```

Please let us know how we can get Filebeat to index into Elasticsearch.
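For readers hitting the same symptom: a 404 index_not_found_exception on a bulk request means the target index does not exist and Elasticsearch refused to auto-create it. One possible cause (an assumption here, not confirmed in this thread) is a restricted action.auto_create_index setting in elasticsearch.yml that does not cover the Beats index patterns:

```yaml
# elasticsearch.yml — if index auto-creation has been restricted,
# the allow-list must include the Beats index patterns
# (example values; adjust to your own setup)
action.auto_create_index: .security,.monitoring*,filebeat-*,metricbeat-*,packetbeat-*
```

Alternatively, the missing create_index privilege on the writing user can also prevent the index from being created.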

Could you quickly try to ingest with the admin user, to see whether it is a user access issue or not?

Ruflin, may I know which admin user you are talking about?

There should be at least one admin user in X-Pack that has all access rights. That is the one I'm referring to. You should definitely not use this one in production, but it would be good to quickly check with it for testing.

The default admin user should be elastic.

Have you tried to index any data via curl?

Did you check /_cat/indices if the index exists?

Have you checked Elasticsearch logs for authentication failures or failures when creating the filebeat index?
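The checks above could be sketched like this (example requests only; the index name and test document are illustrative, and in 5.x the default admin credentials are elastic/changeme unless changed):

```
# Does a filebeat index exist at all?
GET /_cat/indices/filebeat-*?v

# Try indexing a test document directly; if this also fails,
# the problem is on the Elasticsearch side, not in Filebeat
POST /filebeat-2017.03.09/log
{
  "message": "manual test event"
}
```

If the manual POST succeeds as the admin user but fails as beat_user, it is a privileges issue; if it fails for both, look at index auto-creation or cluster health.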

Yes, I can see the indices exist, and I don't see any failures in the Elasticsearch logs.

I just wanted to know how these indices get created, and where the data and shards are located.

Not sure I fully understand your question above. Do you mean how Beats creates the indices?

Yes Ruflin, I need to know how Beats creates the indices.

Also, we need to know: if the logs path is changed in the Filebeat configuration, how long will it take for the data to be indexed and become viewable in Kibana?

Indices are created based on the index config in the output: https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html#_index
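As a sketch of what that looks like in filebeat.yml for the 5.x version used in this thread (the host is a placeholder; in 5.x a daily date suffix is appended to the configured name automatically):

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  # 5.x default; the output appends "-YYYY.MM.DD",
  # producing daily indices such as filebeat-2017.03.09
  index: "filebeat"
```

Each day's events therefore land in a new index, which is why the logs above reference filebeat-2017.03.09.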

I can't quite follow your second question(s). How long you keep the data is up to you.

I strongly recommend you to follow the getting started guide as it will show you all the steps and details: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html

It's resolved. Thank you ruflin :slight_smile:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.