Filebeat 6.3.1 not sending logs to stack 6.3.1

I recently updated our Elastic stack to version 6.3.1. Last night, I updated filebeat on most hosts. Since the update, log data is no longer being received. However, logs are still being received from the small number of hosts that remain on filebeat 6.2.4. The config file is identical for both 6.3.1 and 6.2.4. Help fixing this would be appreciated.

An update - as soon as I rolled filebeat back to 6.2.4 on one server, I began to receive logs again. But, of course, that's not a solution.

Could you share your filebeat logs and config? And do you use Logstash?


I do use logstash. Filebeat config is here: . The log file is here: .

Can you also share your LS config? How do you load the Filebeat template?

In the log file the following two lines show up pretty frequently:

2018-07-16T14:00:54.204-0400 ERROR logstash/async.go:235 Failed to publish events caused by: write tcp> write: connection reset by peer
2018-07-16T14:00:55.204-0400 ERROR pipeline/output.go:92 Failed to publish events: write tcp> write: connection reset by peer

Do you have something like a proxy between FB and LS?


Thanks for working on this with me.

As things would have it, when I looked in Kibana this morning, the logs sent by filebeat 6.3.1 servers are now being received. Huh? OK, great. But whether it's due to something I did (doubtful), or just not being patient (probable), I don't know.

However, that you found something wonky in the log file is something I'd like to pursue, if you don't mind. I put the LS configs (I've split them out into input, output, and filters) here: .

Admittedly, despite my efforts, I've never been good at configuring the stack. As to filebeat, I believe I generated filebeat.template.json on one of the filebeat servers, then applied it by running this (quotes added for clarity): "curl -XPUT -H 'Content-Type: application/json' -d@filebeat.template.json". That's probably completely wrong, but seems to work, in part anyway. However, I've never been able to get geoip to work, despite considerable hacking, for instance.
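For reference, a complete version of that curl command would also need a target URL and a template name. A sketch of what it typically looks like, assuming Elasticsearch listens on localhost:9200 (a placeholder - substitute your own host and Filebeat version):

```shell
# Load the exported Filebeat index template into Elasticsearch.
# 'localhost:9200' and the template name 'filebeat-6.3.1' are
# placeholders for your environment.
curl -XPUT 'http://localhost:9200/_template/filebeat-6.3.1' \
     -H 'Content-Type: application/json' \
     -d@filebeat.template.json
```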

Anyway, I would appreciate it if you could take a look and advise.

Oh, and there is no proxy between FB and LS.


The error above indicates that the connection to LS is sometimes flaky. It could be the network, but it could also be LS being unresponsive. How many LS instances do you have? Could it be that they are overloaded?

I have just one LS instance. I monitor it, and it doesn't seem overloaded. But, I'll take another look.

Do my configs seem right, and did I load the Filebeat template properly? It puzzles me as to why Filebeat-based visualizations don't work (e.g. geoip). I note that there are just a small handful of "available fields" for Filebeat in Kibana. I've attached a picture of those fields (I think).

I would also expect a few more fields. How did you load the template and index pattern?

Well, therein lies the problem. I'm not exactly sure how to load them. As I mentioned above, I created "filebeat.template.json" on a filebeat box, then applied it by running this "curl -XPUT -H 'Content-Type: application/json' -d@filebeat.template.json". I did that once. It's embarrassing to ask, but does that take care of loading the template? As to loading the index pattern, I'm not sure how to do that either. I know there's plenty of documentation out there, but I've just never gotten the hang of how to do this. I know that asking for a tutorial of sorts from you is probably a pita, but I would certainly appreciate it. That way, I'll know how to make filebeat work as intended.

With continued thanks.

Any chance you could try the setup without Logstash? In that case you could just run filebeat and it would automatically set up the template and indices for you.

I wanted to check your LS config again to see what you do with LS, unfortunately the page is removed.


I've reposted the LS config(s) here:

Thanks for taking a look.

Thanks for sharing it again. I see your LS config is rather complex, so LS is here to stay :slight_smile:

There are 2 parts: index template for elasticsearch and index pattern for Kibana. Both can be loaded by the beat directly if pointed to ES / KB directly. The screenshot you posted above is from the index pattern in Kibana.
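For completeness, a rough sketch of the filebeat.yml settings involved when the beat is pointed at Elasticsearch and Kibana directly (the hosts below are placeholders, not your actual addresses):

```yaml
# Placeholders - adjust hosts to your environment. With these set,
# the beat can load the index template into Elasticsearch and the
# index pattern/dashboards into Kibana.
output.elasticsearch:
  hosts: ["localhost:9200"]

setup.kibana:
  host: "localhost:5601"
```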

One thing I just realised now is that the command you posted above loads the template for 6.2.4. Each version of Beats has its own template and indices. So if you want the template for 6.3.1, you must load it as well (before the indices exist).

Thank you.

Here's what I did before replying:

  1. generated new template - filebeat export template > /home/dyioulos/filebeat.template.json
  2. installed new template - curl -XPUT -H 'Content-Type: application/json' -d@filebeat.template.json
  3. cleared out old indices - curl -XDELETE '*'
  4. restarted the stack
  5. refreshed filebeat-* field list through Kibana Management page.
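For anyone following along, here is a sketch of steps 1-3 with explicit targets. The host localhost:9200 is a placeholder, and I've narrowed the delete to filebeat-* rather than '*' since deleting every index is destructive:

```shell
# 1. Export the template from a host running Filebeat 6.3.1
filebeat export template > filebeat.template.json

# 2. Load it into Elasticsearch under a versioned name
#    ('localhost:9200' is a placeholder for your ES host)
curl -XPUT 'http://localhost:9200/_template/filebeat-6.3.1' \
     -H 'Content-Type: application/json' \
     -d@filebeat.template.json

# 3. Delete the old filebeat indices - destructive, so the
#    pattern is scoped to filebeat-* only
curl -XDELETE 'http://localhost:9200/filebeat-*'
```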

I still must be missing something, because I see the same filebeat fields in Kibana:

Can anyone spot anything I'm doing wrong?

If you connect Filebeat directly to Elasticsearch you can let filebeat load the index pattern with filebeat setup.

What do the documents look like that you have in the 6.3.1 indices? This thread started with not being able to send data, do you see data now?

I do see the data now, and thank you for your help there. Apologies if I should have started a new thread regarding the index pattern. I hope it's OK if I just finish this out.

What is the full "filebeat setup" command that I should run? And, I run that from a host on which filebeat is installed, correct?

Again, many thanks.

I recently had some issues after upgrading some Filebeat agents from 6.3.0 to 6.3.1. After reading through this thread and seeing your pointer to re-running the setup action I did just that and it worked.

I was assuming changes to the micro version (i.e. jumping from .0 to .1) won't break anything. Is that correct or should I re-import both the index template and index pattern every time the version number increases?

Here's what filebeat setup does (see for yourself with filebeat help setup):

This command does initial setup of the environment:

 * Index mapping template in Elasticsearch to ensure fields are mapped.
 * Kibana dashboards (where available).
 * ML jobs (where available).
 * Ingest pipelines (where available).

There are a few flags you can set (again, check the help command) to e.g. set up the index template only.

You can run this command either from a host already configured or from your local machine, e.g. using a Docker image. The agent doesn't need the full configuration because you can override the configuration at runtime. Have a look at the docs on how to load the template manually.
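A sketch of running setup from a host that normally ships to Logstash, overriding the output at runtime so the beat can reach ES/Kibana just for the setup step (hosts are placeholders for your environment):

```shell
# Temporarily disable the Logstash output and point Filebeat at
# Elasticsearch for the template load; hosts are placeholders.
filebeat setup --template \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["localhost:9200"]'

# Load the Kibana index pattern and dashboards as well:
filebeat setup --dashboards \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["localhost:9200"]' \
  -E 'setup.kibana.host=localhost:5601'
```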

When I run "curl ''", this is what I get:

name                           index_patterns             order        version
.ml-anomalies-                 [.ml-anomalies-*]          0            6030199
.ml-meta                       [.ml-meta]                 0            6030199
.ml-notifications              [.ml-notifications]        0            6030199
.ml-state                      [.ml-state]                0            6030199
.monitoring-alerts             [.monitoring-alerts-6]     0            6020099
.monitoring-beats              [.monitoring-beats-6-*]    0            6020099
.monitoring-es                 [.monitoring-es-6-*]       0            6020099
.monitoring-kibana             [.monitoring-kibana-6-*]   0            6020099
.monitoring-logstash           [.monitoring-logstash-6-*] 0            6020099
.triggered_watches             [.triggered_watches*]      2147483647
.watch-history-7               [.watcher-history-7*]      2147483647
.watches                       [.watches*]                2147483647
filebeat-6.2.2                 [filebeat-6.2.0-*]         1
filebeat-6.2.4                 [filebeat-6.2.0-*]         1
filebeat-6.3.1                 [filebeat-6.3.1-*]         1
kibana_index_template:.kibana  [.kibana]                  0
logstash                       [logstash-*]               0            60001
logstash-index-template        [.logstash]                0
security-index-template        [.security-*]              1000
security_audit_log             [.security_audit_log*]     1000

Too many filebeat templates?