Instantaneously failing to publish events

Hi there,

While adding a fifth path to my Filebeat service, I ran into problems: the data in question never arrived. Going through the log file I encountered the following lines:

Timestamp Loglevel File Message
2018-11-16T11:00:09.315+0100 INFO log/harvester.go:216 Harvester started for file: C:\PATH\log1.csv
2018-11-16T11:00:09.386+0100 INFO log/harvester.go:216 Harvester started for file: C:\PATH\log2.csv
2018-11-16T11:00:09.758+0100 INFO log/harvester.go:216 Harvester started for file: C:\PATH\log3.csv
2018-11-16T11:00:10.357+0100 ERROR logstash/async.go:235 Failed to publish events caused by: write tcp FILEBEAT_HOST:64985->LOGSTASH_HOST:5044: wsasend: An existing connection was forcibly closed by the remote host.
2018-11-16T11:00:10.357+0100 ERROR logstash/async.go:235 Failed to publish events caused by: write tcp FILEBEAT_HOST:64982->LOGSTASH_HOST:5044: wsasend: An existing connection was forcibly closed by the remote host.
2018-11-16T11:00:10.357+0100 ERROR logstash/async.go:235 Failed to publish events caused by: write tcp FILEBEAT_HOST:64983->LOGSTASH_HOST:5044: wsasend: An existing connection was forcibly closed by the remote host.
2018-11-16T11:00:10.357+0100 ERROR logstash/async.go:235 Failed to publish events caused by: write tcp FILEBEAT_HOST:64984->LOGSTASH_HOST:5044: wsasend: An existing connection was forcibly closed by the remote host.
2018-11-16T11:00:11.372+0100 ERROR pipeline/output.go:92 Failed to publish events: write tcp FILEBEAT_HOST:64984->LOGSTASH_HOST:5044: wsasend: An existing connection was forcibly closed by the remote host.
2018-11-16T11:00:11.372+0100 ERROR pipeline/output.go:92 Failed to publish events: write tcp FILEBEAT_HOST:64982->LOGSTASH_HOST:5044: wsasend: An existing connection was forcibly closed by the remote host.
2018-11-16T11:00:11.372+0100 ERROR pipeline/output.go:92 Failed to publish events: write tcp FILEBEAT_HOST:64983->LOGSTASH_HOST:5044: wsasend: An existing connection was forcibly closed by the remote host.
2018-11-16T11:00:11.372+0100 ERROR pipeline/output.go:92 Failed to publish events: write tcp FILEBEAT_HOST:64985->LOGSTASH_HOST:5044: wsasend: An existing connection was forcibly closed by the remote host.

As far as I understand, the host on which the Logstash instance resides closes the connection immediately.

I have already set client_inactivity_timeout to 600 on my Logstash instance and restarted the service.
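For reference, that setting lives on the beats input in the Logstash pipeline configuration; with the placeholder port used throughout this post it looks roughly like this:

input {
  beats {
    port => 5044
    # keep idle Beats connections open for 10 minutes before closing them
    client_inactivity_timeout => 600
  }
}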

Filebeat itself has two prospectors set up:

  • the first prospector harvests certain lines out of a continuous log file
  • the second prospector harvests CSV files which are generated at intervals ranging from hourly to daily

The filebeat.yml is configured like so:

#=========================== Filebeat prospectors =============================
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - C:\PATH1\SERVICE.log
  include_lines: ['ELK']
  close_inactive: 12m
- type: log
  enabled: true
  paths:
    - "C:/PATH2/FOLDER1/*.csv"
    - "C:/PATH2/FOLDER2/*.csv"
    - "C:/PATH2/FOLDER3/*.csv"
    - "C:/PATH2/FOLDER4/*.csv"

#============================= Filebeat modules ===============================

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml

  reload.enabled: false

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["LOGSTASH_HOST:5044"]
  loadbalance: true
  worker: 4
#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: info
logging.to_files: true
logging.to_syslog: false
logging.files:
    path: "C:/PATH_TO_LOG"
    name: filebeat.log
    keepfiles: 7

Filebeat is running on version 6.2.0 while Logstash is still on 6.0.0.

Any help and pointers are appreciated!

Do you also have some Logstash logs? I wonder if Logstash or some other intermediary is closing the TCP connection on write.

The error logs are very close in time. For testing, can you reduce the number of workers to 1 in the Logstash output, so we get a better idea of the timings? Also enable debug logging if possible.

Was there a long pause between sends before this occurred?
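For testing, the two changes suggested above would look roughly like this in filebeat.yml (same placeholder host as in the original config):

output.logstash:
  hosts: ["LOGSTASH_HOST:5044"]
  loadbalance: true
  worker: 1   # reduced from 4 for testing

logging.level: debug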

I checked the Logstash logs beforehand and there was no sign of a connection being closed by Logstash itself.

There is around 10 minutes of pause between sends, since that is the smallest interval in which relevant data arrives in our continuous log.
As mentioned in my first post, the files from the second prospector are generated daily or hourly, meaning pauses between collections range from 1 to 24 hours.

I reduced the number of workers to 1 and enabled debug logging; after letting it run for some time and skimming the output, these are the "juiciest" bits I found. I have since turned debug logging off again.

Excerpt 1

** Writing to registry **
2018-11-19T07:05:20.671+0100	DEBUG	[harvester]	log/log.go:85	End of file reached: C:\LOG_PATH\CONTINOUS_LOG.log; Backoff now.
2018-11-19T07:05:21.187+0100	INFO	log/harvester.go:241	File is inactive: C:\LOG_PATH2\SUBFOLDER1\LOG1.csv. Closing because close_inactive of 5m0s reached.
2018-11-19T07:05:21.187+0100	DEBUG	[harvester]	log/harvester.go:468	Stopping harvester for file: C:\LOG_PATH2\SUBFOLDER1\LOG1.csv
2018-11-19T07:05:21.187+0100	DEBUG	[harvester]	log/harvester.go:478	Closing file: C:\LOG_PATH2\SUBFOLDER1\LOG1.csv
2018-11-19T07:05:21.187+0100	DEBUG	[harvester]	log/harvester.go:348	Update state: C:\LOG_PATH2\SUBFOLDER1\LOG1.csv, offset: 86
2018-11-19T07:05:21.187+0100	DEBUG	[harvester]	log/harvester.go:489	harvester cleanup finished for file: C:\LOG_PATH2\SUBFOLDER1\LOG1.csv
** Writing to registry **
2018-11-19T07:05:21.375+0100	INFO	log/harvester.go:241	File is inactive: C:\LOG_PATH2\SUBFOLDER2\LOG2.csv. Closing because close_inactive of 5m0s reached.
2018-11-19T07:05:21.375+0100	DEBUG	[harvester]	log/harvester.go:468	Stopping harvester for file: C:\LOG_PATH2\SUBFOLDER2\LOG2.csv
2018-11-19T07:05:21.375+0100	DEBUG	[harvester]	log/harvester.go:478	Closing file: C:\LOG_PATH2\SUBFOLDER2\LOG2.csv
2018-11-19T07:05:21.375+0100	DEBUG	[harvester]	log/harvester.go:348	Update state: C:\LOG_PATH2\SUBFOLDER2\LOG2.csv, offset: 0
2018-11-19T07:05:21.375+0100	DEBUG	[harvester]	log/harvester.go:489	harvester cleanup finished for file: C:\LOG_PATH2\SUBFOLDER2\LOG2.csv
** Writing to registry **
2018-11-19T07:05:22.267+0100	INFO	log/harvester.go:241	File is inactive: C:\LOG_PATH2\SUBFOLDER3\LOG3.csv. Closing because close_inactive of 5m0s reached.
2018-11-19T07:05:22.267+0100	DEBUG	[harvester]	log/harvester.go:468	Stopping harvester for file: C:\LOG_PATH2\SUBFOLDER3\LOG3.csv
2018-11-19T07:05:22.267+0100	DEBUG	[harvester]	log/harvester.go:478	Closing file: C:\LOG_PATH2\SUBFOLDER3\LOG3.csv
2018-11-19T07:05:22.267+0100	DEBUG	[harvester]	log/harvester.go:348	Update state: C:\LOG_PATH2\SUBFOLDER3\LOG3.csv, offset: 0
2018-11-19T07:05:22.267+0100	DEBUG	[harvester]	log/harvester.go:489	harvester cleanup finished for file: C:\LOG_PATH2\SUBFOLDER3\LOG3.csv
** Writing to registry **
2018-11-19T07:05:22.673+0100	DEBUG	[harvester]	log/log.go:85	End of file reached: C:\LOG_PATH\CONTINOUS_LOG.log; Backoff now.
2018-11-19T07:05:26.683+0100	DEBUG	[harvester]	log/log.go:85	End of file reached: C:\LOG_PATH\CONTINOUS_LOG.log; Backoff now.
2018-11-19T07:05:28.120+0100	DEBUG	[prospector]	prospector/prospector.go:124	Run prospector
2018-11-19T07:05:28.120+0100	DEBUG	[prospector]	log/prospector.go:147	Start next scan
2018-11-19T07:05:28.120+0100	DEBUG	[prospector]	log/prospector.go:361	Check file for harvesting: C:\LOG_PATH\CONTINOUS_LOG.log
2018-11-19T07:05:28.120+0100	DEBUG	[prospector]	log/prospector.go:447	Update existing file for harvesting: C:\LOG_PATH\CONTINOUS_LOG.log, offset: 1099117
2018-11-19T07:05:28.120+0100	DEBUG	[prospector]	log/prospector.go:499	Harvester for file is still running: C:\LOG_PATH\CONTINOUS_LOG.log
2018-11-19T07:05:28.120+0100	DEBUG	[prospector]	log/prospector.go:168	Prospector states cleaned up. Before: 1, After: 1

I don't see anything out of the ordinary here. The service that's running our log generation may create empty files if nothing is found.

Excerpt 2

2018-11-19T07:15:20.126+0100	DEBUG	[publish]	pipeline/processor.go:275	Publish event: {
** PUBLISHED EVENT **
}
** DROPPED LINES **
2018-11-19T07:15:20.126+0100	DEBUG	[harvester]	log/log.go:85	End of file reached: C:\LOG_PATH\CONTINOUS_LOG.log; Backoff now.
** Writing to registry **
2018-11-19T07:15:21.161+0100	DEBUG	[harvester]	log/log.go:85	End of file reached: C:\LOG_PATH\CONTINOUS_LOG.log; Backoff now.
2018-11-19T07:15:21.161+0100	DEBUG	[transport]	transport/client.go:201	handle error: write tcp FILEBEAT_HOST:58029->LOGSTASH_HOST:5044: wsasend: An existing connection was forcibly closed by the remote host.
2018-11-19T07:15:21.161+0100	DEBUG	[logstash]	logstash/async.go:142	17 events out of 17 events sent to logstash host LOGSTASH_HOST:5044. Continue sending
2018-11-19T07:15:21.161+0100	DEBUG	[logstash]	logstash/async.go:99	close connection
2018-11-19T07:15:21.161+0100	DEBUG	[transport]	transport/client.go:114	closing
2018-11-19T07:15:21.161+0100	ERROR	logstash/async.go:235	Failed to publish events caused by: write tcp FILEBEAT_HOST:58029->LOGSTASH_HOST:5044: wsasend: An existing connection was forcibly closed by the remote host.
2018-11-19T07:15:21.161+0100	DEBUG	[logstash]	logstash/async.go:99	close connection
2018-11-19T07:15:22.161+0100	ERROR	pipeline/output.go:92	Failed to publish events: write tcp FILEBEAT_HOST:58029->LOGSTASH_HOST:5044: wsasend: An existing connection was forcibly closed by the remote host.
2018-11-19T07:15:22.161+0100	DEBUG	[logstash]	logstash/async.go:94	connect
2018-11-19T07:15:22.161+0100	DEBUG	[logstash]	logstash/async.go:142	17 events out of 17 events sent to logstash host LOGSTASH_HOST:5044. Continue sending
** Writing to registry **
2018-11-19T07:15:23.163+0100	DEBUG	[harvester]	log/log.go:85	End of file reached: C:\LOG_PATH\CONTINOUS_LOG.log; Backoff now.
2018-11-19T07:15:27.174+0100	DEBUG	[harvester]	log/log.go:85	End of file reached: C:\LOG_PATH\CONTINOUS_LOG.log; Backoff now.

I can see Filebeat working its magic, collecting the data and stating that it was sent to Logstash.
The data in question actually ended up in Kibana.

Another idea I had:

Should I attempt running two Filebeat instances? One for the continuous log and one for the others, sending data to Logstash on different ports.

Ah, I didn't think about the output being asynchronous. As you don't have many logs and there are very long intervals with no data being published, you can try disabling async publishing and enabling a connection TTL. With these settings Filebeat will close and reconnect every now and then. Maybe this helps:

output.logstash:
  pipelining: 0  # disable async output (also improves readability of logs)
  ttl: 60s # force a reconnect after 60s (requires "pipelining: 0")
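For completeness, merged into the original output section the whole block would look roughly like this (as noted above, ttl only takes effect once pipelining is disabled):

output.logstash:
  hosts: ["LOGSTASH_HOST:5044"]
  loadbalance: true
  worker: 4
  pipelining: 0  # disable async output
  ttl: 60s       # force a reconnect after 60s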

That did the trick!

Thanks again for your help @steffens!
