Strange filebeat behavior: timeout and duplicate pushing to logstash's beats plugin

Hi,

During my migration from 5.1.2 / 5.2.0 to 6.1.x / 6.2.1 I encountered some issues. While trying to narrow down the cause, I noticed strange behavior when shipping files from filebeat to logstash.

I get reproducible issues when shipping one of our logfiles to logstash 6.2.1. It doesn't matter whether I use filebeat 5.6.7 or 6.2.1.

Here is my testcase:

  • start logstash
  • start filebeat
  • logfile_which_is_harvested_by_filebeat does not exist yet
  • create logfile by cat logfile_input > logfile_which_is_harvested_by_filebeat
  • the logfile has 2002 lines / events
  • check the processing in logstash
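
For reference, the filebeat side is configured roughly like this (a simplified sketch, not the exact file; the custom fields serverType / stage / logType are added via fields_under_root):

filebeat.prospectors:
  - type: log
    paths:
      - /mnt/resource/benchmark/input/useractions-anonymous.log
    fields:
      serverType: tux
      stage: Production
      logType: useractionlog
    fields_under_root: true

output.logstash:
  hosts: ["localhost:5044"]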

In summary, my issue is the following:

  • the first shipping works fine
  • then I get timeouts in filebeat while publishing to logstash
  • filebeat retries over and over again
  • logstash is processing MORE input data than my INPUT is offering -> DUPLICATES, some kind of infinite loop

Here is part of my filebeat 6.2.1 debug log:

--> there are 4004 log lines of this kind:
DEBUG	[publish]	pipeline/processor.go:275	Publish event: {

--> details of the last publish event:
2018-03-01T18:00:26.362Z	DEBUG	[publish]	pipeline/processor.go:275	Publish event: {
  "@timestamp": "2018-03-01T18:00:26.362Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.2.1"
  },
  "serverType": "tux",
  "stage": "Production",
  "beat": {
    "name": "elasticsearch-test",
    "hostname": "elasticsearch-test",
    "version": "6.2.1"
  },
  "source": "/mnt/resource/benchmark/input/useractions-anonymous.log",
  "offset": 581333,
  "message": "BNCH_E;210;2000;LS_VERSION=6.2.1;useractionlog_debug",
  "logType": "useractionlog"
}
2018-03-01T18:00:26.363Z	DEBUG	[harvester]	log/log.go:85	End of file reached: /mnt/resource/benchmark/input/useractions-anonymous.log; Backoff now.
2018-03-01T18:00:27.202Z	DEBUG	[logstash]	logstash/async.go:94	connect
2018-03-01T18:00:27.206Z	DEBUG	[prospector]	prospector/prospector.go:124	Run prospector
2018-03-01T18:00:27.206Z	DEBUG	[prospector]	log/prospector.go:147	Start next scan
2018-03-01T18:00:27.206Z	DEBUG	[prospector]	log/prospector.go:168	Prospector states cleaned up. Before: 0, After: 0
2018-03-01T18:00:27.232Z	DEBUG	[logstash]	logstash/async.go:142	2002 events out of 2002 events sent to logstash host localhost:5044. Continue sending
2018-03-01T18:00:27.364Z	DEBUG	[harvester]	log/log.go:85	End of file reached: /mnt/resource/benchmark/input/useractions-anonymous.log; Backoff now.
2018-03-01T18:00:28.208Z	DEBUG	[prospector]	prospector/prospector.go:124	Run prospector
2018-03-01T18:00:28.208Z	DEBUG	[prospector]	log/prospector.go:147	Start next scan
2018-03-01T18:00:28.208Z	DEBUG	[prospector]	log/prospector.go:168	Prospector states cleaned up. Before: 0, After: 0
2018-03-01T18:00:29.208Z	DEBUG	[prospector]	prospector/prospector.go:124	Run prospector
2018-03-01T18:00:29.208Z	DEBUG	[prospector]	log/prospector.go:147	Start next scan
2018-03-01T18:00:29.208Z	DEBUG	[prospector]	log/prospector.go:168	Prospector states cleaned up. Before: 0, After: 0
2018-03-01T18:00:29.364Z	DEBUG	[harvester]	log/log.go:85	End of file reached: /mnt/resource/benchmark/input/useractions-anonymous.log; Backoff now.
2018-03-01T18:00:30.208Z	DEBUG	[prospector]	prospector/prospector.go:124	Run prospector
2018-03-01T18:00:30.208Z	DEBUG	[prospector]	log/prospector.go:147	Start next scan
2018-03-01T18:00:30.208Z	DEBUG	[prospector]	log/prospector.go:168	Prospector states cleaned up. Before: 0, After: 0
2018-03-01T18:00:31.176Z	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":100,"time":107},"total":{"ticks":340,"time":347,"value":340},"user":{"ticks":240,"time":240}},"info":{"ephemeral_id":"f9fdc99c-0058-44f2-8712-8b36107c1653","uptime":{"ms":30025}},"memstats":{"gc_next":9761104,"memory_alloc":5481640,"memory_total":24155848,"rss":20070400}},"filebeat":{"events":{"active":2002,"added":2003,"done":1},"harvester":{"open_files":1,"running":1,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":2002,"batches":1,"total":2002},"type":"logstash","write":{"bytes":127533}},"pipeline":{"clients":26,"events":{"active":2002,"filtered":1,"published":2002,"retry":2002,"total":2003}}},"registrar":{"states":{"current":1,"update":1},"writes":2},"system":{"cpu":{"cores":4},"load":{"1":1.55,"15":1.49,"5":1.44,"norm":{"1":0.3875,"15":0.3725,"5":0.36}}}}}}
2018-03-01T18:00:31.201Z	DEBUG	[prospector]	prospector/prospector.go:124	Run prospector
2018-03-01T18:00:31.201Z	DEBUG	[prospector]	log/prospector.go:147	Start next scan
2018-03-01T18:00:31.201Z	DEBUG	[prospector]	log/prospector.go:168	Prospector states cleaned up. Before: 0, After: 0
2018-03-01T18:00:31.201Z	DEBUG	[prospector]	prospector/prospector.go:124	Run prospector

...

--> every 5 seconds, one of the following outputs:
2018-03-01T18:00:36.204Z	DEBUG	[prospector]	log/prospector.go:361	Check file for harvesting: /mnt/resource/benchmark/input/useractions-anonymous.log
2018-03-01T18:00:36.204Z	DEBUG	[prospector]	log/prospector.go:447	Update existing file for harvesting: /mnt/resource/benchmark/input/useractions-anonymous.log, offset: 581333
2018-03-01T18:00:36.204Z	DEBUG	[prospector]	log/prospector.go:499	Harvester for file is still running: /mnt/resource/benchmark/input/useractions-anonymous.log
2018-03-01T18:00:36.204Z	DEBUG	[prospector]	log/prospector.go:168	Prospector states cleaned up. Before: 1, After: 1

or

2018-03-01T18:00:41.367Z	DEBUG	[harvester]	log/log.go:85	End of file reached: /mnt/resource/benchmark/input/useractions-anonymous.log; Backoff now.

...

--> a bit later I often got these messages:

2018-03-01T18:01:59.254Z	DEBUG	[prospector]	log/prospector.go:147	Start next scan
2018-03-01T18:01:59.254Z	DEBUG	[prospector]	log/prospector.go:168	Prospector states cleaned up. Before: 0, After: 0
2018-03-01T18:01:59.349Z	DEBUG	[transport]	transport/client.go:201	handle error: read tcp [::1]:59322->[::1]:5044: i/o timeout
2018-03-01T18:01:59.349Z	ERROR	logstash/async.go:235	Failed to publish events caused by: read tcp [::1]:59322->[::1]:5044: i/o timeout
2018-03-01T18:01:59.349Z	DEBUG	[transport]	transport/client.go:114	closing
2018-03-01T18:01:59.376Z	DEBUG	[logstash]	logstash/async.go:142	2002 events out of 2002 events sent to logstash host localhost:5044. Continue sending
2018-03-01T18:01:59.376Z	DEBUG	[logstash]	logstash/async.go:99	close connection
2018-03-01T18:01:59.376Z	DEBUG	[logstash]	logstash/async.go:99	close connection
2018-03-01T18:01:59.376Z	ERROR	logstash/async.go:235	Failed to publish events caused by: client is not connected
2018-03-01T18:02:00.255Z	DEBUG	[prospector]	prospector/prospector.go:124	Run prospector
2018-03-01T18:02:00.255Z	DEBUG	[prospector]	log/prospector.go:147	Start next scan
2018-03-01T18:02:00.255Z	DEBUG	[prospector]	log/prospector.go:168	Prospector states cleaned up. Before: 0, After: 0
2018-03-01T18:02:00.377Z	ERROR	pipeline/output.go:92	Failed to publish events: client is not connected
2018-03-01T18:02:00.377Z	DEBUG	[logstash]	logstash/async.go:94	connect
2018-03-01T18:02:00.403Z	DEBUG	[logstash]	logstash/async.go:142	2002 events out of 2002 events sent to logstash host localhost:5044. Continue sending
2018-03-01T18:02:01.176Z	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":140,"time":142},"total":{"ticks":610,"time":620,"value":610},"user":{"ticks":470,"time":478}},"info":{"ephemeral_id":"f9fdc99c-0058-44f2-8712-8b36107c1653","uptime":{"ms":120025}},"memstats":{"gc_next":10079840,"memory_alloc":7263936,"memory_total":38221080,"rss":40960}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"batches":2,"failed":4004,"total":4004},"read":{"errors":1},"write":{"bytes":127585}},"pipeline":{"clients":26,"events":{"active":2002,"retry":6006}}},"registrar":{"states":{"current":1}},"system":{"load":{"1":1.86,"15":1.53,"5":1.58,"norm":{"1":0.465,"15":0.3825,"5":0.395}}}}}}
2018-03-01T18:02:01.212Z	DEBUG	[prospector]	prospector/prospector.go:124	Run prospector
2018-03-01T18:02:01.212Z	DEBUG	[prospector]	prospector/prospector.go:124	Run prospector
2018-03-01T18:02:01.212Z	DEBUG	[prospector]	log/prospector.go:147	Start next scan

To me it looks like filebeat does not receive any ACK from logstash, so it keeps resending. I did not find any errors in logstash at log level INFO.

Another interesting aspect:
The problem does not occur with every logfile. Could an unclean logfile (e.g. some unprintable characters in a UTF-8 or Windows-1252 charset) lead to such an error?

I don't know whether the following is the same or only a similar issue:

I checked logstash's pipeline statistics.
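
For reference, I pulled these via the Logstash node stats API, roughly like this:

curl -XGET 'http://localhost:9600/_node/stats/pipelines?pretty'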

"pipelines": {
		"main": {
		  "events": {
        "duration_in_millis": 4798742,
        "in": 14641,
        "out": 13125,
        "filtered": 13125,
        "queue_push_duration_in_millis": 13840779
      },
      "plugins": {
        "inputs": [{
          "id": "input:beats",
          "events": {
            "out": 14641,
            "queue_push_duration_in_millis": 13840779
          },
          "current_connections": 17,
          "name": "beats",
          "peak_connections": 17
        }],

So we can see that many more messages have been transferred than my input contains.

BTW, the same issue (I only checked the logstash output here) seems to occur if I use filebeat 5.6.7 to ship to logstash 6.2.1. With the same config / logfile, shipping from filebeat 5.6.7 to logstash 5.2.0 works like a charm.

Sorry, I reached the 7k character limit for a single post, so I split it up.

Thanks for the detailed report, it's very helpful.

What kind of filesystem is mounted in /mnt/resource? Is it possible that it is a network volume?

If that is the case, note that it is not a recommended setup (see the FAQ) because of issues like this: the inode of the file changes, which leads filebeat to believe it is a new file and to send it all over again.
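
A quick way to verify is to watch the file's inode while the resending is happening, e.g. (assuming GNU coreutils):

stat -c 'inode=%i size=%s' /mnt/resource/benchmark/input/useractions-anonymous.log

If the inode changes between checks, filebeat will treat it as a new file.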

[root@elasticsearch-test butch]# mount | grep /mnt/resource
/dev/sdb1 on /mnt/resource type ext4 (rw,relatime,seclabel,data=ordered)

It's an Azure VM. /mnt/resource is what they call temporary storage (it can be wiped after the machine has been down for some time), but as I understand it, it is normal local storage.

I no longer have the logs from our failed production migration. I also saw timeouts in the logs there, and the issue "felt" pretty similar; I was using logstash 6.1.0 at that point. After that I tried to reproduce and fix the errors on the test VM described above.

I am also seeing the same issues with Filebeat v6.2.2-1 and Logstash v6.2.2.1.

Telnet to logstash port 5044 works fine.

telnet 192.168.26.43 5044
Trying 192.168.26.43...
Connected to 192.168.26.43.
Escape character is '^]'.

The logs from Filebeat show the connection getting reset.

  },
  "message": "Mar  6 06:25:01 test-server-1 systemd[1]: Stopping User Slice of pcp.",
  "source": "/var/log/messages",
  "offset": 426796,
  "prospector": {
    "type": "log"
  }
}
2018-03-06T06:25:08.137Z        DEBUG   [harvester]     log/log.go:85   End of file reached: /var/log/messages; Backoff now.
2018-03-06T06:25:09.138Z        DEBUG   [harvester]     log/log.go:85   End of file reached: /var/log/messages; Backoff now.
2018-03-06T06:25:09.138Z        DEBUG   [transport]     transport/client.go:201 handle error: write tcp 192.168.25.253:37032->192.168.26.43:5044: write: connection reset by peer
2018-03-06T06:25:09.138Z        DEBUG   [logstash]      logstash/async.go:142   8 events out of 8 events sent to logstash host 192.168.26.43:5044. Continue sending
2018-03-06T06:25:09.138Z        DEBUG   [logstash]      logstash/async.go:99    close connection
2018-03-06T06:25:09.139Z        DEBUG   [transport]     transport/client.go:114 closing
2018-03-06T06:25:09.139Z        ERROR   logstash/async.go:235   Failed to publish events caused by: write tcp 192.168.25.253:37032->192.168.26.43:5044: write: connection reset by peer
2018-03-06T06:25:09.139Z        DEBUG   [logstash]      logstash/async.go:99    close connection
2018-03-06T06:25:10.139Z        ERROR   pipeline/output.go:92   Failed to publish events: write tcp 192.168.25.253:37032->192.168.26.43:5044: write: connection reset by peer
2018-03-06T06:25:10.139Z        DEBUG   [logstash]      logstash/async.go:94    connect
2018-03-06T06:25:11.138Z        DEBUG   [harvester]     log/log.go:85   End of file reached: /var/log/messages; Backoff now.
2018-03-06T06:25:12.139Z        ERROR   pipeline/output.go:74   Failed to connect: dial tcp 192.168.26.43:5044: getsockopt: connection refused
2018-03-06T06:25:12.140Z        DEBUG   [logstash]      logstash/async.go:94    connect
2018-03-06T06:25:13.169Z        DEBUG   [prospector]    prospector/prospector.go:124    Run prospector
2018-03-06T06:25:13.169Z        DEBUG   [prospector]    log/prospector.go:147   Start next scan
2018-03-06T06:25:13.169Z        DEBUG   [prospector]    log/prospector.go:361   Check file for harvesting: /var/log/messages
2018-03-06T06:25:13.169Z        DEBUG   [prospector]    log/prospector.go:447   Update existing file for harvesting: /var/log/messages, offset: 426796
2018-03-06T06:25:13.169Z        DEBUG   [prospector]    log/prospector.go:499   Harvester for file is still running: /var/log/messages

I have also tried setting "client_inactivity_timeout => 1200" on the beats input on the logstash server, but I am still seeing the same issue. Any help is appreciated.
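
For reference, the beats input on the logstash side is configured roughly like this (a sketch):

input {
  beats {
    port => 5044
    client_inactivity_timeout => 1200
  }
}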

Here is the wireshark output.

378.014727634 192.168.25.253 -> 192.168.26.43 TCP 68 36930 > lxi-evntsvc [SYN] Seq=0 Win=28056 Len=0 MSS=2004 SACK_PERM=1 WS=512
378.014813978 192.168.26.43 -> 192.168.25.253 TCP 68 lxi-evntsvc > 36930 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=2004 SACK_PERM=1 WS=2048
378.014910063 192.168.25.253 -> 192.168.26.43 TCP 56 36930 > lxi-evntsvc [ACK] Seq=1 Ack=1 Win=28160 Len=0
378.016845555 192.168.25.253 -> 192.168.26.43 TCP 513 36930 > lxi-evntsvc [PSH, ACK] Seq=1 Ack=1 Win=28160 Len=457
378.016872662 192.168.26.43 -> 192.168.25.253 TCP 56 lxi-evntsvc > 36930 [ACK] Seq=1 Ack=458 Win=393216 Len=0
378.146897215 192.168.26.43 -> 192.168.25.253 TCP 62 lxi-evntsvc > 36930 [PSH, ACK] Seq=1 Ack=458 Win=393216 Len=6
378.146976626 192.168.25.253 -> 192.168.26.43 TCP 56 36930 > lxi-evntsvc [ACK] Seq=458 Ack=7 Win=28160 Len=0
379.072751959 192.168.26.43 -> 192.168.25.253 TCP 56 lxi-evntsvc > 36930 [RST, ACK] Seq=7 Ack=458 Win=393216 Len=0

Thank You

Are there any updates? Were you able to reproduce this at Elastic?

In my case, the issue only occurs if I use a split filter in the logstash parsing. If I comment out the split filter in the logstash filter section, I cannot reproduce it within 5 trials; otherwise 4-5 out of 5 trials are faulty. In the end, the count of output vs. input messages differs on many runs. So I think we have some kind of timing issue here.
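
For context, the split filter I am referring to has this general shape (the field name is just a placeholder, not my real config):

filter {
  split {
    field => "someArrayField"
  }
}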

I don't know how the handshake and ACK mechanism between filebeat and logstash is implemented:

  • I assume filebeat needs an ACK from logstash. Only after the ACK has been received are the messages removed from filebeat's output queue and the logfile offset advanced to the new position.
  • Does logstash only send the ACK to filebeat once the message has run through the output successfully? I am using logstash's memory queue.
  • If so, I think I am facing a timing issue: logstash may take a bit too long, so the ACK arrives too late for filebeat and filebeat resends.

What do you think about these thoughts?

I added

timeout: 60

to the logstash output in the filebeat config. The issue stays the same.
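
So the output section in filebeat.yml now looks roughly like this:

output.logstash:
  hosts: ["localhost:5044"]
  timeout: 60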

Can I assist by providing more logs, maybe also from logstash?
My feeling is that the issue's root cause is in logstash, not in filebeat; filebeat only shows a symptom of the overall system behavior. But I may be wrong.

I tried adding redis as a message broker (filebeat -> redis -> logstash).

Now the issue is gone.
Is it possible that the split filter in logstash disrupts the ACK mechanism of the beats input?

Do you need files to reproduce?
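
For reference, the interim setup looks roughly like this (hosts and key are placeholders):

filebeat:

output.redis:
  hosts: ["localhost:6379"]
  key: "filebeat"

logstash:

input {
  redis {
    host => "localhost"
    data_type => "list"
    key => "filebeat"
  }
}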

I have exactly the same issue. I'm using 6.2.2; filebeat runs on a local VM and the ELK stack is deployed on an Azure VM.
2018-03-16T00:21:37.054+0300 INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":125,"time":125},"total":{"ticks":1765,"time":1765,"value":1765},"user":{"ticks":1640,"time":1640}},"info":{"ephemeral_id":"c3339bb1-e08e-4161-ab83-6547f74068ea","uptime":{"ms":810072}},"memstats":{"gc_next":12484672,"memory_alloc":9399088,"memory_total":185428304,"rss":1937408}},"filebeat":{"harvester":{"open_files":11,"running":19}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"batches":4,"failed":4096,"total":4096},"read":{"errors":1},"write":{"bytes":120411}},"pipeline":{"clients":1,"events":{"active":4117,"retry":5120}}},"registrar":{"states":{"current":3}}}}}
2018-03-16T00:22:05.563+0300 ERROR logstash/async.go:235 Failed to publish events caused by: read tcp 10.225.9.49:62050->52.166.161.25:5044: i/o timeout
2018-03-16T00:22:05.563+0300 ERROR logstash/async.go:235 Failed to publish events caused by: read tcp 10.225.9.49:62050->52.166.161.25:5044: i/o timeout
2018-03-16T00:22:05.563+0300 ERROR logstash/async.go:235 Failed to publish events caused by: read tcp 10.225.9.49:62050->52.166.161.25:5044: i/o timeout
2018-03-16T00:22:05.578+0300 ERROR logstash/async.go:235 Failed to publish events caused by: client is not connected
...
As a result, the registry offsets are always 0 and it looks like the data is being sent over and over again.
I have hundreds of identical documents in my elastic indexes, differing only in _id.

Does the issue apply to all your logfiles, or is it also related to logstash's split filter in your case?

Can you provide the Logstash log from while the problem is happening? It seems there may be a problem with Logstash.

It's the same for all the logs.
Sometimes it goes away, but usually I face the issue and nothing helps. Nothing suspicious in the logs:

[2018-03-16T12:40:54,572][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.0.1.4:5044, remote: 31.44.80.66:63210] Received a new payload
[2018-03-16T12:40:54,573][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.0.1.4:5044, remote: 31.44.80.66:63210] Sending a new message for the listener, sequence: 1
[2018-03-16T12:40:54,574][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.0.1.4:5044, remote: 31.44.80.66:63210] Received a new payload
[2018-03-16T12:40:54,574][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.0.1.4:5044, remote: 31.44.80.66:63210] Sending a new message for the listener, sequence: 1
[2018-03-16T12:40:54,579][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.0.1.4:5044, remote: 31.44.80.66:63210] Sending a new message for the listener, sequence: 2
[2018-03-16T12:40:54,580][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.0.1.4:5044, remote: 31.44.80.66:63210] Sending a new message for the listener, sequence: 3
[2018-03-16T12:40:54,581][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.0.1.4:5044, remote: 31.44.80.66:63210] Received a new payload
[2018-03-16T12:40:54,581][DEBUG][org.logstash.beats.BeatsHandler] [local: 10.0.1.4:5044, remote: 31.44.80.66:63210] Sending a new message for the listener, sequence: 1
[2018-03-16T12:40:54,682][DEBUG][logstash.pipeline        ] filter received {"event"=>{"beat"=>{"name"=>"evg-ras-pa-1", "version"=>"6.2.2", "hostname"=>"evg-ras-pa-1"}, "prospector"=>{"type"=>"log"}, "host"=>"evg-ras-pa-1", "tags"=>["beats_input_codec_plain_applied"], "@timestamp"=>2018-03-16T12:40:33.725Z, "source"=>"c:\\ProgramData\\Parallels\\RASLogs\\controller.log", "@version"=>"1", "offset"=>5137685, "message"=>"[I 0E/00000000/T00001478] Fri Mar 16 15:40:30 2018 - Session login for user: rasuser800@testras, from IP: 10.225.9.49 was successful."}}
[2018-03-16T12:40:54,683][DEBUG][logstash.pipeline        ] filter received {"event"=>{"beat"=>{"name"=>"evg-ras-pa-1", "version"=>"6.2.2", "hostname"=>"evg-ras-pa-1"}, "prospector"=>{"type"=>"log"}, "host"=>"evg-ras-pa-1", "tags"=>["beats_input_codec_plain_applied"], "@timestamp"=>2018-03-16T12:40:33.725Z, "source"=>"c:\\ProgramData\\Parallels\\RASLogs\\controller.log", "@version"=>"1", "offset"=>5138421, "message"=>"[I 06/0000000A/T00001478] Fri Mar 16 15:40:30 2018 - User rasuser800@testras, Client html5-ef73545a, Address 10.225.9.49 wants to open \"#5\" Application, Server EVG-WS2016-3:3389 is available"}}
[2018-03-16T12:40:54,684][DEBUG][logstash.pipeline        ] filter received {"event"=>{"beat"=>{"name"=>"evg-ras-pa-1", "hostname"=>"evg-ras-pa-1", "version"=>"6.2.2"}, "prospector"=>{"type"=>"log"}, "host"=>"evg-ras-pa-1", "tags"=>["beats_input_codec_plain_applied"], "@timestamp"=>2018-03-16T12:40:41.772Z, "source"=>"c:\\ProgramData\\Parallels\\RASLogs\\controller.log", "@version"=>"1", "offset"=>5138961, "message"=>"[I 0E/00000000/T000015F0] Fri Mar 16 15:40:40 2018 - Session login for user: rasuser800@testras, from IP: 10.225.9.49 was successful."}}
[2018-03-16T12:40:54,684][DEBUG][logstash.filters.grok    ] Running grok filter {:event=>#<LogStash::Event:0x631cf0c7>}
[2018-03-16T12:40:54,686][DEBUG][logstash.pipeline        ] filter received {"event"=>{"beat"=>{"name"=>"evg-ras-pa-1", "hostname"=>"evg-ras-pa-1", "version"=>"6.2.2"}, "prospector"=>{"type"=>"log"}, "host"=>"evg-ras-pa-1", "tags"=>["beats_input_codec_plain_applied"], "@timestamp"=>2018-03-16T12:40:33.725Z, "source"=>"c:\\ProgramData\\Parallels\\RASLogs\\controller.log", "@version"=>"1", "offset"=>5138037, "message"=>"[I 06/00000034/T00001478] Fri Mar 16 15:40:30 2018 - Resource LB User 'rasuser800' Server 'EVG-WS2016-3' Session state: 6 - Reconnected to an Active Session (Username Match)\r"}}
[2018-03-16T12:40:54,687][DEBUG][logstash.pipeline        ] filter received {"event"=>{"beat"=>{"name"=>"evg-ras-pa-1", "hostname"=>"evg-ras-pa-1", "version"=>"6.2.2"}, "prospector"=>{"type"=>"log"}, "host"=>"evg-ras-pa-1", "tags"=>["beats_input_codec_plain_applied"], "@timestamp"=>2018-03-16T12:40:40.745Z, "source"=>"c:\\ProgramData\\Parallels\\RASLogs\\controller.log", "@version"=>"1", "offset"=>5138691, "message"=>"[I 0E/00000000/T00000EAC] Fri Mar 16 15:40:40 2018 - Session login for user: rasuser800@testras, from IP: 10.225.9.49 was successful."}}
[2018-03-16T12:40:54,689][DEBUG][logstash.filters.grok    ] Event now:  {:event=>#<LogStash::Event:0x631cf0c7>}

and this is what I see in the filebeat log:

2018-03-16T15:40:53.955+0300	ERROR	logstash/async.go:235	Failed to publish events caused by: read tcp 10.225.9.49:63171->52.166.161.25:5044: wsarecv: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
2018-03-16T15:40:53.955+0300	ERROR	logstash/async.go:235	Failed to publish events caused by: read tcp 10.225.9.49:63171->52.166.161.25:5044: wsarecv: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
2018-03-16T15:40:53.955+0300	ERROR	logstash/async.go:235	Failed to publish events caused by: read tcp 10.225.9.49:63171->52.166.161.25:5044: wsarecv: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
2018-03-16T15:40:53.955+0300	ERROR	logstash/async.go:235	Failed to publish events caused by: client is not connected
2018-03-16T15:40:54.958+0300	ERROR	pipeline/output.go:92	Failed to publish events: client is not connected
