ERR " write: broken pipe"

I'm publishing events to Logstash and see the following error:
2018/01/11 11:24:03.470328 async.go:235: ERR Failed to publish events caused by: write tcp 127.0.0.1:57017->127.0.0.1:5044: write: broken pipe
2018/01/11 11:24:04.471483 output.go:92: ERR Failed to publish events: write tcp 127.0.0.1:57017->127.0.0.1:5044: write: broken pipe

Thanks~

Hello @hidogs, I will need a bit more detail about your current configuration, and I have some questions to understand what is currently going on.

  1. Do you see this all the time in your Filebeat logs?
  2. When this situation happens, is there any network issue reaching the Logstash host?
  3. What version of Filebeat are you currently running?
  4. What version of Logstash are you currently running?

Thanks

First of all, thank you for your answer.

  1. No, it only happens sometimes.
  2. I'm not sure whether it's a network problem.
  3. Filebeat version: filebeat version 6.1.1 (amd64), libbeat 6.1.1
  4. Logstash version: logstash 6.1.1

Filebeat config:

filebeat.prospectors:
- type: log
  paths:
    - /tmp/*.log
  tail_files: true

output.logstash:
  hosts: ["localhost:5044"]

Logstash config:

input {
    beats {
        port => "5044"
    }
}
filter {
    grok {
        match => {
           "message" => "(?<request_time>\d\d:\d\d:\d\d\.\d+)\s+\[(?<thread_id>[\w\-]+)\]\s+(?<log_level>\w+)\s+(?<class_name>\w+)\s+\-\s+\[(?<request_id>\w+)\]\s+(?<detail>.+)"
        }
    }
}

output {
    datahub {
        access_id => "***"
        access_key => "***"
        endpoint => "***"
        project_name => "***"
        topic_name => "***"
        #shard_id => "0"
        #shard_keys => ["thread_id"]
        dirty_data_continue => true
        dirty_data_file => "/tmp/dirty.data"
        dirty_data_file_max_size => 1000
    }
}

Log content:

20:04:30.359 [qtp1453606810-20] INFO AuditInterceptor - [13pn9kdr5tl84stzkmaa8vmg] end /web/v1/project/fhp4clxfbu0w3ym2n7ee6ynh/statistics?executionName=bayes_poc_test GET, 187 ms
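
For reference, here is roughly what the grok pattern above should extract from that sample line. This is a hand-traced sketch, not captured Logstash output, so treat the exact values as illustrative:

request_time => "20:04:30.359"
thread_id    => "qtp1453606810-20"
log_level    => "INFO"
class_name   => "AuditInterceptor"
request_id   => "13pn9kdr5tl84stzkmaa8vmg"
detail       => "end /web/v1/project/fhp4clxfbu0w3ym2n7ee6ynh/statistics?executionName=bayes_poc_test GET, 187 ms"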

And I have another question.
When I modify the log file, all records are published to Logstash, not just the latest record.

There are two kinds of logs.
one:
2018/01/12 02:00:54.002115 harvester.go:215: INFO Harvester started for file: /tmp/INFO_17_01_11.log
2018/01/12 02:00:54.002558 processor.go:275: DBG [publish] Publish event: {log content}
2018/01/12 02:00:54.002558 processor.go:275: DBG [publish] Publish event: {log content}
2018/01/12 02:00:54.002558 processor.go:275: DBG [publish] Publish event: {log content}
...
2018/01/12 02:00:55.009330 async.go:235: ERR Failed to publish events caused by: write tcp [::1]:61579->[::1]:5044: write: broken pipe
2018/01/12 02:00:56.014211 output.go:92: ERR Failed to publish events: write tcp [::1]:61579->[::1]:5044: write: broken pipe
2018/01/12 02:01:23.515368 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30001 beat.memstats.gc_next=5635296 beat.memstats.memory_alloc=2855728 beat.memstats.memory_total=10677992 filebeat.events.added=11 filebeat.events.done=11 filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.config.module.running=0 libbeat.output.events.acked=9 libbeat.output.events.batches=2 libbeat.output.events.failed=9 libbeat.output.events.total=18 libbeat.output.read.bytes=6 libbeat.output.write.bytes=596 libbeat.output.write.errors=1 libbeat.pipeline.clients=1 libbeat.pipeline.events.active=0 libbeat.pipeline.events.filtered=2 libbeat.pipeline.events.published=9 libbeat.pipeline.events.retry=18 libbeat.pipeline.events.total=11 libbeat.pipeline.queue.acked=9 registrar.states.cleanup=1 registrar.states.current=1 registrar.states.update=11 registrar.writes=3

two:
2018/01/12 02:00:54.002115 harvester.go:215: INFO Harvester started for file: /tmp/INFO_17_01_11.log
2018/01/12 02:00:54.002558 processor.go:275: DBG [publish] Publish event: {log content}
2018/01/12 02:00:54.002558 processor.go:275: DBG [publish] Publish event: {log content}
2018/01/12 02:00:54.002558 processor.go:275: DBG [publish] Publish event: {log content}
...
2018/01/12 02:04:39.469680 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30001 beat.memstats.gc_next=5642048 beat.memstats.memory_alloc=2851072 beat.memstats.memory_total=4512256 filebeat.events.added=12 filebeat.events.done=12 filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.config.module.running=0 libbeat.output.events.acked=10 libbeat.output.events.batches=1 libbeat.output.events.total=10 libbeat.output.read.bytes=6 libbeat.output.write.bytes=654 libbeat.pipeline.clients=1 libbeat.pipeline.events.active=0 libbeat.pipeline.events.filtered=2 libbeat.pipeline.events.published=10 libbeat.pipeline.events.retry=10 libbeat.pipeline.events.total=12 libbeat.pipeline.queue.acked=10 registrar.states.cleanup=1 registrar.states.current=1 registrar.states.update=12 registrar.writes=3

I suspect it's an ACK problem,
but I don't know how to handle it.

Thank you so much~ :grin:

And I have another question. When I modify the log file, all records are published to Logstash, not just the latest record.

Usually the registry keeps track of the file offset and only sends what it hasn't already read.
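
For context, each entry in Filebeat's registry file (data/registry in 6.x) records the path, the byte offset already shipped, and the OS-level file identity. The entry below is a sketch of the 6.x JSON format with made-up values, not output from your machine:

[
  {
    "source": "/tmp/INFO_17_01_11.log",
    "offset": 1234,
    "timestamp": "2018-01-12T02:00:54.002Z",
    "ttl": -1,
    "type": "log",
    "FileStateOS": { "inode": 4305672956, "device": 16777220 }
  }
]

If the inode in FileStateOS no longer matches the file on disk, Filebeat treats it as a new file and reads it again from offset 0, which is relevant to your duplicate-records question.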

Let's sort out the Beats network issue first.

In the Logstash directory, can you give me the output of the following command: bin/logstash-plugin list --verbose beats?

Hi,
Maybe Vim caused the records to be repeated.
The log file was created with Vim, and records were inserted with Vim.

When I append with the following command instead:

echo "xxxxx" >> xx.log

the problem doesn't exist.

The output of bin/logstash-plugin list --verbose beats:

logstash-input-beats (5.0.4)

And the datahub output plugin was installed from a separate download.

I really appreciate it.
:rofl::rofl::rofl:

Hello, it's because Filebeat tracks inode changes to know whether it needs to read the file or not. When you edit with vim, every time you save the file the inode changes. Look at the number on the left in the following example.

ph@sashimi /tmp  ls -li o.log
4305672956 -rw-r--r--  1 ph  wheel  21 Jan 22 10:05 o.log
ph@sashimi /tmp  vim o.log
ph@sashimi /tmp  ls -li o.log
4305672969 -rw-r--r--  1 ph  wheel  30 Jan 22 10:05 o.log

When you are using echo, the inode won't change; this is the normal behavior when you are logging, since you are appending to the file.

ph@sashimi /tmp  ls -li echo.log
4305673010 -rw-r--r--  1 ph  wheel  2 Jan 22 10:06 echo.log
ph@sashimi /tmp  echo "no inode change" >> echo.log
ph@sashimi /tmp  ls -li echo.log
4305673010 -rw-r--r--  1 ph  wheel  18 Jan 22 10:06 echo.log

Concerning the write broken pipe, it can happen for a few reasons:

  • a network issue
  • the client was idle for a long period of time and Logstash decided to kill the connection.

Beats will make sure that all the events are ACKed in that case; batches that fail are retried, which is why you see libbeat.pipeline.events.retry in your metrics.
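
If the idle-connection theory fits, one knob on the Filebeat side is the Logstash output's ttl setting, which recycles connections before the server can time them out. A minimal sketch, assuming the documented output.logstash options in Filebeat 6.x (per the docs, ttl is not honored while pipelining is enabled):

output.logstash:
  hosts: ["localhost:5044"]
  # Reconnect every 5 minutes so a server-side idle timeout is less
  # likely to kill the connection mid-batch.
  ttl: 5m
  # ttl currently requires the synchronous client, i.e. pipelining off.
  pipelining: 0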

Thank you very much for your patient answer.
I'll keep watching for the write broken pipe.

Thanks again.
:grin::grin::grin:

Hi,

I am new to ELK, but I found the reason why this error happens.
You can add client_inactivity_timeout => "1200" to the beats input:

input {
    beats {
        port => "5044"
        client_inactivity_timeout => "1200"
    }
}

The default value for this option is 60 seconds, so Logstash closes connections that have been idle for a minute.
Hope it helps.
