Panic error, "panic: name bytes_read already used"


#1

Hi,
I receive the following panic error:

panic: name bytes_read already used

goroutine 41 [running]:
panic(0x9250a0, 0xc4203279a0)
/usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/elastic/beats/libbeat/monitoring.panicErr(0xc77f00, 0xc4203279a0)
/go/src/github.com/elastic/beats/libbeat/monitoring/registry.go:227 +0x5c
github.com/elastic/beats/libbeat/monitoring.(*Registry).Add(0xc4201898c0, 0x9de67a, 0xa, 0xc7a980, 0xc420327970, 0x10101)
/go/src/github.com/elastic/beats/libbeat/monitoring/registry.go:129 +0xb2
github.com/elastic/beats/libbeat/monitoring/adapter.(*GoMetricsRegistry).doRegister(0xc42017ab60, 0x9e41b7, 0x12, 0x8ff600, 0xa29698, 0xc4202b2000, 0x7f5d51c87960)
/go/src/github.com/elastic/beats/libbeat/monitoring/adapter/go-metrics.go:130 +0x1e7
github.com/elastic/beats/libbeat/monitoring/adapter.(*GoMetricsRegistry).GetOrRegister(0xc42017ab60, 0x9e41b7, 0x12, 0x8ff600, 0xa29698, 0x20, 0x99e440)
/go/src/github.com/elastic/beats/libbeat/monitoring/adapter/go-metrics.go:103 +0x88
github.com/elastic/beats/vendor/github.com/rcrowley/go-metrics.GetOrRegisterMeter(0x9e41b7, 0x12, 0xc838a0, 0xc42017ab60, 0x1000, 0x1000)
/go/src/github.com/elastic/beats/vendor/github.com/rcrowley/go-metrics/meter.go:26 +0x6a
github.com/elastic/beats/vendor/github.com/Shopify/sarama.(*Broker).Open.func1()
/go/src/github.com/elastic/beats/vendor/github.com/Shopify/sarama/broker.go:106 +0x430
github.com/elastic/beats/vendor/github.com/Shopify/sarama.withRecover(0xc42032a3a0)
/go/src/github.com/elastic/beats/vendor/github.com/Shopify/sarama/utils.go:46 +0x43
created by github.com/elastic/beats/vendor/github.com/Shopify/sarama.(*Broker).Open
/go/src/github.com/elastic/beats/vendor/github.com/Shopify/sarama/broker.go:149 +0x15c

This is my config file:

max_procs: 3
filebeat.spool_size: 4096
filebeat.idle_timeout: 5s

filebeat.prospectors:

  - input_type: log
    paths:
    - /data/log/test.log
      json.keys_under_root: true
      fields_under_root: true
      tail_files: true
      tags: ["test_log"]
      fields:
      topic: "test_topic"

output.kafka:
hosts: ["kafkahost:9092",]
topic: "%{[topic]}"
partition.round_robin:
reachable_only: true
required_acks: 1
compression: gzip
max_message_bytes: 1000000
worker: 3
channel_buffer_size: 4096
bulk_max_size: 4096
flush_interval: 3

logging.level: info
logging.to_files: true
logging.to_syslog: false
logging.files:
path: /data/somewhere
name: filebeat.log
keepfiles: 150

And this is OS Version:
Linux ubuntu 3.2.0-93-generic x86_64

I'm using Filebeat 5.3.0.
Start filebeat: sudo ./filebeat start

I don't know why this panic occurred. Please help me.
Thanks


(Giuseppe Valente) #2

Hi,

I wasn't able to reproduce this on 5.3.0. One thing I noticed is that the alignment in your config seems off, but maybe that's just how it pasted here? I had to reformat it this way:

max_procs: 3
filebeat.spool_size: 4096
filebeat.idle_timeout: 5s

filebeat.prospectors:
- input_type: log
  paths:
    - /data/log/test.log
  json.keys_under_root: true
  fields_under_root: true
  tail_files: true
  tags: ["test_log"]
  fields:
  topic: "test_topic"

output.kafka:
  hosts: ["kafkahost:9092"]
  topic: "%{[topic]}"
  partition.round_robin:
  reachable_only: true
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
  worker: 3
  channel_buffer_size: 4096
  bulk_max_size: 4096
  flush_interval: 3

logging.level: info
logging.to_files: true
logging.to_syslog: false
logging.files:
  path: /data/somewhere
  name: filebeat.log
  keepfiles: 150
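A side note (an assumption on my part, not something confirmed in this thread): in YAML, `topic:` at the same indentation level as `fields:` leaves `fields` empty and makes `topic` a sibling key, so `%{[topic]}` in the Kafka output may never resolve. If `topic` is meant to be a custom field promoted to the event root by `fields_under_root: true`, it would presumably need to be nested like this:

```yaml
  fields:
    topic: "test_topic"
  fields_under_root: true
```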

#3

Hi Giuseppe, my config file is formatted correctly; something just went wrong when I pasted it into the question.


(ruflin) #4

There seems to be a conflict between the metrics reported by our Kafka library and our internal monitoring. The panic should not happen. Could you open an issue on GitHub for this?

@steffens Can you have a look at this?


(Steffen Siering) #5

Checking the stack trace, this looks like a race condition to me. Can you open an issue on GitHub?


(system) #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.