Dear Filebeat experts,
my Filebeat log is full of the following error messages, and I am wondering what is going wrong. Thanks!
2019-03-08T21:21:26Z ERR State for READER-034.log should have been dropped, but couldn't as state is not finished.
2019-03-08T21:21:26Z ERR State for READER-062.log should have been dropped, but couldn't as state is not finished.
2019-03-08T21:21:26Z ERR State for READER-003.log should have been dropped, but couldn't as state is not finished.
2019-03-08T21:21:26Z ERR State for READER-098.log should have been dropped, but couldn't as state is not finished.
2019-03-08T21:21:26Z ERR State for READER-094.log should have been dropped, but couldn't as state is not finished.
2019-03-08T21:21:26Z ERR State for READER-005.log should have been dropped, but couldn't as state is not finished.
I do not know whether this is related, but we use Filebeat to tail log files and ship them to Kafka, and we have observed that Filebeat is really slow. We allocated two CPU cores to Filebeat, yet it still cannot keep up with the log output. Its highest throughput in any 30-second window is libbeat.outputs.kafka.bytes_write=1610265, which works out to only about 53 KB/s (1,610,265 bytes over 30 s). Is this expected? Here is the metrics line from that window:
2019-03-08T18:16:52Z INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30001 beat.memstats.gc_next=863930544 beat.memstats.memory_alloc=469037928 beat.memstats.memory_total=411464408889168 filebeat.events.active=36 filebeat.events.added=9234 filebeat.events.done=9198 filebeat.harvester.open_files=104 filebeat.harvester.running=104 libbeat.config.module.running=0 libbeat.output.events.acked=9198 libbeat.output.events.active=104 libbeat.output.events.batches=30 libbeat.output.events.total=9302 libbeat.outputs.kafka.bytes_read=35850 libbeat.outputs.kafka.bytes_write=1610265 libbeat.pipeline.clients=1 libbeat.pipeline.events.active=134 libbeat.pipeline.events.published=9234 libbeat.pipeline.events.total=9234 libbeat.pipeline.queue.acked=9198 registrar.states.current=126 registrar.states.update=9198 registrar.writes=29
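For context, our configuration is roughly along the lines of the sketch below (Filebeat 6.x syntax). The paths, broker addresses, and topic are placeholders rather than our real values, and the clean_removed/close_removed lines are my reconstruction from memory, included because I suspect they are related to the "should have been dropped" errors:

```yaml
filebeat.prospectors:
  - type: log
    paths:
      - /var/log/app/READER-*.log        # placeholder path; one file per reader
    close_removed: true                   # assumed; close harvester when a file is removed
    clean_removed: true                   # assumed; drop registry state for removed files

output.kafka:
  hosts: ["kafka01:9092", "kafka02:9092"] # placeholder brokers
  topic: "app-logs"                       # placeholder topic
  required_acks: 1
  compression: gzip
```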