Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster"

Hello,

I am getting the following error when trying to send logs from fluentd/td-agent:

2019-12-20 12:46:00 +0100 [warn]: #0 failed to flush the buffer. retry_time=18 next_retry_seconds=2019-12-20 12:46:27 +0100 chunk="59a205d168e33dc874110deadf544b7d" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>"elasticsearch-logging.mirada.lab", :port=>80, :scheme=>"http"}): read timeout reached"

With the following configuration:

<system>
  # equal to -qq option
  log_level debug
</system>

<source>
  @type tail
  path /home/tania/runscope_csv/extract_data_izzi.csv
  pos_file /home/tania/runscope_csv/extract_data_izzi.csv.pos
  tag runscope_test
  read_from_head true
  <parse>
    @type csv
    keys customer,endpoint,timestamp,code_status,total_response_time,send_headers_ms,dial_ms,send_body_ms,wait_for_response_ms,dns_lookup_ms,receive_response_ms
    null_empty_string true
    types send_headers_ms:float,send_body_ms:float,wait_for_response_ms:float,dns_lookup_ms:float,receive_response_ms:float,dial_ms:float,total_response_time:float
  </parse>
</source>

<match runscope_test>
  @type elasticsearch
  logstash_format true
  logstash_prefix runscope_test
  logstash_dateformat %Y.%m.%d
  # include_tag_key true
  include_timestamp true
  host elasticsearch-logging.mirada.lab
  port 80
  index_name runscope_test
  # type_name _doc
  <buffer>
    @type file
    path /var/log/td-agent/buffer/td/runscope.buffer
    flush_mode interval
    retry_type exponential_backoff
    flush_thread_count 16
    flush_interval 10s
    retry_forever
    retry_max_interval 30
    chunk_limit_size 256m
    queue_limit_length 256
    overflow_action block
    #request_timeout 20s
  </buffer>
</match>
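For what it's worth, "read timeout reached" from fluent-plugin-elasticsearch usually means the bulk request to Elasticsearch took longer than the plugin's `request_timeout` (which defaults to 5 seconds), so large chunks such as the 256m ones configured above can easily trip it. A possible mitigation, sketched below, is to raise the timeout and shrink the chunks; the exact values (30s, 8m) are illustrative guesses, not tested recommendations:

```
<match runscope_test>
  @type elasticsearch
  host elasticsearch-logging.mirada.lab
  port 80
  # Allow slow bulk responses instead of aborting at the 5s default.
  request_timeout 30s
  # Re-resolve/reconnect after an error so a bad connection is not reused.
  reconnect_on_error true
  <buffer>
    @type file
    path /var/log/td-agent/buffer/td/runscope.buffer
    # Smaller chunks mean smaller bulk requests, which finish faster.
    chunk_limit_size 8m
    flush_interval 10s
  </buffer>
</match>
```

You could also uncomment the existing `#request_timeout 20s` line, but note that in your config it sits inside the buffer section, while `request_timeout` belongs at the plugin level, directly under `@type elasticsearch`.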

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.