Hi Team,
Why am I getting the error below? Filebeat keeps timing out while posting to the Elasticsearch _bulk endpoint:
2024-02-26T14:40:53.019+0700 ERROR [elasticsearch] elasticsearch/client.go:226 failed to perform any bulk index operations: Post "https://x.x.x.x:9200/_bulk": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2024-02-26T14:40:53.019+0700 INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
2024-02-26T14:40:53.019+0700 INFO [publisher] pipeline/retry.go:223 done
2024-02-26T14:40:53.019+0700 INFO [publisher_pipeline_output] pipeline/output.go:143 Connecting to backoff(elasticsearch(https://x.x.x.x:9200))
2024-02-26T14:40:53.019+0700 INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
2024-02-26T14:40:53.020+0700 INFO [publisher] pipeline/retry.go:223 done
2024-02-26T14:40:53.020+0700 INFO [publisher_pipeline_output] pipeline/output.go:143 Connecting to backoff(elasticsearch(https://x.x.x.x:9200))
2024-02-26T14:40:53.020+0700 INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
2024-02-26T14:40:53.020+0700 INFO [publisher] pipeline/retry.go:223 done
2024-02-26T14:40:53.020+0700 INFO [publisher_pipeline_output] pipeline/output.go:143 Connecting to backoff(elasticsearch(https://x.x.x.x:9200))
2024-02-26T14:40:53.020+0700 INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
2024-02-26T14:40:53.020+0700 INFO [publisher] pipeline/retry.go:223 done
2024-02-26T14:40:53.031+0700 INFO [esclientleg] eslegclient/connection.go:282 Attempting to connect to Elasticsearch version 7.17.6
2024-02-26T14:40:53.033+0700 INFO [esclientleg] eslegclient/connection.go:282 Attempting to connect to Elasticsearch version 7.17.6
2024-02-26T14:40:53.035+0700 INFO [index-management] idxmgmt/std.go:261 Auto ILM enable success.
2024-02-26T14:40:53.083+0700 INFO [esclientleg] eslegclient/connection.go:282 Attempting to connect to Elasticsearch version 7.17.6
2024-02-26T14:40:53.101+0700 INFO [index-management.ilm] ilm/std.go:170 ILM policy firewall_ilm exists already.
2024-02-26T14:40:53.152+0700 INFO [index-management.ilm] ilm/std.go:126 Index Alias smartfren-firewall exists already.
2024-02-26T14:40:53.191+0700 INFO [publisher_pipeline_output] pipeline/output.go:151 Connection to backoff(elasticsearch(https://x.x.x.x:9200)) established
2024-02-26T14:40:53.196+0700 INFO [index-management] idxmgmt/std.go:261 Auto ILM enable success.
2024-02-26T14:40:53.260+0700 INFO [index-management.ilm] ilm/std.go:170 ILM policy firewall_ilm exists already.
2024-02-26T14:40:53.274+0700 INFO [index-management.ilm] ilm/std.go:126 Index Alias smartfren-firewall exists already.
2024-02-26T14:40:53.322+0700 ERROR [publisher_pipeline_output] pipeline/output.go:180 failed to publish events: Post "https://x.x.x.x:9200/_bulk": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2024-02-26T14:40:53.429+0700 INFO [publisher_pipeline_output] pipeline/output.go:151 Connection to backoff(elasticsearch(https://x.x.x.x:9200)) established
2024-02-26T14:40:53.433+0700 INFO [index-management] idxmgmt/std.go:261 Auto ILM enable success.
2024-02-26T14:40:53.468+0700 INFO [index-management.ilm] ilm/std.go:170 ILM policy firewall_ilm exists already.
2024-02-26T14:40:53.486+0700 INFO [index-management.ilm] ilm/std.go:126 Index Alias smartfren-firewall exists already.
2024-02-26T14:40:53.548+0700 INFO [publisher_pipeline_output] pipeline/output.go:151 Connection to backoff(elasticsearch(https://x.x.x.x:9200)) established
2024-02-26T14:40:54.331+0700 ERROR [publisher_pipeline_output] pipeline/output.go:180 failed to publish events: Post "https://x.x.x.x:9200/_bulk": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2024-02-26T14:40:57.123+0700 INFO [monitoring] log/log.go:184 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":94743710,"time":{"ms":2324}},"total":{"ticks":517486550,"time":{"ms":14015},"value":517486550},"user":{"ticks":422742840,"time":{"ms":11691}}},"handles":{"limit":{"hard":65536,"soft":65536},"open":201},"info":{"ephemeral_id":"b4cf11fd-f5b8-4b49-b6bf-74b2e9a3899b","uptime":{"ms":576750953},"version":"7.16.2"},"memstats":{"gc_next":2912234304,"memory_alloc":1528311144,"memory_total":79227890298288,"rss":5171474432},"runtime":{"goroutines":715}},"filebeat":{"events":{"added":67584,"done":67584},"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":1}},"output":{"events":{"acked":106944,"active":124163967,"batches":133,"total":149440},"read":{"bytes":28066588,"errors":67},"write":{"bytes":147279359}},"pipeline":{"clients":1,"events":{"active":668661,"published":67584,"retry":112768,"total":67584},"queue":{"acked":67584}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":0.43,"15":0.37,"5":0.43,"norm":{"1":0.0538,"15":0.0463,"5":0.0538}}}}}}
Because of these retry attempts, we are losing event data in the dashboard between Filebeat and the Elasticsearch nodes. Does anyone have any idea what is causing this? Please let me know.
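For reference, here is a minimal sketch of the output.elasticsearch tuning we are considering trying. The host is a placeholder, and the timeout, bulk_max_size, and worker values are assumptions based on Filebeat's documented defaults (90s, 50 events, and 1 worker respectively), not our actual running settings:

# filebeat.yml -- illustrative sketch only, not our current config
output.elasticsearch:
  hosts: ["https://x.x.x.x:9200"]
  # Default HTTP request timeout is 90s; the "Client.Timeout exceeded
  # while awaiting headers" error means the cluster did not respond to
  # a _bulk request within that window.
  timeout: 180
  # Default is 50 events per _bulk request; smaller batches complete
  # faster under load, larger ones trade latency for throughput.
  bulk_max_size: 50
  # Parallel publish workers per configured host (default 1).
  worker: 2

Would raising the timeout like this just hide back-pressure from the cluster, or is it a reasonable first step while we investigate why the nodes are slow to answer bulk requests?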