Filebeat "failed to perform any bulk index operations"

Hello,
I'm trying to fix my environment (4 nodes with 32 GB of RAM and 16 cores each). I solved some performance issues, but I still see errors; below are the error logs retrieved from Filebeat:

{"log.level":"error","@timestamp":"2024-12-10T01:40:44.850+0100","log.logger":"elasticsearch","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.(*Client).publishEvents","file.name":"elasticsearch/client.go","file.line":262},"message":"failed to perform any bulk index operations: Post \"https://NODE-IP:9200/_bulk?filter_path=errors%2Citems.%2A.error%2Citems.%2A.status\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)","service.name":"filebeat","ecs.version":"1.6.0"}

On Elasticsearch I don't see any error logs, the cluster is in green state, and both filebeat test output and filebeat test config report everything is OK. I've also done some tests changing bulk_max_size: when I decrease it (for example bulk_max_size <= 50; I tried 20 and 50), the error log changes to:

{"log.level":"warn","@timestamp":"2024-12-10T01:29:48.928+0100","log.logger":"elasticsearch","log.origin":{"function":"github.com/elastic/beats/v7/libbeat/outputs/elasticsearch.(*Client).bulkCollectPublishFails","file.name":"elasticsearch/client.go","file.line":454},"message":"Cannot index event (status=400): dropping event! Enable debug logs to view the event and cause.","service.name":"filebeat","ecs.version":"1.6.0"}

If I increase bulk_max_size, for example to 200 or 1600, I still see the original error mentioned above.
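The status=400 warning says to enable debug logs to view the event and the cause, so as a next step I'll probably turn on debug logging for the elasticsearch output temporarily. This is only a minimal sketch using the standard Filebeat logging options, to be removed once the cause is found:

logging.level: debug
# limit debug output to the elasticsearch output logger
logging.selectors: ["elasticsearch"]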

I'm not sure this depends on bulk_max_size; the configuration always worked as expected until now. The Filebeat version is 8.13.2, and honestly I don't know what else I can do at this point. This is the Filebeat output configuration (the commented lines are some tests I tried):

CPU is now stable at around 50% and heap memory is around 40-65%, so I've run out of possible solutions :frowning:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["******","*****","*****","*****"]
  allow_older_versions: true
  # Protocol - either `http` (default) or `https`.
  protocol: "https"
  ssl.certificate_authorities: "/etc/filebeat/certs/http_ca.crt"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "##########"
  index: "my_index"

  # Custom bulk configuration
  # bulk_max_size: 1600
  # worker: 1
  # queue.mem.events: 3200
  # queue.mem.flush.min_events: 1600
  # queue.mem.flush.timeout: 10s
  # compression_level: 1
  # connection_idle_timeout: 3s
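
Since the first error is a client-side timeout while awaiting headers, one more thing I'm considering is raising the request timeout on the output (if I read the docs correctly the default is 90 seconds). Just a sketch, the value is a guess:

output.elasticsearch:
  # HTTP request timeout towards Elasticsearch, in seconds (guessed value)
  timeout: 180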

Any help would be really appreciated.

I figured out that the issue was related to a problem with my index template: Filebeat wasn't able to create a new data stream. The workaround was to manually create the data stream matching the correct date:

PUT /_data_stream/my-data-stream-date

I also noticed that, even though automatic index creation is set to true by default, my index template (which had always worked since the 5th of December) had "Allow auto create" set to "No". I have other index templates with "Allow auto create" set to "No" that still create indices automatically every day.
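If the template turns out to be the culprit, the setting can also be checked and changed through the index template API. The name my-index-template is just a placeholder and the PUT body below is abbreviated; the real request has to include the full template definition (settings, mappings, etc.), otherwise it gets overwritten:

# placeholder template name, abbreviated body
GET /_index_template/my-index-template

PUT /_index_template/my-index-template
{
  "index_patterns": ["my-data-stream-*"],
  "data_stream": {},
  "allow_auto_create": true
}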

I see that this issue should have been resolved as of version 8.3.3.

Now I'll wait until tomorrow to verify that the data stream is generated as expected every day.