Search_phase_execution_exception all shards failed (filebeat + netflow)

Hello,
I know this is a well-known issue, but after looking at other solutions I still can't fix it.
I'm running Filebeat with the Netflow module.
It was working fine, but now I get this message on the Kibana Discover interface:

Bad Request
search_phase_execution_exception
all shards failed

I would like to properly get rid of these shard issues. I'm learning on a single-node cluster and my data is not critical; I just want it to work.

Looking at the cluster health, I get:

{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 18,
  "active_shards" : 18,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1,                  << I think here is the problem
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 94.73684210526315
}

But when listing the shards to remove it, I noticed its name is the same as my index.
Take a look:

GET _cat/indices
yellow open filebeat-7.9.0                     c2H1WoQJQxaHT7uboPUSYA 1 1 506556  0 513.8mb 513.8mb
GET _cat/shards
filebeat-7.9.0                     0 p STARTED    506556 513.8mb 172.30.6.113 ubuntu-elk
filebeat-7.9.0                     0 r UNASSIGNED                             
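From what I can tell, the UNASSIGNED entry is the replica copy (the `r` in the second line), and on a single-node cluster a replica has no second node to be allocated to. Instead of deleting the index, I was considering telling Elasticsearch the index needs no replicas, something like:

```shell
# Set the replica count of the index to 0
# (no replicas should be fine for a single-node learning cluster)
curl -XPUT "http://localhost:9200/filebeat-7.9.0/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0}}'
```

But I'm not sure this is the right approach, or whether it would also fix the search_phase_execution_exception in Discover.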

I already tried removing the unassigned shards with:
[root@localhost ~]# curl -s -XGET http://localhost:9200/_cat/shards | grep UNASSIGNED | awk '{print $1}' | xargs -i curl -XDELETE "http://localhost:9200/{}"

I think this is not working since both my index and the shard have the same name.
So the filebeat-7.9.0 index is deleted, but then it is created again with the unassigned shard.

So...
I don't know how to properly keep working with my Elastic Stack.
Is it possible to delete a shard by ID?
Is there another way to work without having shard issues?
Any advice would be welcome.
Leandro.