Hi, I'm currently using this query in the datafeed of my ML jobs:
{
  "bool": {
    "filter": [
      {
        "bool": {
          "should": [
            {
              "exists": {
                "field": "cpu_per"
              }
            }
          ],
          "minimum_should_match": 1
        }
      },
      {
        "bool": {
          "should": [
            {
              "match_phrase": {
                "grupo.keyword": "Datacenter"
              }
            }
          ],
          "minimum_should_match": 1
        }
      }
    ]
  }
}
but I still get about 18,000 partitions, so I need a way to split the documents so that each job stays under 10,000 partitions, running two jobs against the same index with the same detector. What would be the best way to do this?
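For example, would it work to add a lexical range filter on the partition field to each datafeed? Here is a rough sketch of what I had in mind (using host.keyword as a placeholder for my actual partition field), where the first job takes values before "m":

{
  "bool": {
    "filter": [
      { "exists": { "field": "cpu_per" } },
      { "match_phrase": { "grupo.keyword": "Datacenter" } },
      { "range": { "host.keyword": { "lt": "m" } } }
    ]
  }
}

and the second job would use "gte": "m" instead, so the two datafeeds together cover all documents without overlapping. Is this a reasonable approach, or is there a better way to split the data between the two jobs?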