Error when trying to load lifecycle policies after updating to 7.9

Hello, after updating the Elastic Stack from 7.4 to 7.9, when I try to open the lifecycle policies page I receive this error:

400: Bad Request. [illegal_argument_exception] duration cannot be negative, was given [-1193023999570]

So far the only suggestion I have received is to delete everything, which would fix it, but I would like to explore other options before having to do that.

regards,

Hi, Andrew. I'll need a little more information to understand the root of the problem. Could you please copy and paste the results of these two requests? (Copy and paste is better than screenshots because then I can use the text).

GET /_ilm/policy
GET /*/_ilm/explain

Also, do you have the ES server logs that correspond to that error? You should see a stack trace marked with a timestamp around the same time you got the error. If you could copy and paste that stack trace here, that will help me understand what's causing the error.

Hello,

Thanks for your response.

GET /_ilm/policy

{
  "heartbeat-7.4.0" : {
    "version" : 2,
    "modified_date" : "2020-08-10T13:49:32.166Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "30d"
            }
          }
        }
      }
    }
  },
  "ilm-history-ilm-policy" : {
    "version" : 1,
    "modified_date" : "2020-08-24T16:36:34.789Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "30d"
            }
          }
        },
        "delete" : {
          "min_age" : "90d",
          "actions" : {
            "delete" : {
              "delete_searchable_snapshot" : true
            }
          }
        }
      }
    }
  },
  "elastiflow-policy" : {
    "version" : 1,
    "modified_date" : "2019-10-08T15:05:19.397Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "set_priority" : {
              "priority" : 100
            }
          }
        },
        "delete" : {
          "min_age" : "3d",
          "actions" : {
            "delete" : {
              "delete_searchable_snapshot" : true
            }
          }
        }
      }
    }
  },
  "filebeat-7.4.0" : {
    "version" : 1,
    "modified_date" : "2020-08-13T15:22:24.857Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "30d"
            }
          }
        }
      }
    }
  },
  "watch-history-ilm-policy" : {
    "version" : 1,
    "modified_date" : "2019-08-26T12:06:35.692Z",
    "policy" : {
      "phases" : {
        "delete" : {
          "min_age" : "7d",
          "actions" : {
            "delete" : {
              "delete_searchable_snapshot" : true
            }
          }
        }
      }
    }
  },
  "kibana-event-log-policy" : {
    "version" : 1,
    "modified_date" : "2020-08-24T16:49:42.509Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "30d"
            }
          }
        },
        "delete" : {
          "min_age" : "90d",
          "actions" : {
            "delete" : {
              "delete_searchable_snapshot" : true
            }
          }
        }
      }
    }
  },
  "metrics" : {
    "version" : 1,
    "modified_date" : "2020-08-24T16:36:34.479Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "30d"
            }
          }
        }
      }
    }
  },
  "ml-size-based-ilm-policy" : {
    "version" : 1,
    "modified_date" : "2020-08-24T16:36:33.934Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb"
            }
          }
        }
      }
    }
  },
  "logs" : {
    "version" : 1,
    "modified_date" : "2020-08-24T16:36:33.169Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "30d"
            }
          }
        }
      }
    }
  },
  "logstash-policy" : {
    "version" : 1,
    "modified_date" : "2019-09-23T18:35:42.960Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "30d"
            }
          }
        }
      }
    }
  },
  "slm-history-ilm-policy" : {
    "version" : 1,
    "modified_date" : "2019-10-15T14:52:17.546Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "30d"
            }
          }
        },
        "delete" : {
          "min_age" : "90d",
          "actions" : {
            "delete" : {
              "delete_searchable_snapshot" : true
            }
          }
        }
      }
    }
  }
}

GET /*/_ilm/explain


{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "duration cannot be negative, was given [-1193019028708]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "duration cannot be negative, was given [-1193019028708]"
  },
  "status" : 400
}

I don't see anything in the server logs that relates to this error.

Is there a specific log file where I might find the information? I tried looking in the different log files but found no reference to this error.

Thank you.

Again, any help would be great. Deleting everything is not really an option.


@Andrew22 So sorry for the very late reply! Looks like the underlying GET /*/_ilm/explain request is the culprit. This might be a bug in ES, but we need to dig deeper to figure out the root cause.

Could you append the error_trace query parameter to get back the stack trace, and share the result with me?

GET /*/_ilm/explain?error_trace=true
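If the wildcard request still fails without a useful trace, it can also help to scope the explain call to one index pattern at a time to narrow down which index is producing the negative duration. For example (the filebeat-* pattern below is just a placeholder; substitute the patterns your policies are attached to):

GET /filebeat-*/_ilm/explain?error_trace=true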

If that doesn't yield any stack trace information, we'll need to update the cluster settings to emit logs for ILM:

PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.xpack.core.indexlifecycle": "DEBUG",
    "logger.org.elasticsearch.xpack.indexlifecycle": "DEBUG",
    "logger.org.elasticsearch.xpack.core.ilm": "DEBUG",
    "logger.org.elasticsearch.xpack.ilm": "DEBUG"
  }
}

With this in place, you can execute the GET /*/_ilm/explain request again, and then share the corresponding output from your Elasticsearch log. If that output doesn't contain stack trace information, you will have to replace DEBUG with TRACE in the above request and try again.
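Once we've captured what we need, you can put the loggers back to their defaults by setting the same transient keys to null, which removes the overrides:

PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.xpack.core.indexlifecycle": null,
    "logger.org.elasticsearch.xpack.indexlifecycle": null,
    "logger.org.elasticsearch.xpack.core.ilm": null,
    "logger.org.elasticsearch.xpack.ilm": null
  }
}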