Last_failure_timestamp

Hi,
How can I manage the reload parameter, and why is this output null: "last_failure_timestamp" : null?
In my config I'm using:

log.level: info


        "outputs" : [ {
          "id" : "1937dea243c36a25e890be2892a0097740ae9274eeb655e6e243e823a211c8a9",
          "name" : "elasticsearch",
          "events" : {
            "in" : 292516,
            "duration_in_millis" : 174113,
            "out" : 292516
          },
          "documents" : {
            "successes" : 292516
          },
          "bulk_requests" : {
            "successes" : 141,
            "responses" : {
              "200" : 1
            },
            "failures" : 5
          }
        } ]
      },
      "reloads" : {
        "last_failure_timestamp" : null,
        "successes" : 0,
        "last_error" : null,
        "last_success_timestamp" : null,
        "failures" : 0

Can you explain what you mean by "manage the reload parameter"?

Logstash can auto-reload the configuration when a config file or pipelines.yml changes, but you need to enable it in logstash.yml.
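
You can also enable it for a single run from the command line, or trigger a one-off reload with a signal when auto-reload is off. A sketch, assuming your pipeline file lives at /etc/logstash/conf.d/pipeline.conf (the path is illustrative):

bin/logstash -f /etc/logstash/conf.d/pipeline.conf --config.reload.automatic --config.reload.interval 30s
# or, with auto-reload disabled, force Logstash to reload the config and restart the pipeline:
kill -SIGHUP <logstash_pid>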

I meant: after how much time, and how many times, can it do a reload?

If Logstash is configured to do auto reloads, it will reload every time a config file or the pipelines.yml is changed.

With auto-reload enabled, Logstash checks the config files and pipelines.yml for changes, and if anything has changed, it will trigger a reload.

config.reload.automatic: true
config.reload.interval: 3s # 3s is the default; usually 10-15 sec is fine

last_failure_timestamp: when there have been no reload failures it is null, which is the normal value when everything is working.
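
A minimal sketch of what that same block could look like after a failed reload; the timestamps and error message below are illustrative, not taken from a real run:

      "reloads" : {
        "last_failure_timestamp" : "2021-03-01T10:15:30.000Z",
        "successes" : 2,
        "last_error" : {
          "message" : "Expected one of #, { at line 12 (illustrative message)",
          "backtrace" : null
        },
        "last_success_timestamp" : "2021-03-01T09:00:00.000Z",
        "failures" : 1
      }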

Not sure that is true. Most timers in Logstash trigger every 5 seconds, so two changes within 9.99 seconds may only trigger one reload. I may be wrong, but that is what my starting assumption would be if I were going to test it (which I am not).

But how many times (attempts) will Logstash try to send to Elasticsearch?

You are right @Badger, the auto-reload interval is configurable, so I think multiple changes within that interval will probably trigger just one reload, if the resulting file is different from the one Logstash was running.

It is not clear what you mean by that.

Logstash will reload the configuration and resume sending data to Elasticsearch. If there is something wrong with the configuration, it will fail to load the new configuration until it is fixed.

We should talk about how many attempts Logstash will make to send data to Elasticsearch when there is a connection issue.

Will it keep trying to send the bulk of data, or is there a limit somewhere?

This has no relation to your original question; last_failure_timestamp is related to a failure when trying to reload the pipeline, not when trying to send data to Elasticsearch.

This is explained in the documentation.

HTTP requests to the bulk API are expected to return a 200 response code. All other response codes are retried indefinitely.
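
The backoff between those retries is tunable on the elasticsearch output. A sketch of the relevant options, assuming a local cluster (the hosts value is illustrative; the numbers are the plugin defaults):

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    retry_initial_interval => 2   # seconds to wait before the first retry of a failed bulk request
    retry_max_interval => 64      # the wait doubles on each retry, up to this ceiling
  }
}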

And about document errors:

The following document errors are handled as follows:

  • 400 and 404 errors are sent to the dead letter queue (DLQ), if enabled. If a DLQ is not enabled, a log message will be emitted, and the event will be dropped. See DLQ Policy for more info.
  • 409 errors (conflict) are logged as a warning and dropped.
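
If you want those 400/404 errors kept rather than dropped, the DLQ is enabled in logstash.yml. A minimal sketch (the path is illustrative; by default it lands under path.data/dead_letter_queue):

dead_letter_queue.enable: true
path.dead_letter_queue: /var/lib/logstash/dead_letter_queue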

Many thanks for pointing out the answer.
