How can I create a new Logstash index every 4 hours, instead of every hour?

Like this:

    output {
      elasticsearch {
        index => "logstash-%{+YYYY.MM.dd.HH}"
      }
    }

For example:
logstash-2020.04.14.07
logstash-2020.04.14.11

Can I ask why you want them hourly?

My requirement is that I want my purging to be hourly.
But when I set Logstash index creation to hourly, Elasticsearch shards fail
because of the sheer number of indices. So if I set Logstash index creation to every 4-6 hours,
I think Elasticsearch will have no issue,
because then only 4-6 indices per day are created for an application.

You should just use ILM to manage this for you, it'd be a lot easier.

I am using Curator for purging.

Yeah, I appreciate that; ILM would remove a lot of the hassle here though.

However, does your example config not work?

I think I could store the date/time in a variable in Logstash and use it in an if condition for my index name,
but I want a standard solution from you.
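For reference, the variable-based workaround hinted at above might be sketched roughly like this (a sketch only; the `[@metadata][bucket]` field name and the UTC assumption are mine, and ILM remains the recommended route):

```
filter {
  ruby {
    # Assumption: bucket event hours into 00, 04, 08, 12, 16, 20 (UTC),
    # so four-hour windows share one index.
    code => "event.set('[@metadata][bucket]', format('%02d', (event.get('@timestamp').time.hour / 4) * 4))"
  }
}
output {
  elasticsearch {
    # The date-math part renders the day; the bucket renders the 4-hour window.
    index => "logstash-%{+YYYY.MM.dd}.%{[@metadata][bucket]}"
  }
}
```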

@warkolm how do I do that? Please help.

@Dragon9 with respect, that's what @warkolm was trying to tell you. ILM is our "standard solution" for this use case, and Logstash works in conjunction with ILM.

ILM = Index Lifecycle Management.

To be completely transparent, Logstash does not create indices. It only tells Elasticsearch that document d belongs in index i. Elasticsearch creates index i if it does not exist.

With ILM set up, you can configure a rollover period of any time interval you like. When the index meets one of three possible conditions (max age, document count, or size), it will be rolled over. If you create the initial index with a datestamp in its name (using date math), you will also have a record of when the index was created in the index name itself, and all subsequent rolled-over indices will automatically carry the date stamp in the same format.
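For instance, a minimal sketch of such a setup via the Kibana Dev Tools console (the policy name `logstash-policy`, the alias `logstash`, and the `4h`/`1d` timings are all assumptions to match the interval discussed above):

```
PUT _ilm/policy/logstash-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "4h" }
        }
      },
      "delete": {
        "min_age": "1d",
        "actions": { "delete": {} }
      }
    }
  }
}

PUT _template/logstash
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "index.lifecycle.name": "logstash-policy",
    "index.lifecycle.rollover_alias": "logstash"
  }
}

PUT %3Clogstash-%7Bnow%2Fd%7D-000001%3E
{
  "aliases": {
    "logstash": { "is_write_index": true }
  }
}
```

The last request URL is the date-math bootstrap index `<logstash-{now/d}-000001>`, percent-encoded; it carries the write alias, so subsequent rollovers inherit both the date stamp and the counter.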

I suggest reading up on ILM here before trying to shoehorn Logstash and Curator to fit this use case, when ILM is both a better fit and built in to the Elastic Stack (and I say that as the creator and maintainer of Curator).


@theuntergeek
When I use Curator for alias and rollover,
my old index is logstash-2020.04.16-1 and
the new index is logstash-2020.04.16-000002,
but data is not being written to the new index. How do I solve this?

My configuration is:

    actions:
      1:
        action: alias
        description: >-
          Alias indices from last week, with a prefix of kibana_sample_data_ecommerce to 'kibana_alias-000001',
          remove indices from the previous week.
        options:
          name: logst
          warn_if_no_indices: False
          disable_action: False
        add:
          filters:
          - filtertype: pattern
            kind: prefix
            value: logstash-
      2:
        action: rollover
        description: >-
          Rollover the index associated with alias 'aliasname', which should be in the
          format of prefix-000001 (or similar), or prefix-YYYY.MM.DD-1.
        options:
          disable_action: False
          name: logst
          conditions:
            max_age: 10s

With this I also have two questions:
1. Are a rollover policy and an index template mandatory?
2. How do I configure the Logstash output to push data automatically into the rollover index?

  1. Only if you had one before
  2. Set index => "alias_name" in your Logstash elasticsearch output block to always write to the alias.
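Concretely, that output block might look something like this (a sketch using the alias name `logst` from the Curator config above; the `hosts` value is a placeholder):

```
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]  # placeholder, point at your cluster
    index => "logst"                    # the write alias, not a dated index name
  }
}
```

Because `index` names the alias, Elasticsearch resolves each bulk write to whichever backing index is currently flagged as the write index, so rollovers need no Logstash changes.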

When I do this, Logstash throws an error:

[2020-04-17T08:11:17,864][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://10.109.226.97:9200/_bulk"}
[2020-04-17T08:11:18,122][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2020-04-17T08:11:19,999][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://10.109.226.97:9200/_bulk"}
[2020-04-17T08:11:24,012][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://10.109.226.97:9200/_bulk"}
[2020-04-17T08:11:32,051][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://10.109.226.97:9200/_bulk"}
[2020-04-17T08:11:48,064][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://10.109.226.97:9200/_bulk"}
[2020-04-17T08:12:20,098][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://10.109.226.97:9200/_bulk"}
[2020-04-17T08:13:24,108][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://10.109.226.97:9200/_bulk"}

This implies there's perhaps something else in your Elasticsearch output block.

Please share that portion of your Logstash output configuration, taking care to obfuscate/redact/remove username, password, and hosts (if they're not local).
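One thing worth checking in the meantime: a bulk request to an alias that spans multiple indices is rejected when no backing index is flagged as the write index, which would produce a 400 like the one above. You can inspect the alias with:

```
GET _alias/logst
```

Exactly one of the returned indices should show `"is_write_index": true`; if none does, writes addressed to the alias will fail.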

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.