Can't roll over an index via an index template?

Hey all.
I'm trying to set up rollover in my index template.
My Logstash uses an index template.
This is my index template:

> 
> {
>   "template" : "logstash-*",
>   "version" : 60001,
>   "settings" : {
>     "index.refresh_interval" : "5s"
>   },
>   "mappings" : {
>     "_default_" : {
>       "dynamic_templates" : [ {
>         "message_field" : {
>           "path_match" : "message",
>           "match_mapping_type" : "string",
>           "mapping" : {
>             "type" : "text",
>             "norms" : false
>           }
>         }
>       }, {
>         "string_fields" : {
>           "match" : "*",
>           "match_mapping_type" : "string",
>           "mapping" : {
>             "chain_array": { "type": "text" },
>             "client_time": { "type": "date" },
>             "destination_ip": { "type": "ip" },
>             "destination_path": { "type": "text" },
>             "destination_port": { "type": "integer" },
>             "direction": { "type": "keyword" },
>             "full_server_time": { "type": "date" },
>             "reporting_computer": { "type": "text" },
>             "dll_name": { "type": "text" },
>             "dll_path": { "type": "text" },
>             "mog_counter": { "type": "integer" },
>             "os": { "type": "keyword" },
>             "process_id": { "type": "integer" },
>             "process_name": { "type": "text" },
>             "process_path": { "type": "text" },
>             "protocol": { "type": "keyword" },
>             "reason": { "type": "keyword" },
>             "sequance_number": { "type": "integer" },
>             "source_ip": { "type": "ip" },
>             "source_port": { "type": "integer" },
>             "scramble_state": { "type": "keyword" },
>             "status": { "type": "keyword" },
>             "sub_sequance_number": { "type": "integer" },
>             "user_name": { "type": "text" },
>             "cast_type": { "type": "keyword" },
>             "counter": { "type": "integer" }
>           }
>         }
>       } ],
>       "properties" : {
>         "@timestamp": { "type": "date"},
>         "@version": { "type": "keyword"},
>         "geoip"  : {
>           "dynamic": true,
>           "properties" : {
>             "ip": { "type": "ip" },
>             "location" : { "type" : "geo_point" },
>             "latitude" : { "type" : "half_float" },
>             "longitude" : { "type" : "half_float" }
>           }
>         }
>       }
>     }
>   }
> }

If I add rollover conditions, Logstash can't install the template.
Is there any option to set up rollover via the index template, instead of putting it on the alias?

Bump.
I added Curator to delete old indices and to delete by disk space; that works.

Remaining: roll over the index every 5 GB or 1 day.
My Logstash config:

input {
  tcp {
    port => 5556
  }
  udp {
    port => 5566
  }
}

filter {
  csv {
    separator => ","
    columns => [
      "os", "reporting_computer", "client_time", "full_server_time", "process_id", "process_name",
      "process_path", "protocol", "status", "source_port", "destination_port", "direction", "cast_type",
      "scramble_state", "source_ip", "destination_ip", "sequance_number", "sub_sequance_number", "user_name",
      "mog_counter", "destination_path", "reason", "dll_path", "dll_name", "chain_array"
    ]
  }
  mutate { convert => ["process_id", "integer"] }
  mutate { convert => ["source_port", "integer"] }
  mutate { convert => ["destination_port", "integer"] }
  mutate { convert => ["sequance_number", "integer"] }
  mutate { convert => ["mog_counter", "integer"] }
}

output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "logs-%{+YYYY.MM.dd}"
    template => "C:\etc\logstash-config\index_template.json"
    template_overwrite => "true"
  }
}

My Curator settings:

> actions:
>   1:
>     action: delete_indices
>     description: >-
>       Delete indices older than X days or when disk space reaches 200 GB
>     options:
>       ignore_empty_list: True
>     filters:
>     - filtertype: pattern
>       kind: prefix
>       value: logs-
>     - filtertype: age
>       source: name
>       direction: older
>       timestring: '%Y.%m.%d'
>       unit: days
>       unit_count: 60
>     - filtertype: space
>       disk_space: 400
>       use_age: True
>       source: field_stats
>       field: '@timestamp'
>       stats_result: max_value
> 
> actions:
>   1:
>     action: rollover
>     description: >-
>       Rollover the index every 5 GB or 1 day.
>     options:
>       name: logs_write
>       conditions:
>         max_size: 5gb
>         max_age: 1d

Something is wrong with my Curator rollover: it can't find an alias named logs_write.
I added this alias to my Elasticsearch template.
Is there an option via Logstash to roll over every 5 GB or every 1 day? (I already have every 1 day.)

A rollover alias should be treated differently from other aliases. It will always fail if you have the rollover alias name set to be re-added in the index template.

With the Rollover API, you do not need to do this. It takes care of it for you. You create the alias once at initial index creation time, and then rollover handles it for you.

I tried to add conditions to the index template, but my Logstash can't start with this configuration; it fails.

  "aliases": {
    "logs_write": {}
  },
  "conditions": {
    "max_size": "5gb"
  },

Log:

```
] Failed to install template. {:message=>"Got response code '400' contacting Elasticsearch at URL 'http://localhost:9200/_template/logstash'",
 :class=>"LogStash::Outputs::Elasticsearch::HttpClient::Pool::BadResponseCodeError",
 :backtrace=>[
  "C:/Cyber20/ELK/logstash-6.5.2/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:80:in `perform_request'",
  "C:/Cyber20/ELK/logstash-6.5.2/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:291:in `perform_request_to_url'",
  "C:/Cyber20/ELK/logstash-6.5.2/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:278:in `block in perform_request'",
  "C:/Cyber20/ELK/logstash-6.5.2/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:373:in `with_connection'",
  "C:/Cyber20/ELK/logstash-6.5.2/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:277:in `perform_request'",
  "C:/Cyber20/ELK/logstash-6.5.2/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:285:in `block in Pool'",
  "C:/Cyber20/ELK/logstash-6.5.2/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client.rb:348:in `template_put'",
  "C:/Cyber20/ELK/logstash-6.5.2/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/http_client.rb:86:in `template_install'",
  "C:/Cyber20/ELK/logstash-6.5.2/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/template_manager.rb:21:in `install'",
  "C:/Cyber20/ELK/logstash-6.5.2/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/template_manager.rb:9:in `install_template'",
  "C:/Cyber20/ELK/logstash-6.5.2/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/common.rb:127:in `install_template'",
  "C:/Cyber20/ELK/logstash-6.5.2/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.4-java/lib/logstash/outputs/elasticsearch/common.rb:49:in `block in install_template_after_s
```

Ah, I see. You misunderstand the Rollover API. Those conditions are used when the API is called, not when the alias is created.

The rollover API must be called periodically to do the actual rollover. It's not automatic.
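For example, a periodic Rollover API call against the alias from this thread could look like the following (the conditions shown are the ones you mentioned above; they are evaluated at call time):

```
POST /logs_write/_rollover
{
  "conditions": {
    "max_age": "1d",
    "max_size": "5gb"
  }
}
```

If any condition is met, Elasticsearch creates the next index (incrementing the trailing number) and atomically moves the logs_write alias to it; if none is met, the call is a no-op.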

To re-iterate, you should create the index using the REST API first, then just use that in Logstash.

PUT /logs-000001 
{
  "aliases": {
    "logs_write": {}
  }
}

Then you change your Logstash to point to logs_write:

output {
  elasticsearch { 
    hosts => "http://localhost:9200"
    index => "logs_write"
    template => "C:\etc\logstash-config\index_template.json"
    template_overwrite => "true"
  }
}

You can continue to use your custom index template, but it cannot contain an alias section with the logs_write alias. Once created, you never have to touch it again. Rollover takes care of everything for you, but you must call it periodically for the rollover to happen. It's not automatic.
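Since the call has to happen on a schedule, a common approach is to run Curator's rollover action from a scheduler. A sketch, assuming Unix cron and hypothetical file paths (on Windows, Task Scheduler plays the same role):

```
# Hypothetical paths: adjust to where curator and its YAML files actually live.
# Run the rollover action file once per hour; rollover is a no-op until a condition is met.
0 * * * * /usr/local/bin/curator --config /etc/curator/config.yml /etc/curator/rollover_action.yml
```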

Review the Rollover API documentation if this is unclear.

OK, thanks.
So I changed my index name in the Logstash config to logs-
and deleted the aliases from the index template.

Then I added an alias:

POST /_aliases
{
    "actions" : [
        {
            "add" : {
                 "index" : "logs-",
                 "alias" : "test"
            }
        }
    ]
}

At http://localhost:9200/logs-?pretty=true I can see the alias.

My Curator settings are:

action: rollover
description: >-
  Rollover the index every 5gb or 1d.
options:
  name: test
  conditions:
    max_age: 1d
    max_docs: 1000000
    max_size: 5gb

But I get an error from Curator:

2019-01-17 08:24:58,847 INFO Preparing Action ID: 2, "rollover"
2019-01-17 08:24:58,863 INFO Trying Action ID: 2, "rollover": Rollover the index every 5gb or 1d.
2019-01-17 08:24:58,863 ERROR Failed to complete action: rollover. <class 'ValueError'>: Unable to perform index rollover with alias "test". See previous logs for more details.

This is almost certainly because logs- is not a viable rollover index. A rollover index name should be a pattern followed by a dash and an incrementable number, like index-000001. When a rollover is called, it will increment that number.

There are exceptions to this rule (e.g. using dates in the name), but even those exceptions ought to end with a dash and a number.
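As a sketch of the date-based exception, the initial index can be created with a date-math name; the index name below is the URL-encoded form of `<logs-{now/d}-000001>`, and the alias is the one used in this thread:

```
# URL-encoded form of: PUT /<logs-{now/d}-000001>
PUT /%3Clogs-%7Bnow%2Fd%7D-000001%3E
{
  "aliases": {
    "logs_write": {}
  }
}
```

On rollover, the date portion is resolved and the trailing number is still incremented, so the dash-plus-number suffix remains required.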

OK, now the rollover via Curator works, thanks.
It creates a new index when the conditions are met, but docs are still being added to the first one.
Logstash:
index => "logs-000001"
I added the alias via the _aliases route with an add action.
The new index is called logs-000002, but the doc count keeps growing on the first index.

It seems likely that your Logstash is shipping to an index name rather than to the alias. For the procedure I shared previously to work, Logstash must ship to the rollover alias, and it must not start shipping to that alias name before the alias exists; otherwise Logstash will auto-create an index with that name instead.
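One way to check which situation you are in (names taken from this thread):

```
GET /_alias/logs_write        # 200 with the backing indices if logs_write is an alias; 404 if not
GET /_cat/indices/logs-*?v    # lists concrete indices; logs_write should NOT appear here as an index
```

If logs_write shows up in the index list instead of resolving as an alias, delete it, recreate the alias on logs-000001, and only then point Logstash at it.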

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.