ILM with Logstash - moving through hot-warm-cold-delete

Hello.

I am a little confused about the order of operations with regard to ILM and Logstash. I am preparing the configuration before attempting ingestion. I will have a lot of data, so I want to plan ahead and make sure I have the concept and the necessary steps right.

I want to accomplish the following tasks:

    1. All log data is indexed by Logstash directly to the hot nodes.
    2. Data is moved from hot to warm, later to cold, and finally deleted.

My steps will be:

    1. Create the ILM policy "hot-warm-cold-delete-180days".
    2. Create the index template "logs_indexing".
    3. Create the bootstrap index "logs-000001" with the write alias "logs".
    4. Enable ILM in Logstash as per the configuration below.
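
Expressed as Kibana Dev Tools requests, that setup order would look roughly like this (bodies elided here; the full policy, template, and alias definitions are shown later in this post):

PUT _ilm/policy/hot-warm-cold-delete-180days
{ ... }

PUT _template/logs_indexing
{ ... }

PUT logs-000001
{
  "aliases": { "logs": { "is_write_index": true } }
}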

My questions:

    1. Is this the proper order of operations?
    2. Should I configure the index alias "logs-000001" in advance, or will Logstash create it for me? I assume Logstash will, because the index name will follow the convention "logs-2019.34", etc., so when Logstash creates the index it will also create the index alias.
    3. As data flows in, will the next rolled-over indices be named "logs-000002", "logs-000003", etc. without manual intervention, given the Logstash configuration below?

Logstash elasticsearch output config:

output {
  elasticsearch {
    id => "LOGS"
    index => "logs-%{+xxxx.ww}"   # note: ignored when ilm_enabled is true (see correction below)
    hosts => ["localhost:9200"]
    action => "index"
    manage_template => "true"
    template_name => "logs_indexing"
    ilm_enabled => "true"
    ilm_rollover_alias => "logs"
    ilm_pattern => "000001"
    ilm_policy => "logs_rollover_policy"
    enable_metric => "true"
  }
}

Where my template_name "logs_indexing" is:

{
  "index_patterns": ["logs*"],
  "mappings": {
    "dynamic_templates": [
      {
        "strings": {
          "match_mapping_type": "string",
          "mapping": {
            "type": "keyword"
          }
        }
      },
      {
        "integers": {
          "match_mapping_type": "long",
          "mapping": {
            "type": "integer"
          }
        }
      },
      {
        "floating_points_double": {
          "match_mapping_type": "double",
          "mapping": {
            "type": "float"
          }
        }
      }
    ]
  },
  "settings": {
    "index": {
      "number_of_shards": 1,
      "number_of_replicas": 1,
      "refresh_interval": "120s",
      "routing.allocation.require.data": "hot",
      "lifecycle.name": "hot-warm-cold-delete-180days",
      "lifecycle.rollover_alias": "logs",
      "codec": "best_compression"
    }
  }
}
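
Assuming the legacy template API (the `_template` endpoint; newer releases use composable templates via `_index_template`), this template would be installed with a request like:

PUT _template/logs_indexing
{ ... the template body above ... }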

My "hot-warm-cold-delete-180days" policy is:

{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "3d"
          },
          "set_priority": {
            "priority": 50
          }
        }
      },
      "warm": {
        "min_age": "30d",
        "actions": {
          "forcemerge": {
            "max_num_segments": 1
          },
          "shrink": {
            "number_of_shards": 1
          },
          "allocate": {
            "require": {
              "data": "warm"
            }
          },
          "set_priority": {
            "priority": 25
          }
        }
      },
      "cold": {
        "min_age": "120d",
        "actions": {
          "set_priority": {
            "priority": 0
          },
          "freeze": {},
          "allocate": {
            "require": {
              "data": "cold"
            }
          }
        }
      },
      "delete": {
        "min_age": "180d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
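
The policy would be installed with a request like the one below. Note that the name here ("hot-warm-cold-delete-180days", also referenced by `lifecycle.name` in the template) differs from the `ilm_policy => "logs_rollover_policy"` set in the Logstash output; the names would need to match for a single policy to apply consistently.

PUT _ilm/policy/hot-warm-cold-delete-180days
{
  "policy": { ... the policy body above ... }
}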

Bootstrap index with write alias:

PUT logs-000001
{
  "aliases": {
    "logs": {
      "is_write_index": true
    }
  }
}
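
Assuming the bootstrap request succeeds, the write alias and the lifecycle progress of the rolled-over indices can be checked with the standard alias and ILM explain APIs:

GET _alias/logs

GET logs-*/_ilm/explain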

Thank you for any feedback.

I was wrong about the config: with ILM enabled, the `index`, `manage_template`, and `template_name` options should not be set. The corrected version is:

output {
  elasticsearch {
    id => "LOGS"
    hosts => ["localhost:9200"]
    action => "index"
    ilm_enabled => "true"
    ilm_rollover_alias => "logs"
    ilm_pattern => "000001"
    ilm_policy => "logs_rollover_policy"
    enable_metric => "true"
  }
}

The configuration above is being used at the moment.
