ILM rollover for "Logstash Elasticsearch output plugin"

Hello

My Logstash pipeline uses the Elasticsearch output plugin to ingest docs into Elasticsearch. However, the index keeps growing in size, so I looked into the ILM concepts in the Elasticsearch documentation, which include the note below:

" When you enable index lifecycle management for Beats or the Logstash Elasticsearch output plugin, lifecycle policies are set up automatically. You do not need to take any other actions. You can modify the default policies through Kibana Management or the ILM APIs."

Since it says no other action is needed, is it possible to roll over an index once it reaches a certain size?
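
For context, what I have in mind is a lifecycle policy whose hot phase rolls over on size, along the lines of the sketch below (the policy name and the 10gb threshold are placeholders I made up, not anything from my setup):

PUT _ilm/policy/ghostmon-rollover-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "10gb"
          }
        }
      }
    }
  }
}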

You can yeah.

Can you share your Logstash config?

Hi @warkolm ,

Here's the Logstash config:

input {
  file {
    path => "/app/input/ghost*.*"
    mode => "read"
    file_completed_action => "log"
    file_completed_log_path => "/app/processed/output.txt"
  }
}

filter {
  mutate {
    remove_field => ["host","@version","Hour","Min","Sec"]
  }

  if "cpu usage = " not in [message] and "suspendflag" not in [message] {
    drop {}
  }

  if "cpu usage = " in [message] {
    grok {
      patterns_dir => ["/etc/logstash/patterns"]
      match => {"message" => "%{mon:GM_month}-%{DOM:GM_Day} %{hour:GM_Hour}:%{min:GM_Min}:%{sec:GM_Sec} %{DATA:ghostmon_data} %{DATA:ghostmon_data}%{DATA:ghostmon_data}= %{BASE16FLOAT:GM_cpu_pct}%{GREEDYDATA:rest}"}
      match => {"path" => "%{GREEDYDATA:ghostmon_data}/%{GREEDYDATA:Ghostmon_log_filename}"}
      break_on_match => false
    }
    mutate {
      remove_field => ["path","ghostmon_data","GM_month","GM_Day","GM_Hour","GM_Min","GM_Sec","message","GM_cpu_pct","rest"]
      add_field => {"GM_Time" => "%{GM_month}/%{GM_Day} %{GM_Hour}:%{GM_Min}:%{GM_Sec}"}
      add_field => {"GM_CPU_Percentage" => "%{GM_cpu_pct}%"}
    }
    mutate {
      convert => {"GM_CPU_Percentage" => "integer"}
    }
    date {
      match => ["GM_Time","MM/dd HH:mm:ss"]
      target => "@timestamp"
      remove_field => ["GM_Time"]
    }
  }

  if "suspendflag" in [message] {
    grok {
      patterns_dir => ["/etc/logstash/patterns"]
      match => {"message" => "%{mon:month}-%{DOM:Day} %{hour:Hour}:%{min:Min}:%{sec:Sec} %{DATA:ghostmon_data} %{DATA:ghostmon_data}] %{DATA:ghostmon_data}=%{WORD:ghostmon_data}	%{DATA:ghostmon_data}=%{DATA:ghostmon_data}	%{DATA:ghostmon_data}=%{NUMBER:ghostmon_hits}	%{DATA:ghostmon_data}=%{NUMBER:GM_Suspend_Flag}%{GREEDYDATA:rest}"}
      match => {"path" => "%{GREEDYDATA:ghostmon_data}/%{GREEDYDATA:Ghostmon_log_filename}"}
      break_on_match => false
    }
    mutate {
      remove_field => ["path","ghostmon_data","message","cpu_pct","rest","month","Day","Hour","Min","Sec"]
      add_field => {"GM_Time" => "%{month}/%{Day} %{Hour}:%{Min}:%{Sec}"}
      add_field => {"GM_Hour:Min:Sec" => "%{Hour}:%{Min}:%{Sec}"}
      add_field => {"GM_Month/Day" => "%{month}/%{Day}"}
    }
    date {
      match => ["GM_Time","MM/dd HH:mm:ss"]
      target => "@timestamp"
      remove_field => ["GM_Time"]
    }
    # Sample flag definition, not official
    ruby {
      code => '
        flag = event.get("GM_Suspend_Flag").to_i
        flags = []
        flags << "0: No flag set" if flag == 0
        flags << "1: Manual Suspend" if (flag & 1) != 0
        event.set("Flag_Definition", flags)
      '
    }
  }
}
output {
  elasticsearch {
    hosts => ["172.26.207.164:9200"]
    index => "logstash-ghostmon"
    user => "elastic"
    password => "${ES_PWD}"
    action => "create"
  }
  # Confirm if the log does get stashed!
  stdout {}
}
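
The mon / DOM / hour / min / sec tokens in the grok matches come from a custom pattern file under /etc/logstash/patterns (pointed to by patterns_dir). I won't paste my exact file, but a minimal sketch of such a pattern file, with made-up definitions, looks like this:

mon \d{2}
DOM \d{2}
hour \d{2}
min \d{2}
sec \d{2}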


Ok so check out Manage existing indices | Elasticsearch Guide [8.11] | Elastic

Thanks a lot @warkolm :slight_smile: , I'll go through this and post here in case of questions.


The ILM policy + index template helped create new indices, but I don't see data being written to the newer indices. Am I missing something?
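
For reference, here's roughly what I set up from that page (a simplified sketch; the names and the 5gb threshold are just what I used while trying it out, so don't treat them as exact):

PUT _ilm/policy/ghostmon-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "5gb" }
        }
      }
    }
  }
}

PUT _index_template/ghostmon-template
{
  "index_patterns": ["ghostmon-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "ghostmon-policy",
      "index.lifecycle.rollover_alias": "ghostmon-write"
    }
  }
}

PUT ghostmon-000001
{
  "aliases": {
    "ghostmon-write": { "is_write_index": true }
  }
}

One thing I'm not sure about: my Logstash output still has index => "logstash-ghostmon", so maybe documents never go through the write alias at all, and I should either point index at the alias or use the plugin's ilm_rollover_alias / ilm_policy options instead.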

So it looks like I'm missing a shard allocation? My original index shows this message:

"Action status
Waiting for all shard copies to be active"

A quick check of the allocation stats API (GET _cat/allocation?v) shows this:

shards disk.indices disk.used disk.avail disk.total disk.percent host                   ip             node
    29        6.9gb    10.2gb     38.7gb     48.9gb           20 nexus-dev01.test.com 172.26.207.164 Nexus-dev01
    13                                                                                                 UNASSIGNED

Is this the reason the data isn't rolling over to the newer indices? If so, how do I fix it? The whole stack is installed on a single machine (single node?).
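
My guess is that the 13 unassigned shards are replica copies that have nowhere to go on a single node, so I'm planning to try something like the following (just a guess on my part; the index pattern is mine):

GET _cluster/allocation/explain

# if the unassigned ones turn out to be replicas, drop replicas on these indices:
PUT logstash-ghostmon*,ghostmon-*/_settings
{
  "index": { "number_of_replicas": 0 }
}

And probably also set number_of_replicas to 0 in the index template so future rollover indices don't hit the same thing.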

Thanks in advance

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.