How to limit the storage size of my indices and automatically generate a new index after reaching the assigned storage size?

Good day ELK masters, I just want to ask how I can limit my per-day index storage size to, e.g., 2 GB. Can someone please help me? I already created an index lifecycle policy, but it still doesn't generate a new index when the size exceeds 2 GB.

Please, can someone help me? TIA

Hi @renatoa12

First, please be patient. This is a community forum. Please do not ping multiple times, especially after only 1 hour. In my experience, that can actually have the opposite effect on getting help. There are many questions here, and yours is no more important than any other topic.

Can you provide the actual ILM policy you defined? We can't help without seeing it.

Second, it seems like the index is still less than 2 GB, so I'm unclear why you think it's not working.

Please post the ILM policy and perhaps we can help.

Go to Kibana > Dev Tools, use the get-lifecycle-policy API, and show us what your ILM policy is.
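
For example (the policy name `my-policy` below is a placeholder; substitute your own):

GET _ilm/policy/my-policy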

Also, as a note: ILM is a background process, and the rollover sizes may not be exact, especially with small thresholds such as 1 or 2 GB. From experience I would expect roughly +/- 5%; as the indices and thresholds get larger, that percentage becomes insignificant. NOTE: this is just an estimate from experience.
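
Part of the reason is that ILM only checks rollover conditions periodically (every 10 minutes by default). If you want tighter timing in a lab, you can shorten the check interval; this is a sketch for testing only, not something to leave on in production:

PUT _cluster/settings
{
  "persistent": {
    "indices.lifecycle.poll_interval": "1m"
  }
}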

Here are some thoughts on the topic... I posted a bit about it here, and another Elastician did here.

Hi @stephenb,

Sorry sir for being impatient :slight_smile: . By the way, for the sake of testing I tried to set the maximum storage size to 1 MB, but the index keeps growing and no new index is created/generated after it reaches the assigned maximum storage size.

Yeah, 1 MB won't work... that is what I am trying to tell you...

We see this over and over: people try to test with such tiny values, and that's not what ILM is for.

I linked to some posts explaining this.

Also, you haven't actually posted your ILM policy or the index template that shows which ILM policy is being used, so we really can't debug it unless you provide the actual configurations.
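
For reference, a rollover setup needs two pieces: an ILM policy with a rollover action, and an index template that attaches the policy and the rollover alias to new indices. A minimal sketch (the policy name, template name, index pattern, and alias here are placeholders; adjust them to match your setup):

PUT _ilm/policy/my-rollover-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb"
          }
        }
      }
    }
  }
}

PUT _index_template/my-template
{
  "index_patterns": ["big_ip_waf_logs-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "my-rollover-policy",
      "index.lifecycle.rollover_alias": "logstash"
    }
  }
}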

Hi sir @stephenb,

Thanks for the reply, noted sir. Does it work if I change the maximum storage size to 1 GB? By the way, I have provided screenshots of my settings.

Index Lifecycle Policies: (screenshot)

Index Template: (screenshot)

Index Management: (screenshot)


What size did it roll over at?
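
You can check the actual sizes from Dev Tools with the cat indices API (the index pattern here is an assumption based on your screenshots):

GET _cat/indices/big_ip_waf_logs-*?v&h=index,docs.count,store.size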

Hi @stephenb,

There is no specific size it rolls over at. Per day it even reaches a size of 80 GB+; that's why I want to limit the maximum storage size to 50 GB only, but it still doesn't work.

I am still confused... the index size you show in the image is 1.09 GB, but you say it grows to more than 80 GB? That does not make sense to me, but I will let that go...

Please provide the output section of your Logstash pipeline... in formatted text, not a screenshot.

Also, did you bootstrap the initial index with the write alias, like this? I suspect not... meaning you are not writing to the write alias, which means the index will never roll over.

PUT big-ip-waf-logs-2022.05.08-000001
{
  "aliases": {
    "logstash": {
      "is_write_index": true
    }
  }
}

Hi @stephenb,

Thanks for the reply!
Please see below how I configured my logstash.conf. As you can see, my index name is "big_ip_waf_logs-%{+YYY.MM.dd}-000001"; this is to automatically generate the same name as the one I used when running the write-alias code.

Note: I only set the maximum storage size to 400 KB for testing purposes in my lab.

Logstash.conf:

input {
  udp {
    port => 514
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    # Strip double quotes from the raw message
    mutate {
      gsub => ["message", "\"", ""]
    }
    # Split the '#'-delimited ASM log into named fields
    csv {
      separator => "#"
      columns => [
        "header",
        "geo_location",
        "ip_address_intelligence",
        "src_port",
        "dest_ip",
        "dest_port",
        "protocol",
        "method",
        "uri",
        "x_forwarded_for_header_value",
        "request_status",
        "support_id",
        "session_id",
        "username",
        "violations",
        "violation_rating",
        "attack_type",
        "query_string",
        "policy_name",
        "sig_ids",
        "sig_names",
        "sig_set_names",
        "severity",
        "request",
        "violation_details"
      ]
    }
    # Parse the syslog header into timestamp, hostname, and source IP
    grok {
      match => { "header" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} ASM:%{IP:source_ip}" }
    }
    mutate {
      remove_field => [ "message", "header" ]
    }
    # Re-delimit the nested signature set names
    mutate {
      gsub => ["sig_set_names", "},{", "}#{"]
    }
    # Use the parsed syslog timestamp as the event @timestamp
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "big_ip_waf_logs-%{+YYY.MM.dd}-000001"
  }
}

I also created the write alias as you mentioned. Take note that I ran this first, before generating indices, so that it creates the first index with the same format as the index configured in logstash.conf.

PUT big_ip_waf_logs-2022.05.08-000001
{
  "aliases": {
    "logstash": {
      "is_write_index": true
    }
  }
}

It generated/rolled over when I tried to execute the retry lifecycle policy step. But I noticed that it still continues to add data into my first index, which is "big_ip_waf_logs-2022.05.10-000001", not into my "...-000002" index.



So the Logstash output should be writing to the write alias, not the actual index.
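
You can verify which concrete index currently holds the write flag (assuming the alias is named `logstash`):

GET _alias/logstash

The index whose entry shows "is_write_index": true is the one new documents go to.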

Cleanup and try again

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash"  # THIS NEEDS TO BE the write alias, not the concrete/actual index
  }
}

And as a reminder again, 400 KB will not really work... but I think I get it...

Start the process with the write alias above, then force the rollover; I think it will work the way you want.
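
To force the rollover manually instead of waiting for ILM, you can call the rollover API against the write alias (again assuming it is named `logstash`):

POST logstash/_rollover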


Hi @stephenb,

Thank you very much sir, it has now generated the 2nd index :slight_smile: I just changed the index to the write alias as you mentioned in your previous comment, and I also changed the name of my write alias to match my index name to avoid issues :slight_smile:

PUT logstash-000001 // the name should start with the index name ("logstash") used in the logstash.conf file
{
  "aliases": {
    "logstash": {
      "is_write_index": true
    }
  }
}

God bless you sir! I hope you will continue helping newbies like me :slight_smile:
Thanks
Renato


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.