Setting different ILM policies based on index name

I use the "indices" syntax to create different indexes based on the module name.
I want to apply a different ILM policy based on the index name.
How can I achieve that?

  indices:
    - index: "filebeat-netflow-%{+yyyy.MM.dd}"
      when.equals:
        event.module: "netflow"

    - index: "filebeat-cisco-%{+yyyy.MM.dd}"
      when.equals:
        event.module: "cisco"

I am currently on version 7.7.0

You can define two index templates in Elasticsearch.
In each index template you can define the ILM policy that should be used, see here
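A sketch of what such a template could look like (the names here are just examples; adjust the pattern, policy name, and alias to your setup):

PUT _template/filebeat-netflow
{
  "index_patterns": ["filebeat-netflow-*"],
  "settings": {
    "index.lifecycle.name": "filebeat-netflow_policy",
    "index.lifecycle.rollover_alias": "filebeat-netflow"
  }
}

The second template would be identical apart from the pattern, policy, and alias.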

Thank you for your reply. After reading it I connected the dots and did the following.

1.) Disabled ILM in Filebeat (note: commenting out setup.ilm.enabled leaves the default auto, so it has to be set to false explicitly)

setup.ilm.enabled: false
#setup.ilm.rollover_alias: "filebeat"
#setup.ilm.pattern: "%{now/d}-rolled"
#setup.ilm.check_exists: true
#setup.ilm.policy_file: /etc/filebeat/ilm_policy_cisco.txt
#setup.ilm.overwrite: true

2.) Copied the Filebeat template and changed the
index pattern and ILM policy accordingly

"index": {
    "lifecycle": {
      "name": "filebeat-netflow",
      "rollover_alias": "filebeat-netflow"
    }
  }

3.) Created a new policy and used the "Actions" button to assign it to the index template (maybe this was already done, but I wanted to be sure)

After deleting the indices I no longer see the duplicate indices that had already rolled over and were throwing errors in the dashboards.

I hope it will work now. I will see tomorrow.

I still get

"type" : "illegal_argument_exception",
        "reason" : "index.lifecycle.rollover_alias [filebeat-cisco] does not point to index [filebeat-cisco-2020.05.27]"

but I think adding -000001 to the index name will resolve the issue

Nooo, it still throws the same error "illegal_argument_exception"

  indices:
    - index: "filebeat-netflow-%{+yyyy.MM.dd}-000001"
      when.equals:
        event.module: "netflow"

    - index: "filebeat-cisco-%{+yyyy.MM.dd}-000001"
      when.equals:
        event.module: "cisco"

I think I am missing step 3, bootstrapping. Can I do it from the Kibana interface?

After I added aliases to the templates the error disappeared, but not for long :confused:

My Elasticsearch log says

java.lang.IllegalArgumentException: source alias [filebeat-cisco] does not point to a write index

At this point, I am sure this error occurs because there is no

"is_write_index": true

in my template

but how can I add it? Doing that through step 4 of editing the template via Kibana is not working.
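From the ILM docs, my understanding is that this flag is not set in the template at all but when bootstrapping the first index. Something like this in Kibana Dev Tools, using my alias name (untested, so I may be wrong here):

PUT filebeat-cisco-2020.05.27-000001
{
  "aliases": {
    "filebeat-cisco": {
      "is_write_index": true
    }
  }
}

But I am not sure how that fits with Filebeat creating the indices itself.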

You do not need to specify an alias in the template.
Everything you need to define in the template is described here

So just delete the alias section from your template.

Please share as much information as possible; your Filebeat output section, the template, and the ILM policy would be very useful.

Hi Simon,

so the 1st step says "Create a lifecycle policy"

My policy "filebeat-cisco_policy"

PUT _ilm/policy/filebeat-cisco_policy
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_age": "1d",
            "max_size": "200gb"
          },
          "set_priority": {
            "priority": 100
          }
        }
      },
      "cold": {
        "min_age": "7d",
        "actions": {
          "set_priority": {
            "priority": 25
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

2nd: Create an index template
I am using the default Filebeat template with minor changes in the "Index patterns" field

{
  "index": {
    "lifecycle": {
      "name": "filebeat-cisco_policy",
      "rollover_alias": "filebeat-cisco"
    },
    "mapping": {
      "total_fields": {
        "limit": "10000"
      }
    },
    "refresh_interval": "5s",
    "number_of_shards": "2",
    "query": {
      "default_field": [
        "message",
        "tags",
        "agent.ephemeral_id",
        "agent.id",
        "agent.name",
        "agent.type",
        "agent.version",
        "as.organization.name",
        "client.address",
        "client.as.organization.name",
        "client.domain",
        "client.geo.city_name",
        "client.geo.continent_name",
        "client.geo.country_iso_code",
        "client.geo.country_name",
        "client.geo.name",
        "client.geo.region_iso_code",
        "client.geo.region_name",
        "client.mac",
        "client.registered_domain",
        "client.top_level_domain",
        "client.user.domain",
        "client.user.email",
        "client.user.full_name",
        "client.user.group.domain",
        "client.user.group.id",
        "client.user.group.name",
        "client.user.hash",
        "client.user.id",
        "client.user.name",
        "cloud.account.id",
        "cloud.availability_zone",
        "cloud.instance.id",
        "cloud.instance.name",
        "cloud.machine.type",
        "cloud.provider",
        "cloud.region",
        "container.id",
        "container.image.name",
        "container.image.tag",
  ...deleted
        "fields.*"
      ]
    }
  }
}

3rd: Bootstrap an index; in my case it's done by Filebeat.

My filebeat.yml

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["ip:9200"]
  username: "elastic"
  password: "password"
  ssl.certificate_authorities: ["/etc/elasticsearch/certs/ca/ca.crt"]
  ssl.certificate: "/etc/elasticsearch/certs/filebeat/filebeat.crt"
  ssl.key: "/etc/elasticsearch/certs/filebeat/filebeat.key"
  # Protocol - either `http` (default) or `https`.
  protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  indices:
    - index: "filebeat-netflow-%{+yyyy.MM.dd}-000001"
      when.equals:
        event.module: "netflow"

    - index: "filebeat-cisco-%{+yyyy.MM.dd}-000001"
      when.equals:
        event.module: "cisco"

I have deleted the aliases from the template and deleted the indices. If I get the error again I will add the error message.

There is still some issue.


./log/elasticsearch/my-application.log

[2020-05-28T09:06:17,440][INFO ][o.e.x.i.IndexLifecycleRunner] [node-1] policy [filebeat-netflow_policy] for index [filebeat-netflow-2020.05.28-000001] on an error step due to a transitive error, moving back to the failed step [check-rollover-ready] for execution. retry attempt [21]
[2020-05-28T09:06:17,443][INFO ][o.e.x.i.IndexLifecycleRunner] [node-1] policy [filebeat-cisco_policy] for index [filebeat-cisco-2020.05.28-000001] on an error step due to a transitive error, moving back to the failed step [check-rollover-ready] for execution. retry attempt [27]
[2020-05-28T09:16:17,440][ERROR][o.e.x.i.IndexLifecycleRunner] [node-1] policy [filebeat-netflow_policy] for index [filebeat-netflow-2020.05.28-000001] failed on step [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}]. Moving to ERROR step
java.lang.IllegalArgumentException: index.lifecycle.rollover_alias [filebeat-netflow] does not point to index [filebeat-netflow-2020.05.28-000001]
        at org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:104) [x-pack-core-7.7.0.jar:7.7.0]
        at org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:173) [x-pack-ilm-7.7.0.jar:7.7.0]
        at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:329) [x-pack-ilm-7.7.0.jar:7.7.0]
        at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:267) [x-pack-ilm-7.7.0.jar:7.7.0]
        at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:183) [x-pack-core-7.7.0.jar:7.7.0]
        at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:211) [x-pack-core-7.7.0.jar:7.7.0]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
        at java.lang.Thread.run(Thread.java:832) [?:?]
[2020-05-28T09:16:17,443][ERROR][o.e.x.i.IndexLifecycleRunner] [node-1] policy [filebeat-cisco_policy] for index [filebeat-cisco-2020.05.28-000001] failed on step [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}]. Moving to ERROR step
java.lang.IllegalArgumentException: index.lifecycle.rollover_alias [filebeat-cisco] does not point to index [filebeat-cisco-2020.05.28-000001]
        at org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:104) [x-pack-core-7.7.0.jar:7.7.0]
        at org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:173) [x-pack-ilm-7.7.0.jar:7.7.0]
        at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:329) [x-pack-ilm-7.7.0.jar:7.7.0]
        at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:267) [x-pack-ilm-7.7.0.jar:7.7.0]
        at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:183) [x-pack-core-7.7.0.jar:7.7.0]
        at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:211) [x-pack-core-7.7.0.jar:7.7.0]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
        at java.lang.Thread.run(Thread.java:832) [?:?]

So how can I make it work?

Elasticsearch version is 7.7

Any ideas? I really would like to make this work.

So basically, after deleting the rollover part of the ILM policy, it all started to work as I wanted.
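For anyone who wants to keep the rollover action instead: as far as I understand, rollover only works when the output writes to the rollover alias itself rather than to a dated index name, after a write index has been bootstrapped. The output section would then look something like this (untested sketch):

output.elasticsearch:
  indices:
    - index: "filebeat-netflow"
      when.equals:
        event.module: "netflow"

    - index: "filebeat-cisco"
      when.equals:
        event.module: "cisco"

Elasticsearch then resolves each alias to its current write index, and ILM can roll it over.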

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.