Filebeat: process different log paths and write data to separate indices, without Logstash, following the ILM/rollover alias defined in the template

Hello All,

I have a requirement where different log paths are defined on a server; Filebeat will read these paths and should write the data to their respective Elasticsearch indices.
The ILM policy and the required rollover alias are defined in the index template settings.
After several attempts, I'm unable to get any data in, or to segregate the data into, the different indices.

To summarize: a single filebeat.yml should process different log paths and write the data to different indices, while also following the ILM policy and the required rollover index pattern (rollover alias) defined in the templates.

Template settings (note the rollover aliases differ between the two templates):

PUT _index_template/mis-monitoring-usecases
{
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "mis-monitoring-common-policy",
          "rollover_alias": "mis-monitoring-usecases"
        },
        "default_pipeline": "mis-usecases-ingest-pipeline",
        "number_of_shards": "1",
        "number_of_replicas": "0"
      }
    },
    "mappings": {
      "properties": {
        
      }
    }
------------------------------------------------------------------------------------
PUT _index_template/mis-log
{
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "mis-monitoring-common-policy",
          "rollover_alias": "mis-log"
        },
        "default_pipeline": "mis-log-ingest-pipeline",
        "number_of_shards": "1",
        "number_of_replicas": "0"
      }
    },
    "mappings": {
      "properties": {
        
      }
    }

Filebeat.yml:

filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /k/app/LOG_ROOT/MIS/**/*.log
  fields:
    index: mis-monitoring-usecases
  ignore_older: 1h
  include_lines: ['ranv.*\|']
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  processors:
    - add_host_metadata: null
    - drop_fields:
        when:
          equals:
            agent.type: filebeat
        fields:
        - agent.hostname
        - agent.id
        - agent.type
        - agent.ephemeral_id
        - agent.version
        - log.offset
        - log.flags
        - input.type
        - ecs.version
        - host
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
- type: log
  enabled: true
  paths:
  - /l/app/LOG_ROOT/Demo/*.log
  fields:
    index: mis-log
  ignore_older: 1h
  include_lines:
  - ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z ?(.*)
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  reload.enabled: false
  processors:
    - add_host_metadata: null
    - drop_fields:
        when:
          equals:
            agent.type: filebeat
        fields:
        - agent.hostname
        - agent.id
        - agent.type
        - agent.ephemeral_id
        - agent.version
        - log.offset
        - log.flags
        - input.type
        - ecs.version
        - host.os
        - host.id
        - host.mac
        - host.architecture
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
setup.kibana:
  host: http://abc:5601
output.elasticsearch:
  hosts:
   - http://abc:9200
  index: "%{[fields.index]}-%{+yyyy.MM.dd}-000001"
monitoring.enabled: true
monitoring.elasticsearch: null
setup.ilm.enabled: false
setup.template.enabled: false

Required output indices:
mis-monitoring-usecases-2023-05-05-000001
mis-log-2023-05-05-000001

Kindly let me know how this can be done without LOGSTASH.

@stephenb would you be able to help or suggest something here?

What versions are you on... for Filebeat and Elasticsearch?

Are you trying to use data streams or indices?

Next

This should be:

  index: "%{[fields.index]}

You need to write to the write alias... not a fully qualified index name...
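
To see which concrete index a write alias currently resolves to, something like this can be used (alias name taken from your template):

GET _alias/mis-monitoring-usecases

The index whose entry carries "is_write_index": true in the response is where new documents land.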

Answer the questions above and I will take a look, although I am pretty busy right now.

@PRASHANT_MEHTA
Also, please refrain from directly mentioning people; that is not forum best practice, especially when you have just posted a topic. There are many questions on the forum, and yours is no more important than any other topic.

Hello @stephenb ,

Thanks for your quick response and for taking the time to look into this!
Filebeat and Elasticsearch are on version 7.9.1. I'm using indices (Elasticsearch indices).

Noted: I will avoid direct mentions; apologies on my end.

Many thanks

You need to follow these directions

You will need to set up everything with the templates and create an initial managed index for each.

Look at method #2 here, then generalize it for your use case with:

index: "%{[fields.index]}-%{[agent.version]}"

You could also just set the index in each input, see here:

filebeat.inputs:
- type: log
  enabled: true
  index: "mis-monitoring-usecases-{[agent.version]}"
...

- type: log
  enabled: true
  paths:
  - /l/app/LOG_ROOT/Demo/*.log
  index: "mis-logs-{[agent.version]}"

This is in the wrong place

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml

Not sure why it is in the middle of your inputs... if you use no modules, you can take it out.
In fact, I'm not sure how that is working; it should break things.
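
If you do use modules, that block belongs at the top level of the file, not inside the inputs list; a sketch of the intended layout:

filebeat.inputs:
- type: log
  # ... first input ...
- type: log
  # ... second input ...

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml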

You really need to get off 7.9.1... whatever reason you have for staying on it does not outweigh the benefits of being on a newer version.

Hello,

Below is the new config I have; the rollover alias and ILM are already defined in the index template itself.
I'm still debugging why the data is not going to the respective indices with the config below.
I'm not sure whether output.elasticsearch should be common to both log types or configured separately for the two different input types.

I read your comments above, partially understood them, and implemented the filebeat.yml below.
We will migrate to the latest ELK version within a few months; for the time being we are using 7.9.1, as requested for the paid version.

filebeat.inputs:
- type: log
  paths:
    - /l/app/mis-monitoring-usecases.log
  fields:
    log_type: mis-monitoring-usecases
  fields_under_root: true

- type: log
  paths:
    - /k/migrate/mis-log.log
  fields:
    log_type: mis-log
  fields_under_root: true

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "%{[fields.log_type]}"
  setup.ilm.enabled: true
  setup.ilm.check_exists: true
  setup.ilm.overwrite: false
  setup.ilm.pattern: '{now/d}-000001'
  setup.ilm.rollover_alias: "%{[fields.log_type]}"

Templates:

PUT _index_template/mis-monitoring-usecases
{
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "mis-monitoring-common-policy",
          "rollover_alias": "mis-monitoring-usecases"
        },
        "default_pipeline": "mis-usecases-ingest-pipeline",
        "number_of_shards": "1",
        "number_of_replicas": "0"
      }
    },
    "mappings": {
      "properties": {
        
      }
    }
------------------------------------------------------------------------------------
PUT _index_template/mis-log
{
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "mis-monitoring-common-policy",
          "rollover_alias": "mis-log"
        },
        "default_pipeline": "mis-log-ingest-pipeline",
        "number_of_shards": "1",
        "number_of_replicas": "0"
      }
    },
    "mappings": {
      "properties": {
        
      }
    }

The required index names are not appearing in Kibana; still debugging this:
mis-monitoring-usecases-2023-05-04-000001
mis-monitoring-usecases-2023-05-05-000002
mis-log-2023-05-04-000001
mis-log-2023-05-05-000002

Hi @PRASHANT_MEHTA

You did not look closely at the method #2 that I recommended, since you set everything up manually.

Try

setup.ilm.enabled: false
setup.template.enabled: false

Also, did you create the bootstrap (initial managed) indices with the write alias? If not, it will not work...
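
For reference, a bootstrap index for your mis-log template might look like this (the date is only an example):

PUT mis-log-2023.05.05-000001
{
  "aliases": {
    "mis-log": {
      "is_write_index": true
    }
  }
}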

You cannot do all of what you want with Filebeat configuration alone.

Hi @stephenb ,

I did try what you've suggested:

1) With the settings below, the indices did get created, but not in the format I needed; the index names simply came up as mis-log and mis-monitoring-usecases. My intended index names follow the rollover pattern, i.e. mis-monitoring-usecases-2023-05-05-000002,
mis-log-2023-05-04-000001. The data didn't come through into them either.

setup.ilm.enabled: false
setup.template.enabled: false

I'm also mostly getting this error: if the index name is changed, then setup.template.pattern and setup.template.name need to be added. Even after providing said pattern and name, I still get the same error.
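
For context, the settings that error refers to are Filebeat's own template options. When the output index is customized while Filebeat's template setup is still enabled, both of the following are expected (the values here are assumptions matching the index names):

setup.template.name: "mis"
setup.template.pattern: "mis-*"

They become irrelevant once setup.template.enabled: false is set, since the index templates in Elasticsearch take over.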

2) I did read the second suggestion and halted, as I can't proceed with it. The reason: I'm implementing an installer-based solution, i.e. every template, policy, and all the ELK components (Filebeat, Logstash, Elasticsearch, etc.) get installed on the respective servers and do the intended work. In this setup I can't do anything MANUAL, as suggested in METHOD 2, i.e.:

"Then, importantly, you have to create a bootstrap index; this carries the write alias pointing to the concrete index" (manual). This can't be done at my end.

Now I'm not sure what exactly is happening or how to resolve it.

My requirement is simple, but it should be done through Filebeat only:
1) There should be only one filebeat.yml with the different log paths.
2) Every log path (e.g. two different log paths) should write data to its respective index.
3) My indices should follow the ILM and rollover policy that I've defined in the Elasticsearch index templates, and the ILM pattern should also be appended to the index name, i.e. (indexName-{now/d}-000001).
4) The two log paths have two different index templates, each with its own log ingest pipeline.

After several attempts, I'm not sure why I'm not getting the desired indices with their data.
For now I'm constrained from using Logstash; otherwise it would have been easy to send data from Filebeat to Logstash and route it to the respective indices using tags. For now I can't do that.

2) Alternatively, if I'm unable to achieve the above, I was thinking of managing a single template only, sending the data for the other template into it as well, and using the LOG INGEST PIPELINE, if I can somehow define multiple patterns. For a single pattern I know how, but for parsing data coming from two different log paths I'm not sure how to provide two different patterns, and the documentation doesn't cover this.

How can I use both of the below in a single LOG INGEST PIPELINE?

"grok": [
         "field": "message" ,
"""%{TIMESTAMP_ISO8601:timestamp}-%{WORD:server_name}-%{DATA:perl_module}-%{DATA:req_id}-%{LOGLEVEL:log_level}-%{DATA:method_name}:%{NUMBER:line_number} - %{WORD:usecase.uniqueId}\|%{WORD:usecase.usecaseExecutionSummary.runningHost}\|%{WORD:usecase.usecaseExecutionSummary.useCaseName}}"""
        ]
		
"grok": {
        "field": "message",
        "patterns": [
          "%{TIMESTAMP_ISO8601:@timestamp:date}-%{WORD:host.name}-%{WORD:app.name}-%{WORD:request.id}-%{WORD:log.level}( *)-%{GREEDYDATA:log.logger}:%{NUMBER:log.origin.file.line:long} - (?<statement>(.|\n|\n)*)"
        ]
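
For reference, a single grok processor accepts a "patterns" list that is tried in order until one matches, so both line formats can be handled by one pipeline. A minimal sketch, with an assumed pipeline name and simplified field names:

PUT _ingest/pipeline/mis-combined-pipeline
{
  "description": "Parses both log line formats; the first matching pattern wins",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{TIMESTAMP_ISO8601:timestamp}-%{WORD:server_name}-%{DATA:perl_module}-%{DATA:req_id}-%{LOGLEVEL:log_level}-%{DATA:method_name}:%{NUMBER:line_number} - %{GREEDYDATA:usecase_summary}",
          "%{TIMESTAMP_ISO8601:timestamp}-%{WORD:host_name}-%{WORD:app_name}-%{WORD:request_id}-%{WORD:log_level} *-%{DATA:logger}:%{NUMBER:line_number} - %{GREEDYDATA:statement}"
        ]
      }
    }
  ]
}

Order matters, since grok stops at the first pattern that matches, so the more specific pattern should come first. That said, with each template already carrying its own default_pipeline, keeping the two pipelines separate works equally well.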

Processing a single log path input works great, but when the two log input types are combined nothing works:

WORKING filebeat.yml for single log path:

filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /l/map/ROOT/MIS/**/*.log
  ignore_older: 1h
  include_lines: ['UsecaseMonitoring.*\|']
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
setup.ilm.enabled: true
setup.ilm.check_exists: true
setup.ilm.rollover_alias: mis-monitoring-usecases
setup.ilm.pattern: '{now/d}-000001'
setup.ilm.overwrite: false
setup.kibana:
  host: 127.0.0.1:5601
output.elasticsearch:
  hosts:
  - 127.0.0.1:9200
processors:
- add_host_metadata: null
- drop_fields:
    when:
      equals:
        agent.type: filebeat
    fields:
    - agent.hostname
    - agent.id
    - agent.type
    - agent.ephemeral_id
    - agent.version
    - log.offset
    - log.flags
    - input.type
    - ecs.version
    - host
monitoring.enabled: true
monitoring.elasticsearch: null

Non-working filebeat.yml for two log input paths; here I'm not sure how to specify the rollover index and the required ILM pattern.

filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /l/app/LOG_ROOT/mis/**/*.log
  fields:
    index: mis-monitoring-usecases
  ignore_older: 1h
  include_lines: ['UsecaseMonitoring.*\|']
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  processors:
    - add_host_metadata: null
    - drop_fields:
        when:
          equals:
            agent.type: filebeat
        fields:
        - agent.hostname
        - agent.id
        - agent.type
        - agent.ephemeral_id
        - agent.version
        - log.offset
        - log.flags
        - input.type
        - ecs.version
        - host
- type: log
  enabled: true
  paths:
  - /l/tpp/G_ROOT/memo/*.log
  fields:
    index: mis-log
  ignore_older: 1h
  include_lines:
  - ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z ?(.*)
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  reload.enabled: false
  processors:
    - add_host_metadata: null
    - drop_fields:
        when:
          equals:
            agent.type: filebeat
        fields:
        - agent.hostname
        - agent.id
        - agent.type
        - agent.ephemeral_id
        - agent.version
        - log.offset
        - log.flags
        - input.type
        - ecs.version
        - host.os
        - host.id
        - host.mac
        - host.architecture
setup.kibana:
  host: 127.0.0.1:5601
output.elasticsearch:
  hosts:
   - 127.0.0.1:9200
  index: "%{[fields.index]}"
setup.ilm.enabled: false
setup.template.enabled: false
monitoring.enabled: true
monitoring.elasticsearch: null

I'll show you next week when I get back...

If you don't hear from me by Tuesday, DM me.

Again, there is no way to make it do all of this just with filebeat config.

Certainly don't need Logstash.

It should basically be the docs plus what I showed you.

Perhaps I'm missing something...

Hello ,

Again, many thanks for your time looking into this. Meanwhile I'll dig more into the documentation, although I've already tried many possible approaches.

Thanks

Try with one hardcoded name first.

Create the ILM policy
Create the template / mapping with the right pattern
Create the initial managed index with the write alias
Set the 2 settings in the filebeat.yml I showed you
Output index same as the write alias
Start Filebeat...
It should add docs to the initial managed index.

Try to force a rollover to test:

POST write-alias/_rollover

Plus, just simplify everything to start: get rid of all the extra stuff, multiline etc... Just the bare minimum, then add all that back...
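
A bare-minimum test config along those lines might look like the following sketch (path and host are placeholders):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/test/*.log    # placeholder test path
  index: mis-log             # hardcoded, must equal the write alias

setup.ilm.enabled: false
setup.template.enabled: false

output.elasticsearch:
  hosts: ["localhost:9200"]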

@PRASHANT_MEHTA

Look at this carefully; it is minimal but does everything you ask.

This is the order you need to do it in as well...

BTW, time series indices follow the pattern

my-index-yyyy.mm.dd-000001

# Create Common ILM Policy
PUT _ilm/policy/mis-monitoring-common-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "1d",
            "max_size": "50gb"
          },
          "set_priority": {
            "priority": 100
          }
        }
      },
      "delete": {
        "min_age": "7d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

# Create template with the correct pattern, alias, and ILM policy
PUT _index_template/mis-log
{
  "index_patterns": ["mis-logs-*"],
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "mis-monitoring-common-policy",
          "rollover_alias": "mis-log"
        },
        "number_of_shards": "1",
        "number_of_replicas": "0"
      }
    }
  }
}

# Create template with the correct pattern, alias, and ILM policy
PUT _index_template/mis-monitoring-usecases
{
  "index_patterns": ["mis-monitoring-usecases-*"],
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "mis-monitoring-common-policy",
          "rollover_alias": "mis-monitoring-usecases"
        },
        "number_of_shards": "1",
        "number_of_replicas": "0"
      }
    }
  }
}

# Create Initial Managed index 
PUT mis-logs-2023.05.06-000001
{
  "aliases": {
    "mis-logs":{
      "is_write_index": true 
    }
  }
}

# Create Initial Managed index 
PUT mis-monitoring-usecases-2023.05.06-000001
{
  "aliases": {
    "mis-monitoring-usecases":{
      "is_write_index": true 
    }
  }
}

My complete working filebeat.yml; this is not a snippet:

filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /Users/sbrown/workspace/sample-data/discuss/multiple-logs/logs/*.log
  index: mis-monitoring-usecases

- type: log
  enabled: true
  paths:
    - /Users/sbrown/workspace/sample-data/discuss/multiple-logs/otherlogs/*.log
  index: mis-logs

# ======================= Elasticsearch template setting =======================

setup.ilm.enabled: false
setup.template.enabled: false

setup.kibana:
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

./filebeat setup -e

filebeat -e

Results

GET _cat/indices/mis-*/?v
health status index                                     uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   mis-monitoring-usecases-2023.05.06-000001 v6zg9FSwRSK8B6LzzOg8Pw   1   0         11            0     16.6kb         16.6kb
green  open   mis-logs-2023.05.06-000001                Gx8CtLOERs-zjQXElVaCLA   1   0         11            0     16.7kb         16.7kb
GET mis-logs-2023.05.06-000001/_ilm/explain
{
  "indices" : {
    "mis-logs-2023.05.06-000001" : {
      "index" : "mis-logs-2023.05.06-000001",
      "managed" : true,
      "policy" : "mis-monitoring-common-policy",
      "lifecycle_date_millis" : 1683418881220,
      "age" : "1.05m",
      "phase" : "hot",
      "phase_time_millis" : 1683418881304,
      "action" : "unfollow",
      "action_time_millis" : 1683418881435,
      "step" : "wait-for-follow-shard-tasks",
      "step_time_millis" : 1683418881522,
      "phase_execution" : {
        "policy" : "mis-monitoring-common-policy",
        "phase_definition" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "1d"
            },
            "set_priority" : {
              "priority" : 100
            }
          }
        },
        "version" : 2,
        "modified_date_in_millis" : 1683418881161
      }
    }
  }
}

Test rollover; it works...

POST mis-logs/_rollover
{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "old_index" : "mis-logs-2023.05.06-000001",
  "new_index" : "mis-logs-2023.05.06-000002",
  "rolled_over" : true,
  "dry_run" : false,
  "conditions" : { }
}

Hello @stephenb ,

Thanks for your time and the additional effort to explain each and every config.
I now understand the significance of "# Create Initial Managed index". This was confusing, and without the explanation I would not have gotten it.

I still have a challenge with the "# Create Initial Managed index" approach. My implementation
is installer based, i.e. by clicking next -> next the end user installs all the ELK-based applications/modules with their respective configs/templates/policies etc. Nothing is done manually.

So now I want to understand an alternative to creating the initial managed index (which is manual). With every installation, no one will perform this manual index-creation step. That is the reason I was earlier using the config lines below in filebeat.yml (but this worked only for one log type and didn't work for the two log types combined, as the two rollover aliases have different names):

setup.ilm.enabled: true
setup.ilm.check_exists: true
setup.ilm.rollover_alias: mis-monitoring-usecases
setup.ilm.pattern: '{now/d}-000001'
setup.ilm.overwrite: false

Please suggest something on this. Do you think the config lines above can or should be added to each log type separately in the config, and would that work? (I do understand how you've managed to create the index name I required using the initial managed index.)

Is the below valid? I don't see any output now.

filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /Users/sbrown/workspace/sample-data/discuss/multiple-logs/logs/*.log
  index: mis-monitoring-usecases
  setup.ilm.enabled: true
  setup.ilm.check_exists: true
  setup.ilm.rollover_alias: mis-monitoring-usecases
  setup.ilm.pattern: '{now/d}-000001'
  setup.ilm.overwrite: false

- type: log
  enabled: true
  paths:
    - /Users/sbrown/workspace/sample-data/discuss/multiple-logs/otherlogs/*.log
  index: mis-logs
  setup.ilm.enabled: true
  setup.ilm.check_exists: true
  setup.ilm.rollover_alias: mis-log
  setup.ilm.pattern: '{now/d}-000001'
  setup.ilm.overwrite: false

I've tested with your approach and everything worked fine (for both templates I have separate log ingest pipelines, and those worked too); the only challenge is avoiding the manual creation of the index (initial managed index).

Many thanks.

Hello, here is the solution, for any future reference.

Step 1) First commit your index templates, to get data into the respective indices.
Step 2) If there is a log ingest pipeline to process log patterns through GROK, commit it.
Step 3) Now this is the IMPORTANT step: define your initial write indices. This is a first-time process, and these will be the reference for future rollover indices (for installer-based setups this can be scripted; see the note after the examples below).

For example:

PUT tis-log-2023.05.29-000001
{
  "aliases": {
    "tis-log": {
      "is_write_index": true
    }
  }
}

PUT tis-monitoring-usecases-2023.05.29-000001
{
  "aliases": {
    "tis-monitoring-usecases": {
      "is_write_index": true
    }
  }
}
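
Note for installer-based setups: this step is "manual" only in the sense of being a one-time API call. An install script can issue it just as well, for example with curl (host is a placeholder):

curl -X PUT "http://localhost:9200/tis-log-2023.05.29-000001" \
  -H 'Content-Type: application/json' \
  -d '{"aliases":{"tis-log":{"is_write_index":true}}}'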

Step 4) filebeat.yml:

filebeat.inputs:

- type: log
  enabled: true
  paths:
  - ${MY_LOGS}/tis/**/*.log
  index: tis-monitoring-usecases
  ignore_older: 1h
  include_lines: ['UsecaseMonitoring::write.*\|']
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  reload.enabled: false
  
- type: log
  enabled: true
  paths:
  - ${MY_LOGS}/tis/*/*.log
  index: tis-log
  ignore_older: 1h
  include_lines:
  - ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z ?(.*)
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  reload.enabled: false
  
# ======================= Elasticsearch template setting =======================

setup.ilm.enabled: false
setup.template.enabled: false

setup.kibana:
  host: ["http://abc:5601"]
# ======================= Elasticsearch Output =======================
processors:
  #- add_host_metadata: null
  - drop_fields:
      when:
        equals:
          agent.type: filebeat
      fields:
        - agent.hostname
        - agent.id
        - agent.type
        - agent.ephemeral_id
        - agent.version
        - log.offset
        - log.flags
        - input.type
        - ecs.version
        - host.os
        - host.id
        - host.mac
        - host.architecture
        - agent.name
        - filebeathost.name
output.elasticsearch:
  hosts: ["http://abc:9200"]

Now one can define any number of log paths and send data to the respective indices from a single filebeat.yml.

Why these two settings?
setup.ilm.enabled: false makes Elasticsearch use the ILM policy you defined; otherwise Filebeat creates its own policy by default.
setup.template.enabled: false lets your own template match the indices; otherwise Filebeat creates its own default template.
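
A quick way to confirm that the indices picked up this policy rather than a Filebeat default:

GET tis-log-*/_ilm/explain

The response should show "managed" : true and "policy" : "tis-monitoring-common-policy", as in the earlier example.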

Hope this helps.


Hello @stephenb,

Just one last issue: I'm getting rollover indices with the same date, but I need the current day's date with every rollover:

tis-log-2023.05.29-000003
tis-log-2023.05.29-000002
tis-log-2023.05.29-000001

tis-monitoring-usecases-2023.05.29-000003
tis-monitoring-usecases-2023.05.29-000002
tis-monitoring-usecases-2023.05.29-000001

Not sure what is causing this... The same filebeat.yml is now present on 3 servers and sending data to the indices.

Show your ILM policy...
If the index needs to roll over more than once a day, that's what you'll get
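
One more thing worth checking: the date in a rolled-over name comes from the name of the initial managed index. If the bootstrap index was created with a literal date (tis-log-2023.05.29-000001), rollover only increments the trailing counter and keeps that date. For the date to advance to the current day on each rollover, the bootstrap index can instead be created with a date-math name; the URL-encoded form of <tis-log-{now/d}-000001> looks like this:

PUT %3Ctis-log-%7Bnow%2Fd%7D-000001%3E
{
  "aliases": {
    "tis-log": {
      "is_write_index": true
    }
  }
}

As far as I recall, Elasticsearch keeps the original expression (visible as index.provided_name in the index settings) and re-resolves {now/d} each time a rollover happens.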

I've cross-checked the rollover policy, and I see the data is less than 1 GB, so each day a new rollover index should be created with the current date in its name.

PUT _ilm/policy/tis-monitoring-common-policy
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_age": "1d",
            "max_size": "5gb"
          },
          "set_priority": {
            "priority": 100
          }
        }
      },
      "delete": {
        "min_age": "4d",
        "actions": {
          "delete": {
            "delete_searchable_snapshot": true
          }
        }
      }
    }
  }
}

Show your
GET _cat/indices/tis-*/?v
