Filebeat: read different log paths and write to different Elasticsearch indices, each with its own ILM policy. How?

Hello All,
I'm having a bit of a hard time understanding the best config for our setup. We are running Filebeat to ship several logs from different file locations to Elasticsearch, and each needs its own index template and policy.
Currently, the filebeat.yml configuration writes correctly to the first index, i.e. tcs-log-(index), for Path1=/l/app/tcs/....
Now I have another requirement where the log path is different, and I need this data written to a separate index, i.e. tcs-package-logs-, with its own ILM policy and rollover.
I'm unable to figure out how to tell filebeat.yml to write log data from different locations to separate indices, each with its own policy.
Any suggestions would be helpful.
Note: please let me know how this can be done without using Logstash.
If Logstash is required, then how? Ideally I don't want to use Logstash.

FOR NOW:
Writing to tcs-log works correctly. For the second path, how do I configure Filebeat to write to tcs-package with the same policy as tcs-log?

filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /J/logs/TCS/**/*.log           # WRITE THIS DATA TO INDEX tcs-log-*
  - /K/package_logs/TCS/**/*.log   # WRITE THIS DATA TO INDEX tcs-package-*
  ignore_older: 1h
  include_lines:
  - ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z ?(.*)
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.ilm.enabled: true
setup.ilm.check_exists: true
setup.ilm.rollover_alias: tcs-log
setup.ilm.pattern: '{now/d}-000001'
setup.ilm.overwrite: false
setup.kibana:
  host: http://abc:5601
output.elasticsearch:
  hosts:
  - http://abc:9200
processors:
- add_host_metadata: null
- drop_fields:
    when:
      equals:
        agent.type: filebeat
    fields:
    - agent.hostname
    - agent.id
    - agent.type
    - agent.ephemeral_id
    - agent.version
    - log.offset
    - log.flags
    - input.type
    - ecs.version
    - host.os
    - host.id
    - host.mac
    - host.architecture
monitoring.enabled: true
monitoring.elasticsearch: null

Kindly suggest how it can be done.

Thanx

Hi,

I'm still not sure how I can route data to two different indices using a single filebeat.yml.
I read the documentation, and I see a challenge due to the ILM I have defined in my config.
In the image below, I'm not sure how to achieve this, as both indices have their own rollover and ILM policy.


Filebeat 7.9.1

Thanx

You can use conditionals in the output to direct the data to different indices based on a string present in the message or the value of a custom field.

In your case it would be better to use two different inputs, each with one path, instead of putting both paths in one input.
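For example, a rough sketch using your paths (untested; adjust names as needed):

filebeat.inputs:
- type: log
  paths:
  - /J/logs/TCS/**/*.log
  fields:
    index: tcs-log
- type: log
  paths:
  - /K/package_logs/TCS/**/*.log
  fields:
    index: tcs-package

output.elasticsearch:
  hosts:
  - http://abc:9200
  # route each event based on the custom field set by its input
  indices:
  - index: "tcs-log"
    when.equals:
      fields.index: "tcs-log"
  - index: "tcs-package"
    when.equals:
      fields.index: "tcs-package"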

But I don't think you will be able to set two different ILM policies directly in Filebeat, as this is not supported.

What you need to do is create an index template for each of your indices and set the ILM policy in the template.
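For example, something along these lines (a sketch; the template, policy, and alias names are placeholders for whatever you created):

PUT _index_template/tcs-package
{
  "index_patterns": ["tcs-package-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "tcs-package-policy",
      "index.lifecycle.rollover_alias": "tcs-package"
    }
  }
}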

Hello @leandrojmp ,

Thanks for taking the time to look into this. I've already set up the templates with policy and rollover.
If I understood you correctly, do you mean two filebeat.yml files in different locations? If yes, where do I configure these locations so that Filebeat can pick up and execute each filebeat.yml?
I hope that with a single Filebeat both executions run in parallel.
THE ILM POLICY IS THE SAME; THE ROLLOVER ALIAS IS THE INDEX NAME
2) Is upsert functionality not available in Filebeat like it is in Logstash?

To summarize:
Two different log paths, each with its own logs; both have their own ILM and rollover policy, and each needs its data written to its own index.

Thanx

Hello,

Why is the config below not valid? I read many posts here and tried something like this. I've already defined my policy with rollover, and it is used in the respective index templates with a log ingest pipeline.
I always get this error while executing filebeat.yml.

filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /k/app/LOG_ROOT/MIS/**/*.log
  fields:
    index: mis-monitoring-usecases
  ignore_older: 1h
  include_lines: ['ranv.*\|']
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  processors:
    - add_host_metadata: null
    - drop_fields:
        when:
          equals:
            agent.type: filebeat
        fields:
        - agent.hostname
        - agent.id
        - agent.type
        - agent.ephemeral_id
        - agent.version
        - log.offset
        - log.flags
        - input.type
        - ecs.version
        - host
- type: log
  enabled: true
  paths:
  - /l/app/LOG_ROOT/Demo/*.log
  fields:
    index: cis-log
  ignore_older: 1h
  include_lines:
  - ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z ?(.*)
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  reload.enabled: false
  processors:
    - add_host_metadata: null
    - drop_fields:
        when:
          equals:
            agent.type: filebeat
        fields:
        - agent.hostname
        - agent.id
        - agent.type
        - agent.ephemeral_id
        - agent.version
        - log.offset
        - log.flags
        - input.type
        - ecs.version
        - host.os
        - host.id
        - host.mac
        - host.architecture
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
setup.kibana:
  host: http://abc:5601
output.elasticsearch:
  hosts:
   - http://abc:9200
  index: "%{[fields.index]}-%{+yyyy.MM.dd}-000001"
  setup.ilm.enabled: true
  setup.ilm.check_exists: true
  setup.ilm.rollover_alias: "%{[fields.index]}"
  setup.ilm.pattern: '{now/d}-000001'
  setup.ilm.overwrite: false
  setup.template.overwrite: false
  setup.template.name: "%{[fields.index]}-index-template"
  setup.template.pattern: "%{[fields.index]}-*"
monitoring.enabled: true
monitoring.elasticsearch: null

Exiting: setup.template.name and setup.template.pattern have to be set if index name is modified

Thanx

Probably because of this same problem.

The setup.* configuration needs to be in the root of the configuration, not under the output.elasticsearch.

Also, if you already have the template in Elasticsearch you do not even need these configurations; you just need a template that matches your index in Elasticsearch, and then you can remove the setup.template.* and setup.ilm.* settings.

Since you have two different indices and want to use different templates, you cannot use the setup.template.* and setup.ilm.* configurations in Filebeat; you need to manage everything in the templates on Elasticsearch.

Now I've removed the setup.template.* and setup.ilm.* settings.
I've added the config below to both input types, so that I can use it in the output to send data to the respective index:

fields:
    index: mis-monitoring-usecases

In the index template, rollover alias, and index pattern I'm using the index name only, i.e. all of them have the same name.

After making the above changes, I still get the same error:

Exiting: setup.template.name and setup.template.pattern have to be set if index name is modified

Please share your entire current filebeat.yml with the changes you made.

Below is the updated file:

filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /k/app/LOG_ROOT/MIS/**/*.log
  fields:
    index: mis-monitoring-usecases
  ignore_older: 1h
  include_lines: ['ranv.*\|']
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  processors:
    - add_host_metadata: null
    - drop_fields:
        when:
          equals:
            agent.type: filebeat
        fields:
        - agent.hostname
        - agent.id
        - agent.type
        - agent.ephemeral_id
        - agent.version
        - log.offset
        - log.flags
        - input.type
        - ecs.version
        - host
- type: log
  enabled: true
  paths:
  - /l/app/LOG_ROOT/Demo/*.log
  fields:
    index: cis-log
  ignore_older: 1h
  include_lines:
  - ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z ?(.*)
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  reload.enabled: false
  processors:
    - add_host_metadata: null
    - drop_fields:
        when:
          equals:
            agent.type: filebeat
        fields:
        - agent.hostname
        - agent.id
        - agent.type
        - agent.ephemeral_id
        - agent.version
        - log.offset
        - log.flags
        - input.type
        - ecs.version
        - host.os
        - host.id
        - host.mac
        - host.architecture
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
setup.kibana:
  host: http://abc:5601
output.elasticsearch:
  hosts:
   - http://abc:9200
  index: "%{[fields.index]}-%{+yyyy.MM.dd}-000001"
monitoring.enabled: true
monitoring.elasticsearch: null

I think you will need to put those two lines in the root of your filebeat.yml.

setup.ilm.enabled: false
setup.template.enabled: false

OK. After applying those at root level, Filebeat now starts.
I'm still confused, as all my indices are maintained by an ILM policy and have their own templates.
By setting those two lines, won't that cause my policy to be ignored, i.e. the pre-check validation I was doing earlier?
The intention is that each index follows its respective rollover and ILM policy.

INDEX TEMPLATE:

PUT _index_template/mis-monitoring-usecases
{
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "mis-monitoring-common-policy",
          "rollover_alias": "cis-monitoring-usecases"
        },
        "default_pipeline": "mis-usecases-ingest-pipeline",
        "number_of_shards": "1",
        "number_of_replicas": "0"
      }
    },
    "mappings": {
      "properties": {
        
      }
    }
  },
  "index_patterns": [
    "mis-monitoring-usecases-*"
  ],
  "composed_of": []
}

mis-monitoring-common-policy is applied to all indices (common to all of them), but the rollover aliases are different, i.e. each rollover alias is the same as its index name.

Current config:

filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /k/app/LOG_ROOT/MIS/**/*.log
  fields:
    index: mis-monitoring-usecases
  ignore_older: 1h
  include_lines: ['ranv.*\|']
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  processors:
    - add_host_metadata: null
    - drop_fields:
        when:
          equals:
            agent.type: filebeat
        fields:
        - agent.hostname
        - agent.id
        - agent.type
        - agent.ephemeral_id
        - agent.version
        - log.offset
        - log.flags
        - input.type
        - ecs.version
        - host
- type: log
  enabled: true
  paths:
  - /l/app/LOG_ROOT/Demo/*.log
  fields:
    index: cis-log
  ignore_older: 1h
  include_lines:
  - ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z ?(.*)
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  reload.enabled: false
  processors:
    - add_host_metadata: null
    - drop_fields:
        when:
          equals:
            agent.type: filebeat
        fields:
        - agent.hostname
        - agent.id
        - agent.type
        - agent.ephemeral_id
        - agent.version
        - log.offset
        - log.flags
        - input.type
        - ecs.version
        - host.os
        - host.id
        - host.mac
        - host.architecture
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
setup.kibana:
  host: http://abc:5601
output.elasticsearch:
  hosts:
   - http://abc:9200
  index: "%{[fields.index]}-%{+yyyy.MM.dd}-000001"
monitoring.enabled: true
monitoring.elasticsearch: null
setup.ilm.enabled: false
setup.template.enabled: false

The ILM policies and index templates are stored in Elasticsearch. If you have a template whose index pattern matches your index name, and that template also has a lifecycle configured, it will work.
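You can verify both in Kibana Dev Tools, e.g.:

GET _index_template/mis-monitoring-usecases
GET _ilm/policy/mis-monitoring-common-policy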

OK, understood. Then would it be fine if I removed these altogether? Would it still work?

setup.ilm.enabled: false
setup.template.enabled: false

No, if you remove them it will not work, because the default value for both is true.

Elastic assumes that you will use Filebeat to send logs to the default index or data stream, which is not what you want.

Hello,

The config below works fine for Filebeat, and two indices are now created in Kibana. But here is the
current challenge: from Filebeat I'm processing log lines and writing to the mis-monitoring-usecases index, and from Logstash I'm also running one Perl file that writes to the same index.
I.e. I want the data processed by both Filebeat and Logstash to go to the same current rollover index, but this is not happening. How can this be done?

I'm confused about why the data is not going to the same index; instead, different indices are created, and the rollover alias is not pointing to the Logstash-created index either.

Filebeat:

filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /k/app/LOG_ROOT/MIS/**/*.log
  fields:
    index: mis-monitoring-usecases
  ignore_older: 1h
  include_lines: ['ranv.*\|']
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  processors:
    - add_host_metadata: null
    - drop_fields:
        when:
          equals:
            agent.type: filebeat
        fields:
        - agent.hostname
        - agent.id
        - agent.type
        - agent.ephemeral_id
        - agent.version
        - log.offset
        - log.flags
        - input.type
        - ecs.version
        - host
- type: log
  enabled: true
  paths:
  - /l/app/LOG_ROOT/Demo/*.log
  fields:
    index: cis-log
  ignore_older: 1h
  include_lines:
  - ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z ?(.*)
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  reload.enabled: false
  processors:
    - add_host_metadata: null
    - drop_fields:
        when:
          equals:
            agent.type: filebeat
        fields:
        - agent.hostname
        - agent.id
        - agent.type
        - agent.ephemeral_id
        - agent.version
        - log.offset
        - log.flags
        - input.type
        - ecs.version
        - host.os
        - host.id
        - host.mac
        - host.architecture
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
setup.kibana:
  host: http://abc:5601
output.elasticsearch:
  hosts:
   - http://abc:9200
  index: "%{[fields.index]}-%{+yyyy.MM.dd}-000001"
monitoring.enabled: true
monitoring.elasticsearch: null
setup.ilm.enabled: false
setup.template.enabled: false

LOGSTASH:

input {
   exec {
      command => '$M_APP/mis/monitoring/scripts/monitoring_core.ksh -c $DM_CONFIG_DIR/mis/configuration.properties -e tc_mring_usecases.pl'
      schedule => "* */2 * * * *"
   }
}

filter {
  if [message] =~ "^\{.*\}[\s\S]*$" {
    json {
      source => "message"
      target => "parsed_json"
      remove_field => "message"
    }

    split {
      field => "[parsed_json][mis]"
      target => "usecase"
      remove_field => [ "parsed_json" ]
    }
  }
  else {
    drop { }
  }
}

output {
  elasticsearch {
    hosts => "http://abc:9200"
    ilm_pattern => "{now/d}-000001"
    ilm_rollover_alias => "mis-monitoring-usecases"
    ilm_policy => "mis-monitoring-common-policy"
    doc_as_upsert => true
    document_id => "%{[usecase][uniqueId]}"
  }
}



You need to share some context.

To which index is the data from Filebeat going? And to which index is the data from Logstash going?

Do you have any error or warning logs in Logstash?

I do not use rollover, but it seems that you have some misconfiguration here.

In your template you have this:

      "index": {
        "lifecycle": {
          "name": "mis-monitoring-common-policy",
          "rollover_alias": "cis-monitoring-usecases"
        }

But in your logstash output you have this:

ilm_rollover_alias => "mis-monitoring-usecases"

If I'm not wrong those should be the same.
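That is, assuming mis-monitoring-usecases is the alias you intend to use, the settings block in the template would need to read:

      "index": {
        "lifecycle": {
          "name": "mis-monitoring-common-policy",
          "rollover_alias": "mis-monitoring-usecases"
        }
      }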

Hello,

Apologies for not putting this across correctly.
I will try to explain the problem I'm facing; please guide me on how it can best be resolved.

1) I have index templates that already have the ILM policy and rollover defined.
2) Sources of data: in filebeat.yml I've configured two log paths, both of which write to their respective indices,
i.e. the mis-log index and the mis-monitoring-usecases index.
3) Here is the issue: Logstash also parses data from one backend script and writes to the mis-monitoring-usecases index. Here Logstash conflicts and says it is not pointing to the defined rollover index.

Both the log path configured in Filebeat for the mis-monitoring-usecases index and the data coming from Logstash should always write to the same current rollover index, e.g. mis-monitoring-usecases-2023.03.21-000004.

For the mis-log index: the data is already being written with rollover to the index (Server 1), e.g.:
cis-log-2023.03.21-000006
cis-log-2023.03.20-000005

Now, the filebeat.yml shared above (Server 2) has a path for cis-log, and this should point to the current write rollover index, i.e. cis-log-2023.03.20-000006. Instead, a new index is created and the data is written to cis-log-2023.03.21-000001, which is not what I want.

The data in the image above, i.e. the one document in 000001, was expected to land in 000007;
mis-log-2023.03.21-000001 was created by Server 2, and mis-log-2023.03.21-000001 was created by Server 1.

I hope this clarifies things. Below I've shared the filebeat.yml and Logstash config:

ERROR WHEN ONLY FILEBEAT IS RUN:

illegal_argument_exception: index.lifecycle.rollover_alias [mis-monitoring-usecases] does not point to index [mis-monitoring-usecases-2023.03.21-000001


illegal_argument_exception: index.lifecycle.rollover_alias [mis-log] does not point to index [mis-log-2023.03.21-000001
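For reference, checking where the aliases currently point (in Kibana Dev Tools):

GET _alias/mis-monitoring-usecases
GET _alias/mis-log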

FILEBEAT.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /K/app/LOG_NEW/MIS/**/*.log
  fields:
    index: mis-monitoring-usecases
  ignore_older: 1h
  include_lines: ['DetailsInLog.*\|']
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  processors:
    - add_host_metadata: null
    - drop_fields:
        when:
          equals:
            agent.type: filebeat
        fields:
        - agent.hostname
- type: log
  enabled: true
  paths:
  - /K/app/ROOT/Demo/*.log
  fields:
    index: mis-log
  ignore_older: 1h
  include_lines:
  - ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z ?(.*)
  multiline.type: pattern
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}Z
  multiline.negate: true
  multiline.match: after
  scan_frequency: 30s
  harvester_limit: 100
  close_inactive: 30m
  close_removed: true
  clean_removed: true
  reload.enabled: false
  processors:
    - add_host_metadata: null
    - drop_fields:
        when:
          equals:
            agent.type: filebeat
        fields:
        - agent.hostname
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
setup.kibana:
  host: http://abc:5601
output.elasticsearch:
  hosts:
   - http://abc:9200
  index: "%{[fields.index]}-%{+yyyy.MM.dd}-000001"
setup.ilm.enabled: false
setup.template.enabled: false
monitoring.enabled: true
monitoring.elasticsearch: null

LOGSTASH

input {
   exec {
      command => '$DM_APP/mis/monitoring/scripts/monitoring_core.ksh -c $DM_CONFIG_DIR/mis/configuration.properties -e mc_monitor_ucases.pl'
      schedule => "* */2 * * * *"
   }
}

filter {
  if [message] =~ "^\{.*\}[\s\S]*$" {
    json {
      source => "message"
      target => "parsed_json"
      remove_field => "message"
    }

    split {
      field => "[parsed_json][mis]"
      target => "usecase"
      remove_field => [ "parsed_json" ]
    }
  }
  else {
    drop { }
  }
}

output {
  elasticsearch {
    hosts => "http://abc:9200"
    ilm_pattern => "{now/d}-000001"
    ilm_rollover_alias => "mis-monitoring-usecases"
    ilm_policy => "mis-monitoring-common-policy"
    doc_as_upsert => true
    document_id => "%{[usecase][uniqueId]}"
  }
}

I'm sorry, but I could not understand; what you are trying to do is really confusing.

Can you summarize?

Where is Logstash writing, and where should it write? Do you have any error logs in Logstash? Please share.

Where is Filebeat writing, and where should it write? Do you have any error logs in Filebeat?

From what I could understand, it seems that your issue is in your Filebeat configuration.

index: "%{[fields.index]}-%{+yyyy.MM.dd}-000001"

I do not use rollovers, but when you do, you need to send write requests to the rollover alias, not to a backing index. You are sending them to a backing index, and you cannot do that. Try to change it; I think you just need to use %{[fields.index]}.
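Something like this, for example (a sketch based on your config, untested):

output.elasticsearch:
  hosts:
  - http://abc:9200
  index: "%{[fields.index]}"

This way every write request goes to the alias (mis-log or mis-monitoring-usecases), and Elasticsearch resolves it to the current write index, so ILM can manage the rollover.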

Hello @leandrojmp ,

My bad for the confusion. I'll try to explain; this is a little difficult to put into words.
I will go step by step.
For the filebeat.yml and index template, please refer to the posts above.

What I'm trying to do:
1) Have a single filebeat.yml.
2) This filebeat.yml is configured with several different log paths. Each log path writes
data to a different index; the indices have different rollover aliases but the same ILM policy, defined in the templates.
3) Now consider this filebeat.yml installed on 7 servers. If any of the log paths is the same across these 7 servers, the data should always be written to the current rollover index.

Example:
1) filebeat.yml is configured to write data to two indices, i.e. mis-log and mis-monitoring-usecases. For each of them the ILM policy and rollover are defined in the template.

The issue is that I don't get the required index pattern in Kibana:
with the same yml I get mis-log-2023.03.22-000001 and mis-monitoring-usecases-00001.

Now consider: if any server with this filebeat.yml is configured for the mis-log and mis-monitoring-usecases indices, it should write data to the current rollover index.
In the scenario where Filebeat is installed on a new server, it should not create new indices for an already-defined log path; instead it should write to the current rollover index.

With the config below, the index with the required rollover is not getting created either.

output.elasticsearch:
  hosts:
   - http://abc:9200
  index: "%{[fields.index]}-%{+yyyy.MM.dd}-000001"
setup.ilm.enabled: false
setup.template.enabled: false

If the above is solved, I'll then explain the Logstash part.
I hope this explains it to some extent.

Thanx

As I mentioned in my previous post, if you are using rollover you should point to the rollover alias; you are pointing to a backing index.

You need to change this to the name of the rollover alias.
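Also, for rollover to work, the alias has to exist and point to a write index before the first document arrives. If it does not exist yet, it is normally bootstrapped once with the first backing index, e.g. (a sketch; the URL-encoded name below is <mis-monitoring-usecases-{now/d}-000001>):

PUT %3Cmis-monitoring-usecases-%7Bnow%2Fd%7D-000001%3E
{
  "aliases": {
    "mis-monitoring-usecases": {
      "is_write_index": true
    }
  }
}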