Elasticsearch curator problem

Hi,

I use Curator to delete old logs. It works for all indices, but it does not work for indices with this name:

%{[@metadata][beat]}-

---
# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True.  If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 30 days (based on index name), for logstash-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: %{[@metadata][beat]}-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 30

I get this error message:

Unable to parse YAML file. Error: while scanning for the next token
found character '%' that cannot start any token
  in "<unicode string>", line 21, column 14:
          value: %{[@metadata][beat]}-
                 ^

That pattern is a Logstash sprintf reference and is resolved from the data of the events being processed. Curator does not have that information available, so you will need to create a pattern that matches the indices Logstash has actually created, and that will depend on the Beats you are using.
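
As an untested sketch, assuming for the sake of example that the indices you want to delete are named filebeat-YYYY.MM.dd and metricbeat-YYYY.MM.dd, you could replace the prefix filter with a quoted regex so the value is a plain string that the YAML parser accepts:

actions:
  1:
    action: delete_indices
    description: >-
      Delete Beats indices older than 30 days (based on index name).
    options:
      ignore_empty_list: True
      disable_action: False
    filters:
    # Assumes the Beats in use are Filebeat and Metricbeat; adjust the regex
    # to whatever index names Logstash has actually created.
    - filtertype: pattern
      kind: regex
      value: '^(filebeat|metricbeat)-.*$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 30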

This is my output configuration for Beats:

output {
  elasticsearch {
    hosts => "myelasticsearch.domain:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

I don't understand why I see this index name in Elasticsearch. I also see filebeat and metricbeat indices:

health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   network-syslog-2017.09.13       HWsbC7HxQv2YItKWwzk6Hw   5   1     858796            0    783.9mb        391.9mb
green  open   %{[@metadata][beat]}-2017.07.14 6cyvNYrOR1-r_1dsJ7nVhg   5   1      26644            0     38.3mb         19.1mb
green  open   %{[@metadata][beat]}-2017.07.07 djaxcwFcTjCllf5izNLdGA   5   1      26412            0     37.8mb         18.9mb
green  open   %{[@metadata][beat]}-2017.09.21 5HLdaIADSTeWCBe0iSkCbQ   5   1    5060096            0     12.6gb          6.3gb
green  open   logstash-2017.09.21             CJ2WzauUTwS0YOg75akp7A   5   1    2401142            0     10.6gb          5.3gb
green  open   filebeat-2017.09.04             yDUW7T4JT7qsdoaMQpDFfg   5   1     455902            0        1gb        528.2mb
green  open   %{[@metadata][beat]}-2017.07.05 YzL7ARWCRXCCjZWVTnmb4w   5   1      26469            0     38.2mb         19.1mb
green  open   %{[@metadata][beat]}-2017.08.09 nuls_138ToWKnqDBbg4Pyg   5   1      80289            0     89.2mb         44.6mb

That means that some of your data may not come from Beats and therefore does not have the [@metadata][beat] field set, so the unresolved sprintf reference ends up in the index name literally. That is not a very good index name, and you should look to correct this in your Logstash config.
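
To illustrate the behaviour (a hypothetical test pipeline, not something to run in production): if an event reaches this output without a [@metadata][beat] field, Logstash leaves the %{...} reference unresolved and creates the index with that literal name.

# Hypothetical pipeline: the generator event has no [@metadata][beat] field,
# so the sprintf reference is not substituted and Elasticsearch creates an
# index whose name literally starts with "%{[@metadata][beat]}-".
input {
  generator {
    count   => 1
    message => "not from a beat"
  }
}
output {
  elasticsearch {
    hosts => "myelasticsearch.domain:9200"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}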

How can I make sure Beats events are sent to the right output?

I tried this, but I don't think it's working:

output {
  if [type] == "beat" {
    elasticsearch {
      hosts => "myelasticsearch.domain:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}

Look at the data in those indices and see if you can identify where they are coming from. You could also populate this field with a default value in the filter section if it is not already set. I have provided an example below (not tested):

# In the filter section: if the event has no [@metadata][beat] field,
# give it a default value so the index name can resolve.
if ![@metadata][beat] {
  mutate {
    add_field => { "[@metadata][beat]" => "default" }
  }
}
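
Alternatively (also untested), you could handle it in the output section instead and send events that do not carry [@metadata][beat] to a differently named fallback index, reusing the settings from your existing output. The fallback index name here is just an example:

output {
  if [@metadata][beat] {
    # Events that came from a Beat keep the per-beat daily index
    elasticsearch {
      hosts => "myelasticsearch.domain:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  } else {
    # Everything else goes to a plain daily index
    elasticsearch {
      hosts => "myelasticsearch.domain:9200"
      index => "logstash-%{+YYYY.MM.dd}"
    }
  }
}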

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.