Logstash output index names are coming out incorrectly: %{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}

Our ELK stack was on version 6.3 before being upgraded to 6.8 and then 7.2. Just to preface, I'm not very experienced with ELK (I took over from the previous sysadmin who set this up), so I can't tell whether anything is wrong with the configuration. :frowning:

Ever since upgrading from 6.8 to 7.2, Logstash has stopped indexing to the name format "logstash-2018.xx.xx"; instead, the index names come out as shown in the screenshot below:

When I re-index the above under the correct name (e.g. logstash-2019.09.04), I can view the logs correctly.

I have also tried changing the index => setting to "logstash-%{+YYYY.MM.dd}", but for some reason nothing gets indexed at all then. When I revert to the index name above, events are output to Elasticsearch again, but I can't view anything due to the incorrect name format:
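From what I've read, one way to check what the events actually contain at output time would be to temporarily print them with their hidden metadata included (I haven't tried this yet, so this is just a sketch from the docs):

```
output {
  # Temporary debugging output: prints each event including [@metadata],
  # which is normally hidden from outputs, so I could see whether
  # [@metadata][beat] and [@metadata][version] exist at all.
  stdout { codec => rubydebug { metadata => true } }
}
```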

This is the Logstash configuration for input/filter/output:

```
input {
  syslog {
    port => 514
    tags => ["syslog"]
  }
}

filter {
  if "syslog" in [tags] and [logsource] =~ "hostname" {
    mutate { gsub => ["message", "\r\n", " "] }
    mutate { gsub => ["message", "The following activities have occurred: Op: ", " "] }
    split { terminator => "Op: " }
    grok {
      match => { "message" => "%{DATA:File_Operation}: %{DATA:File_Name} User: %{DATA:User_name} %{GREEDYDATA:syslog_message}" }
    }
  } else if "syslog" in [tags] {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  if "syslog" in [tags] {
    elasticsearch {
      hosts => "hostname:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
```
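My guess is that the index setting in the elasticsearch output above is the stock Beats example, but the syslog input never sets [@metadata][beat] or [@metadata][version], so those references are written out literally as the index name. A sketch of what I'm thinking of trying instead (untested, index name chosen by me):

```
output {
  if "syslog" in [tags] {
    elasticsearch {
      hosts => "hostname:9200"
      manage_template => false
      # syslog events carry no Beats metadata, so use a plain
      # date-based index name instead of the %{[@metadata]...} pattern
      index => "syslog-%{+YYYY.MM.dd}"
    }
  }
}
```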

I checked logstash-plain.log after restarting Logstash, but I don't think there's anything useful in it:

```
[2019-09-04T14:12:45,179][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-09-04T14:13:12,331][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-04T14:13:12,345][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elk.hostname:9200"]}
[2019-09-04T14:13:12,363][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-09-04T14:13:12,363][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elk.hostname:9200/]}}
[2019-09-04T14:13:12,370][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elk.hostname:9200/"}
[2019-09-04T14:13:12,377][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-09-04T14:13:12,377][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-04T14:13:12,396][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elk.hostname:9200"]}
[2019-09-04T14:13:12,644][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-09-04T14:13:13,702][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2019-09-04T14:13:13,708][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>750, :thread=>"#<Thread:0x19992f43 run>"}
[2019-09-04T14:13:13,936][INFO ][logstash.inputs.lumberjack] Starting lumberjack input listener {:address=>""}
[2019-09-04T14:13:15,593][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>""}
[2019-09-04T14:13:15,746][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>""}
[2019-09-04T14:13:15,780][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
[2019-09-04T14:13:16,077][INFO ][logstash.inputs.syslog   ] Starting syslog udp listener {:address=>""}
[2019-09-04T14:13:16,110][INFO ][logstash.inputs.udp      ] Starting UDP listener {:address=>""}
[2019-09-04T14:13:16,141][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2019-09-04T14:13:16,143][INFO ][org.logstash.beats.Server] Starting server on port: 5045
[2019-09-04T14:13:16,197][INFO ][logstash.inputs.syslog   ] Starting syslog tcp listener {:address=>""}
[2019-09-04T14:13:16,282][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-09-04T14:13:16,315][INFO ][logstash.inputs.udp      ] UDP listener started {:address=>"", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[2019-09-04T14:13:16,755][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
```

In the ES logs I don't find any errors related to Logstash. If I re-index the incorrectly named indices I can read them, but I'd rather it work automatically as intended.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.