Unable to assign logstash aliases to index

Hi Guys,

my index lifecycle policy is giving the error below:

Index lifecycle error
illegal_argument_exception: index.lifecycle.rollover_alias [logstash] does not point to index [logstash]

I tried to edit the index settings and add "index.lifecycle.rollover_alias": "logstash", but after I save, the index alias still remains "none".
Is there any way to assign the alias to the index so it takes effect?

The logstash alias is not allowed to have the same name as the index,
so your index must have another name.
https://www.elastic.co/guide/en/elasticsearch/reference/master/indices-rollover-index.html

There you can read that the index is called logstash-000001 and the alias is just logstash.

When the conditions are met, it will create a new index like logstash-000002 and add the alias logstash to it.

Like logrotate in Linux.
Read about the elasticsearch output of Logstash:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-ilm
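As a sketch of the bootstrap step the docs describe (the index name logstash-000001 is the usual convention; adjust it to your setup), you create the first index with the write alias already attached, so the rollover alias points at a real index:

```
PUT logstash-000001
{
  "aliases": {
    "logstash": {
      "is_write_index": true
    }
  }
}
```

After that, "index.lifecycle.rollover_alias": "logstash" on the index matches an alias that actually points to it, and the illegal_argument_exception goes away.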

by the way, this is my logstash config file,
I'm not sure why the index name is not logstash-00000x anymore after I upgraded from 6.7 to 7.1.
If I want to change the index name from logstash to logstash-0000x, what should I do?

input {
  tcp {
    port => 5514
    type => syslog
  }
  udp {
    port => 5514
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { hosts => ["10.3.3.30:9200"] }
  stdout { codec => rubydebug }
}

Ah, OK.
Is the logstash index like this: logstash-000001,
or is there an index just called logstash?

Because you never set up ILM in Logstash, so it is not a Logstash error.

Logstash's default output index is "logstash-%{+YYYY.MM.dd}".
If you activate ILM in the elasticsearch output, it will use the ILM settings.
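For example, a sketch of an elasticsearch output with ILM activated (the ilm_* options exist in the 7.x plugin; the policy name here is an assumption, not from this thread):

```
output {
  elasticsearch {
    hosts => ["10.3.3.30:9200"]
    # let the plugin manage rollover via ILM
    ilm_enabled => true
    ilm_rollover_alias => "logstash"
    ilm_pattern => "{now/d}-000001"
    ilm_policy => "logstash-policy"
  }
}
```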

Maybe just try it with a clean index.

Hi logger,

The logstash index is named logstash.
Before I upgraded from 6.7 to 7.1, it was logstash-00000x.
I'm not sure why the upgrade from 6.x to 7.x created so many issues, and why the index name became logstash instead of logstash-xxxxxx.

Also, all the logstash-xxxxx indices from 6.7 require reindexing.
Is there any way I can fix this?

I don't know why this happened.
You can use the reindex API https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html
to reindex or to rename the indices.
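For example, a minimal reindex request, assuming you want to copy the contents of the plain logstash index into a rollover-friendly name (the target name is an assumption):

```
POST _reindex
{
  "source": { "index": "logstash" },
  "dest":   { "index": "logstash-000001" }
}
```

Afterwards you can delete the old index and point the alias at the new one.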

But have a look at Kibana -> Management -> Upgrade Assistant.
It should list what is wrong with your indices, and there may be a button to reindex them.

Hi, I already reindexed the 6.x indices.

Do you know what this warning means?
Deprecation: [types removal] The parameter include_type_name should be explicitly specified in get template requests to prepare for 7.0. In 7.0 include_type_name will default to 'false', which means responses will omit the type name in mapping definitions.

Do you know how to rectify this?

Sorry, I don't know. Maybe it is because in versions 6 and 7 there are no mapping types as before, so it won't use types like in versions 2 and 5.

I'm giving up on troubleshooting.
It seems like the upgrade from 6.x to 7.x comes with a lot of issues.
My syslog is coming in to /var/log/messages, but I cannot output it to Elasticsearch...
There are no indices after the syslog arrives...
:frowning_face:

Hmm,
okay, but if you want to try something else:

try another index in the elasticsearch output,
like

output {
  elasticsearch {
    hosts => ["10.3.3.30:9200"]
    index => "logstash-syslog"
  }
  stdout { codec => rubydebug }
}

If this one gets created, then you have a problem with your existing index.
That could be handled by deleting the existing indices and starting from scratch.

But as I said a few posts ago: this is not a Logstash problem, it is an Elasticsearch problem.

Not sure what's wrong...
I cleared all existing indices and tried your output...
Still nothing is being indexed...
But I can see my syslog coming in to the server in /var/log/messages.
It just cannot be output to the indices...
I have been trying to troubleshoot this since early May and still cannot resolve it.

I tried deploying a fresh ELK 7.1, and everything works fine...
Just the upgrade from ELK 6.x to ELK 7.x gives me this problem...

Anyway, I really appreciate your help. Thank you.

Is it really on port 5514?
Could you run Logstash not as a service,

like

/usr/share/logstash/bin/logstash -f /path/to/your/config

Then you can see what Logstash is really doing. AND remove the elasticsearch output; then we will see if any logs are coming in.

Yes, syslog is sent via port 5514... because I receive the syslog in /var/log/messages.

With your command, I saw this:

 [root@z3elk-01 ~]#    /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-syslog.conf

    WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
    Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    [INFO ] 2019-06-03 14:47:43.495 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
    [INFO ] 2019-06-03 14:47:43.522 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
    [WARN ] 2019-06-03 14:47:43.992 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
    [INFO ] 2019-06-03 14:47:44.006 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.1.0"}
    [INFO ] 2019-06-03 14:47:44.040 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"e0fc1966-4471-429e-9bde-a6f394c054d8", :path=>"/usr/share/logstash/data/uuid"}
    [INFO ] 2019-06-03 14:47:54.029 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.3.3.30:9200/]}}
    [WARN ] 2019-06-03 14:47:54.274 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://10.3.3.30:9200/"}
    [INFO ] 2019-06-03 14:47:54.506 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
    [WARN ] 2019-06-03 14:47:54.510 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
    [INFO ] 2019-06-03 14:47:54.546 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.3.3.30:9200"]}
    [INFO ] 2019-06-03 14:47:54.562 [Ruby-0-Thread-5: :1] elasticsearch - Using default mapping template
    [INFO ] 2019-06-03 14:47:54.599 [Ruby-0-Thread-5: :1] elasticsearch - Index Lifecycle Management is set to 'auto', but will be disabled - Your Elasticsearch cluster is before 7.0.0, which is the minimum version required to automatically run Index Lifecycle Management
    [INFO ] 2019-06-03 14:47:54.601 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
    [INFO ] 2019-06-03 14:47:54.950 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x1aaa8ff2 run>"}
    [INFO ] 2019-06-03 14:47:55.385 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
    [INFO ] 2019-06-03 14:47:55.403 [[main]<tcp] tcp - Starting tcp input listener {:address=>"0.0.0.0:5514", :ssl_enable=>"false"}
    [INFO ] 2019-06-03 14:47:55.689 [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:5514"}
    [ERROR] 2019-06-03 14:47:55.802 [[main]<tcp] javapipeline - A plugin had an unrecoverable error. Will restart this plugin.

This error keeps repeating...

[ERROR] 2019-06-03 14:47:56.821 [[main]<tcp] javapipeline - A plugin had an unrecoverable error. Will restart this plugin.

What does netstat -tulpen show when Logstash isn't running? Maybe the port is already in use.

Hi logger,

[root@z3elk-01 ~]# netstat -tulpen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      0          17052      1023/sshd
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      0          17854      1030/cupsd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      0          19877      1490/master
tcp        0      0 10.3.3.41:5601          0.0.0.0:*               LISTEN      986        15833      758/node
tcp        0      0 127.0.0.1:199           0.0.0.0:*               LISTEN      0          18953      1027/snmpd
tcp6       0      0 :::5514                 :::*                    LISTEN      987        3034953    8320/java
tcp6       0      0 10.3.3.41:9200          :::*                    LISTEN      988        835476     5091/java
tcp6       0      0 10.3.3.41:9300          :::*                    LISTEN      988        834550     5091/java
tcp6       0      0 :::22                   :::*                    LISTEN      0          17054      1023/sshd
tcp6       0      0 ::1:631                 :::*                    LISTEN      0          17853      1030/cupsd
tcp6       0      0 ::1:25                  :::*                    LISTEN      0          19878      1490/master
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      987        3036946    8320/java
udp        0      0 0.0.0.0:34714           0.0.0.0:*                           70         12226      676/avahi-daemon: r
udp        0      0 0.0.0.0:161             0.0.0.0:*                           0          18952      1027/snmpd
udp        0      0 127.0.0.1:323           0.0.0.0:*                           993        14285      701/chronyd
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           70         12225      676/avahi-daemon: r
udp        0      0 0.0.0.0:5514            0.0.0.0:*                           987        3036943    8320/java
udp6       0      0 ::1:323                 :::*                                993        14286      701/chronyd

Why is there udp 0 0 0.0.0.0:5514 0.0.0.0:*?

If Logstash isn't running, this port should not be open.

If you already have a syslog server running, then you need a different port in Logstash.
It is not possible to have two services listening on the same port.

It is because I have this config:

input {
  tcp {
    port => 5514
    type => syslog
  }
  udp {
    port => 5514
    type => syslog
  }
}

TCP & UDP listen on 5514 for incoming syslog to Logstash.

That should not be a problem. But maybe try the syslog input:
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-syslog.html#plugins-inputs-syslog-port
You only need to use

input {
  syslog {
    port => 5514
    type => syslog
  }
}

Try this and show me the errors, if there are any.

Hi,

I can see my index files now...
It's just that they are named logstash, without the date and numbers...

Not sure why my default output index is not kicking in.

After I put in this,

output {
  elasticsearch {
    hosts => ["10.3.3.30:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}

it is working now!!
Indices named "logstash-2019.06.04-000001" are coming out now!
I just added

index => "logstash-%{+YYYY.MM.dd}"

Before that, in ELK 6.x, it worked without this line.
Any idea why I need to add it in ELK 7.x?
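That naming is an ILM effect: in 7.x the elasticsearch output enables ILM automatically when it talks to a 7.x cluster, and the default rollover pattern {now/d}-000001 produces names like logstash-2019.06.04-000001. If you want the plain 6.x-style daily indices back, ILM can be disabled explicitly; a sketch, assuming your plugin version supports the ilm_enabled option:

```
output {
  elasticsearch {
    hosts => ["10.3.3.30:9200"]
    # opt out of ILM to restore the 6.x daily-index naming
    ilm_enabled => false
    index => "logstash-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
```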