Odd indices created automatically - what is happening?

Hi,

Just started using ES 2.1.1. I have several servers running Logstash 1.5.6-1. Now I noticed that, in addition to the correct daily index, which is created automatically:

ylefi_drupalsv-2016.01.08 5 2 576213 0 1.1gb 390mb

... there are plenty of odd indices like:

green open ylefi_drupalsv-2016.12.18 5 2 1 0 40.5kb 13.5kb
green open ylefi_drupalsv-2016.12.19 5 2 2 0 78.4kb 26.1kb
green open ylefi_drupalsv-2016.12.13 5 2 2 0 75.6kb 25.2kb
green open ylefi_drupalsv-2016.11.20 5 2 1 0 39.5kb 13.1kb

Those odd indices contain just something like this:

{"_scroll_id":"c2Nhbjs1OzY2MzpUN3JqaTdDZlFIS3NjR0E5T0xHSnZBOzI2Njk2Mjp2WFU3aG96RlRtT2lYSU9xNGRMRzV3OzI2Njc3MjpWTXIzU3dkSlNydTNucXZGU2tYbmV3OzY1ODppRU5QeEhhcFQ0ZUVVUzZYNEJGSDRBOzY3MzpJQ0NBb2p6UFJCcWZscnZsYnJyRXF3OzE7dG90YWxfaGl0czoxOw==","took":2,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":1,"max_score":0.0,"hits":[]}}

So who is creating those, why, and how can I stop it? If I delete them, more keep coming.

Thanks.

Interesting. I think it's more of a Logstash question, though.

It looks like misuse of, or a bug in, LS. Something is apparently PUTting scroll responses into the index...
Are you somehow reading from elasticsearch (elasticsearch input plugin)?

What is your logstash configuration?
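Side note: a quick way to confirm what those stray documents are is to base64-decode the scroll id they contain. ES 1.x/2.x scroll ids are just base64-encoded text, and ids from scan-type searches decode to a readable string starting with "scan;". A small sketch (assuming Python is handy), using the id from your post:

```python
import base64

# Scroll id copied from one of the odd documents posted above.
scroll_id = "c2Nhbjs1OzY2MzpUN3JqaTdDZlFIS3NjR0E5T0xHSnZBOzI2Njk2Mjp2WFU3aG96RlRtT2lYSU9xNGRMRzV3OzI2Njc3MjpWTXIzU3dkSlNydTNucXZGU2tYbmV3OzY1ODppRU5QeEhhcFQ0ZUVVUzZYNEJGSDRBOzY3MzpJQ0NBb2p6UFJCcWZscnZsYnJyRXF3OzE7dG90YWxfaGl0czoxOw=="

# Decodes to "scan;5;<shard-id>:<node-id>;...;total_hits:1;",
# i.e. the id of a scan-type scroll search -- not a Logstash event.
decoded = base64.b64decode(scroll_id).decode("ascii")
print(decoded)
```

So whatever writes these documents is doing scan/scroll searches and then indexing the responses somewhere it shouldn't.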

My logstash output is:

output {
  elasticsearch {
    host => "x.x.x.x"
    port => 9200
    protocol => "http"
    index => "ylefi_drupalsv-%{+YYYY.MM.dd}"
    manage_template => false
    template_overwrite => true
  }
}
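As a side note for later: in Logstash 2.x the elasticsearch output options changed, so host, port, and protocol are replaced by a single hosts setting (the plugin is HTTP-only now). A rough 2.x equivalent of the output above, keeping your placeholder address, would be:

```
output {
  elasticsearch {
    hosts => ["x.x.x.x:9200"]
    index => "ylefi_drupalsv-%{+YYYY.MM.dd}"
    manage_template => false
    template_overwrite => true
  }
}
```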

Logstash input-conf is:

input {
  file {
    path => "/var/log/syslog"
    type => "syslog"
    start_position => "end"
  }
}

input {
  file {
    path => "/var/log/apache2/access.log"
    type => "drupalsv_apache_access"
    start_position => "end"
  }
}

input {
  file {
    path => "/var/log/apache2/error.log"
    type => "drupalsv_apache_error"
    start_position => "end"
  }
}

filter {

  if [type] == "drupalsv_apache_access" {
    grok {
      match => [ "message", "%{COMBINEDAPACHELOG} %{NOTSPACE:vhost} %{NUMBER:resp_time}" ]
      add_field => { "server" => "web1.drupalsv" }
      add_field => { "service" => "synd" }
    }
  }

  if [type] == "drupalsv_apache_error" {
    grok {
      patterns_dir => [ "/etc/logstash/patterns.d" ]
      match => [ "message", "%{APACHE_ERROR_LOG}" ]
      add_field => { "server" => "web1.drupalsv" }
      add_field => { "service" => "synd" }
    }
  }

  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

This has been working fine with ES 1.x, but the problem appeared with ES 2. My Logstash is version 1, so maybe I have to upgrade it, if that has something to do with this...?

Yes, please update Logstash as well.

I moved your thread to the Logstash group.

I updated the Logstash instances to the newest 2.1.1 and deleted all the indices. The correct daily index was immediately created automatically:

green open ylefi_drupalsv-2016.01.08 5 2 15090 0 48.3mb 13.7mb

... and the odd indices also started to appear again:

green open ylefi_drupalsv-2016.12.28 5 2 1 0 39.3kb 13.1kb
green open ylefi_drupalsv-2016.12.29 5 2 2 0 79.1kb 26.3kb
...

so the problem remains.