Logstash not creating (some?) indexes in ES, no error in logs

Hi all :slight_smile:

I have the following configuration:

input {
    beats {
        port => "5044"
        host => ""
    }
    rabbitmq {
        host => "localhost"
        port => 5672
        user => "yyyyyyy"
        password => "xxxxxxx"
        vhost => "lala"
        queue => "lolCalls"
        add_field => {
            "[@metadata][beat]" => "lolCalls"
            "[@metadata][type]" => "event"
        }
    }
}

filter {
    # Data gets transformed here, shouldn't have an impact however
}

output {
    elasticsearch {
        hosts => "localhost"
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
    }
    #stdout { codec => rubydebug { metadata => true } }
}

Indexes for everything coming in from either filebeat or metricbeat get created just fine and accept incoming data. Their status is yellow, but this is my development test server, so that's normal since I have only one node:

GET _cat/indices

yellow open metricbeat-2017.09.05 YVASOlvETmONgcfF88mITg 5 1 134374 0   43.7mb   43.7mb
yellow open filebeat-2017.09.05   Zwn4HLKaQYKec5pA_-fJPw 5 1   1471 0 1006.2kb 1006.2kb
yellow open .kibana               hRbZQrUpTh-TBEQmcRG8-A 1 1     35 0    115kb    115kb

However, my lolCalls-(date) index just won't be created. If I enable the stdout output as well, I can see my data just fine:

{
       "basicInformation" => {
                  "endpoint" => "XXXXX",
        "siteClassification" => "XXXXX",
                 "machineId" => "XXXXX",
               "siteVariant" => "XXXXX",
                      "ruid" => "XXXXX",
                "frontendId" => 999
    },
                   "curl" => {
               "headerSize" => 346,
          "preTransferTime" => 0.062061,
           "nameLookupTime" => 0.031032,
             "downloadSize" => 3417,
             "redirectTime" => 0,
                "totalTime" => 0.191588,
              "connectTime" => 0.037858,
                 "httpCode" => 200,
              "requestSize" => 358,
        "startTransferTime" => 0.191565
    },
             "@timestamp" => 2017-09-05T13:44:54.027Z,
              "@metadata" => {
        "beat" => "lolCalls",
        "type" => "event"
    },
               "@version" => "1",
                   "tags" => [
        [0] "_geoip_lookup_failure"
    ],
        "APICallResponse" => [
        [0] {
            "XXXXX" => {
                "XXXXX" => [
                    // A lot of data
                ]
            },
            "direct_payment" => 1,
                   "date" => "2017-09-22"
        }
    ]
}

Where can I begin to look? The rabbitmq connection seems to be fine, as I can filter on it and the entry actually gets deleted from the queue.
The only difference between the file- and metricbeat entries is that I already had a template for , but that shouldn't affect the creation of an index, right?


Ok, found the problem!

After turning on debug mode in logstash, I noticed no strange behaviour. Logstash was passing the message to the output plugin, and that was it.

Then I turned my eyes to elasticsearch. So I turned on debug mode there and, to my surprise... there was an actual error!

[2017-09-06T14:49:33,907][DEBUG][o.e.c.s.ClusterService   ] [Va9iXHa] processing [create-index [lolCalls-2017.09.06], cause [auto(bulk api)]]: execute
[2017-09-06T14:49:33,909][DEBUG][o.e.c.m.MetaDataCreateIndexService] [Va9iXHa] [lolCalls-2017.09.06] failed to create
org.elasticsearch.indices.InvalidIndexNameException: Invalid index name [lolCalls-2017.09.06], must be lowercase
        at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validateIndexName(MetaDataCreateIndexService.java:144) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validate(MetaDataCreateIndexService.java:495) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.access$000(MetaDataCreateIndexService.java:106) ~[elasticsearch-5.5.2.jar:5.5.2]
        at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$1.execute(MetaDataCreateIndexService.java:239) ~[elasticsearch-5.5.2.jar:5.5.2]

So, the solution was simple: convert the index name to lowercase, and the problem was solved. Logstash (well, actually elasticsearch) is now able to create the index.
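For anyone hitting the same error: one way to apply the fix is inside the Logstash pipeline itself, rather than changing the producer. This is a sketch, assuming the same `[@metadata][beat]` field as in my config above — adjust the field name to yours:

```
filter {
    mutate {
        # Lowercase the beat name stored in @metadata so that the
        # resulting index name (e.g. "lolcalls-2017.09.06") is valid:
        # Elasticsearch requires index names to be all lowercase.
        lowercase => [ "[@metadata][beat]" ]
    }
}
```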

Just leaving this here for other people to read.


@unreal4u I'm glad to hear you figured it out. I wanted to add a distinction between Logstash creating indices, and what Logstash actually does.

By default, Elasticsearch allows you to send documents to a named index, whether that index exists or not. If it doesn't exist, Elasticsearch will create it (and apply any index templates you may have that match the index name/pattern). This is what Logstash relies on: at no point does it actually issue a "create index" API call, as doing so would interrupt the data stream. This is a common misconception, but a key one to understand when troubleshooting issues such as this one. Had you known this, you probably would have looked at the Elasticsearch logs sooner, saving yourself some troubleshooting time.
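To illustrate the point with a hypothetical example (index and field names here are made up), simply indexing a document is enough; no explicit create-index call is ever sent:

```
POST lolcalls-2017.09.06/event
{
  "httpCode": 200
}

GET _cat/indices
```

Elasticsearch creates `lolcalls-2017.09.06` on the fly while handling the index (or bulk) request, which is why any failure during that implicit creation shows up in the Elasticsearch logs rather than in Logstash's.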

I add this here in the hopes that it helps other users who may find your post.

Indeed, after analyzing the logs I noticed that the index wasn't created by logstash itself, but rather by elasticsearch, which then made sense.

Just to add my few cents for all those interested who are just starting out in the Java/ELK stack world: you can enable debug logging (for elasticsearch at least) by editing /etc/elasticsearch/log4j2.properties and changing the logger level to "debug". This will make the daemon start in debug mode. You can also stop the daemon and pass the log level via the CLI, which was useful in my case for logstash.
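As a sketch (exact property names may differ between versions; this reflects the stock Elasticsearch 5.x log4j2.properties), the relevant line looks like:

```
# /etc/elasticsearch/log4j2.properties
# Raise the root logger from its default level to debug
rootLogger.level = debug
```

For Logstash, the CLI equivalent is passing `--log.level=debug` when starting the process, which avoids editing any files.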

There is also a trace level, but in this case the error already appeared at the debug level. Arguably this should be logged as a warning instead, but I guess elasticsearch has its reasons.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.