Index not created after template update

Dear Community,

I need your help again. Here is the situation:
I've downloaded the default Logstash template with:

curl -XGET '192.168.100.92:9200/_template/logstash?pretty' > logstash-template.json

updated the default logstash template to:

{
  "index_patterns" : [
    "logstash-*"
  ],
  "settings" : {
    "number_of_shards" : 1,
    "number_of_replicas" : 0,
    "index" : {
      "refresh_interval" : "5s"
    }
  },
  "mappings" : {
    "_default_" : {
      "dynamic_templates" : [
        {
          "message_field" : {
            "path_match" : "message",
            "match_mapping_type" : "string",
            "mapping" : {
              "type" : "text",
              "norms" : false
            }
          }
        },
        {
          "string_fields" : {
            "match" : "*",
            "match_mapping_type" : "string",
            "mapping" : {
              "type" : "text",
              "norms" : false,
              "fields" : {
                "keyword" : {
                  "type" : "keyword",
                  "ignore_above" : 256
                }
              }
            }
          }
        }
      ],
      "properties" : {
        "@timestamp" : {
          "type" : "date"
        },
        "@version" : {
          "type" : "keyword"
        },
        "geoip" : {
          "dynamic" : true,
          "properties" : {
            "ip" : {
              "type" : "ip"
            },
            "location" : {
              "type" : "geo_point"
            },
            "latitude" : {
              "type" : "half_float"
            },
            "longitude" : {
              "type" : "half_float"
            }
          }
        }
      }
    }
  },
  "aliases" : { }
}

The only things I added are the number_of_shards and number_of_replicas settings. I stopped Logstash, uploaded the template with the following command, and then started Logstash again:

curl -XPUT -H 'Content-Type: application/json' '192.168.100.92:9200/_template/logstash' -d "@logstash-template.json" 

directly from the server.
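
To verify the upload, one can fetch the template back and check the changed settings (a quick sketch against the same host as above):

curl -s '192.168.100.92:9200/_template/logstash?pretty' | grep number_of_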

Since then, no new index has been created. An index should have been created, because the source log files are delivered via syslog every few seconds, so new logs definitely exist.

After this I started Logstash in debug mode and tried to figure out why no index is created. The last index is from 2019-01-19, while the new log entries are from today.

In the debug log file I can see that Logstash reads those source log files:

[2019-01-21T15:31:18,871][DEBUG][logstash.inputs.file     ] Received line {:path=>"/LOGS/ASA/10.0.99.254.log", :text=>"2019-01-21T15:26:00+01:00 ASA-IX : %ASA-6-302015: Built outbound UDP connection 64916574 for outside:xxx.xxx.xxx.xxx/xxx (xxx.xxx.xxx.xxx/xxx) to inside:xxx.xxx.xxx.xxx/xxx (xxx.xxx.xxx.xxx/xxxxx)"}
[2019-01-21T15:31:18,871][DEBUG][logstash.util.decorators ] filters/LogStash::Filters::Mutate: adding value to field {"field"=>"syslog_server_domain", "value"=>["number1.at"]}
[2019-01-21T15:31:18,861][DEBUG][logstash.util.decorators ] filters/LogStash::Filters::Mutate: adding tag {"tag"=>"number1"}

OR

[2019-01-21T15:31:42,778][DEBUG][logstash.pipeline        ] filter received {"event"=>{"path"=>"/LOGS/192.168.99.254.log", "message"=>"2019-01-21T15:25:31+01:00 ASA-VIE : %ASA-6-302014: Teardown TCP connection 935611524 for DMZ:xxx.xxx.xxx.xxx/xxx to inside:xxx.xxx.xxx.xxx/xxx duration 0:00:00 bytes 376 TCP FINs from DMZ", "host"=>"LOG10", "type"=>"number2", "@version"=>"1", "@timestamp"=>2019-01-21T14:31:42.017Z}}
[2019-01-21T15:31:42,778][DEBUG][logstash.pipeline        ] filter received {"event"=>{"path"=>"/LOGS/192.168.99.254.log", "message"=>"2019-01-21T15:25:31+01:00 ASA-VIE : %ASA-6-302013: Built inbound TCP connection 935611525 for outside:xxx.xxx.xxx.xxx/xxx (xxx.xxx.xxx.xxx/xxx) to DMZ:xxx.xxx.xxx.xxx/xxx/443 (xxx.xxx.xxx.xxx/xxx/443)", "host"=>"LOG10", "type"=>"number2", "@version"=>"1", "@timestamp"=>2019-01-21T14:31:42.017Z}}
[2019-01-21T15:31:42,778][DEBUG][logstash.filters.grok    ] Event now:  {:event=>#<LogStash::Event:0x602398de>}
[2019-01-21T15:31:42,779][DEBUG][logstash.util.decorators ] filters/LogStash::Filters::GeoIP: adding value to field {"field"=>"[geoip][coordinates]", "value"=>["%{[geoip][longitude]}", "%{[geoip][latitude]}"]}
[2019-01-21T15:31:42,779][DEBUG][logstash.util.decorators ] filters/LogStash::Filters::GeoIP: adding value to field {"field"=>"[geoip][coordinates]", "value"=>["%{[geoip][longitude]}", "%{[geoip][latitude]}"]}
[2019-01-21T15:31:42,779][DEBUG][logstash.util.decorators ] filters/LogStash::Filters::Mutate: adding tag {"tag"=>"cisco-number2"}

So it seems that my Logstash config still works (why wouldn't it?) and the lines are being processed.

But no index is created. Any ideas what I can try to verify this?

Regards
Wilhelm

Now I have enabled debug logging in Elasticsearch, but I still have no clue what happens after Logstash reads those lines, or rather why no index gets created.
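
In case it helps: this is roughly how debug logging can be switched on at runtime via the cluster settings API (a sketch; it can also be set permanently in log4j2.properties):

curl -XPUT -H 'Content-Type: application/json' '192.168.100.92:9200/_cluster/settings' -d '{ "transient" : { "logger.org.elasticsearch" : "debug" } }'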

And that's the current status of my cluster:

{
  "cluster_name" : "VOR-ElasticStack",
  "status" : "red",
  "timed_out" : true,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

Also no unassigned shards:

[root@LOG10 tmp]# curl -XGET -H 'Content-Type: application/json'  http://192.168.100.92:9200/_cat/shards | grep UNASSIGNED
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                             Dload  Upload   Total   Spent    Left  Speed
100 25098  100 25098    0     0  27859      0 --:--:-- --:--:-- --:--:-- 27855
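
I also listed the logstash indices sorted by name, to confirm that the newest one really is from 2019-01-19 (a sketch; s=index is the _cat sort parameter):

curl -s '192.168.100.92:9200/_cat/indices/logstash-*?v&s=index' | tail -5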

You only have one node without any indices and it is still red? Have you by any chance configured it to not be master-eligible? All clusters need to have a master node.

Hello Christian,

thanks for your response.

I have only one node with about 240 indices.

logstash-cisco-asa-2019.01.08 4 p STARTED  6036221    2.1gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.08 1 p STARTED  6030252    2.1gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.08 2 p STARTED  6034864    2.1gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.08 3 p STARTED  6033081    2.1gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.08 0 p STARTED  6029945    2.1gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2018.12.24 1 p STARTED  3804235      1gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2018.12.24 4 p STARTED  3802474      1gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2018.12.24 2 p STARTED  3804859      1gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2018.12.24 3 p STARTED  3802819      1gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2018.12.24 0 p STARTED  3804658      1gb 192.168.100.92 LOG10
logstash-cisco-asa-2018.12.30 1 p STARTED  4947539    1.6gb 192.168.100.92 LOG10
logstash-cisco-asa-2018.12.30 4 p STARTED  4950514    1.7gb 192.168.100.92 LOG10
logstash-cisco-asa-2018.12.30 2 p STARTED  4949645    1.6gb 192.168.100.92 LOG10
logstash-cisco-asa-2018.12.30 3 p STARTED  4945562    1.6gb 192.168.100.92 LOG10
logstash-cisco-asa-2018.12.30 0 p STARTED  4948954    1.6gb 192.168.100.92 LOG10
logstash-cisco-asa-2018.12.25 1 p STARTED  4933386    1.7gb 192.168.100.92 LOG10
logstash-cisco-asa-2018.12.25 4 p STARTED  4936090    1.7gb 192.168.100.92 LOG10
logstash-cisco-asa-2018.12.25 2 p STARTED  4932203    1.7gb 192.168.100.92 LOG10
logstash-cisco-asa-2018.12.25 3 p STARTED  4935771    1.7gb 192.168.100.92 LOG10
logstash-cisco-asa-2018.12.25 0 p STARTED  4937725    1.7gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2019.01.13 1 p STARTED  4246955    1.1gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2019.01.13 4 p STARTED  4246539    1.1gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2019.01.13 2 p STARTED  4246747    1.1gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2019.01.13 3 p STARTED  4246868    1.1gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2019.01.13 0 p STARTED  4246759    1.1gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.14 4 p STARTED  4267015    1.5gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.14 1 p STARTED  4264693    1.5gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.14 2 p STARTED  4265229    1.5gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.14 3 p STARTED  4264310    1.5gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.14 0 p STARTED  4264352    1.5gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2019.01.18 4 p STARTED  4668361    1.2gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2019.01.18 1 p STARTED  4669711    1.2gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2019.01.18 2 p STARTED  4668782    1.2gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2019.01.18 3 p STARTED  4669045    1.2gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2019.01.18 0 p STARTED  4670150    1.2gb 192.168.100.92 LOG10
.monitoring-kibana-6-2019.01.16   0 p STARTED    11320    2.2mb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.17 4 p STARTED  6208960    2.2gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.17 1 p STARTED  6214283    2.2gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.17 2 p STARTED  6212543    2.2gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.17 3 p STARTED  6211887    2.2gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.17 0 p STARTED  6207594    2.2gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2018.12.27 1 p STARTED  4544440    1.1gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2018.12.27 4 p STARTED  4546514    1.2gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2018.12.27 2 p STARTED  4543836    1.2gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2018.12.27 3 p STARTED  4543216    1.2gb 192.168.100.92 LOG10
logstash-cisco-asa-cisco-asa-2-2018.12.27 0 p STARTED  4542274    1.2gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.09 1 p STARTED  6048058    2.1gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.09 4 p STARTED  6048491    2.1gb 192.168.100.92 LOG10
logstash-cisco-asa-2019.01.09 2 p STARTED  6046509    2.1gb 192.168.100.92 LOG10

All those indices worked very well until I tried an additional log type and imported a couple of years of old source log files.
After that my Elasticsearch instance stopped because it had 23,000 active shards.
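
(A quick way to count the shards, in case someone wants to check — a sketch:)

curl -s '192.168.100.92:9200/_cat/shards' | wc -l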

After I noticed this, I manually deleted all indices of the new log type and freed up my Elasticsearch instance.
Directly after that I changed the Logstash template so that any new index gets only one shard and no replica.

Since my Elasticsearch instance stopped because of the 23,000 shards, no new Logstash index has been created.

Also, it seems weird that the health output above shows no active primary shards at all.

23,000 shards per node is far too many. Please read this blog post before you start indexing into the cluster again, and change your sharding strategy.

Dear Christian,

I know that information. As I wrote above, when I noticed this I deleted all indices which were created by the new log type...

I have been trying to get this back up and running for two days.

I've read your blog post, but unfortunately I could not find a solution for my situation.

What is in the Elasticsearch logs? As this can be a lot, it might be good to create a gist and link to it here rather than paste it all.

Well, I couldn't find anything in them, especially nothing about why no index has been created.

I would do that but what is a gist?

Now I have restarted Elasticsearch and Logstash and also created new log files for better investigation.
I have uploaded the logs to my private cloud platform:

Elasticsearch logs, about 27 MB
Logstash logs, about 700 MB

I also tried to create a test index manually:

curl -XPUT -H 'Content-Type: application/json' '192.168.100.92:9200/aa_test_index?pretty' -d '{"settings" : {"index" : {"number_of_shards" : 1, "number_of_replicas" : 0 }}}'

which works, so Elasticsearch itself has no problem creating an index.
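
To go one step further, one could also index a test document into that index to verify the whole write path (a sketch; _doc is the type name recommended in Elasticsearch 6.x):

curl -XPOST -H 'Content-Type: application/json' '192.168.100.92:9200/aa_test_index/_doc?pretty' -d '{ "message" : "write-path test" }'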
It seems that Logstash simply does not send the data to Elasticsearch.
Maybe this helps.
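
My next idea is to temporarily add a stdout output next to the elasticsearch output, to see whether events reach the output stage at all (a sketch of the output section; the hosts and index values are guesses based on my setup above, not my exact config):

output {
  elasticsearch {
    hosts => ["192.168.100.92:9200"]
    index => "logstash-cisco-asa-%{+YYYY.MM.dd}"
  }
  # temporary: print every event that reaches the output stage
  stdout { codec => rubydebug }
}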

It turns out the Elasticsearch server is actually in green state; the red status earlier came from a wrong query:

[root@LOG10 ~]# curl -XGET '192.168.100.92:9200/_cluster/health?pretty'
{
  "cluster_name" : "VOR-ElasticStack",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 290,
  "active_shards" : 290,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

But still no new Logstash shards. :sob:
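
Next I will grep the Logstash log for errors from the elasticsearch output (a sketch; the log path is the default for package installs and may differ):

grep -iE 'error|retry|elasticsearch' /var/log/logstash/logstash-plain.log | tail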

It is solved.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.