Logstash does not create indices in Elasticsearch

Hello, I'm having trouble: Logstash does not create indices in Elasticsearch.
Here is my conf file.

-- logstash.conf --

input {
  file {
    path => "/home/wonki/access_log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    ignore_older => 0
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
}

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
  }
  stdout {
    codec => rubydebug
  }
}

When I run the command
/bin/logstash -f /etc/logstash/conf.d/logstash.conf
I can see the stdout output correctly, but when I check Elasticsearch with
curl -X GET 'localhost:9200/_cat/indices?v'
there are no indices, and nothing shows up in Kibana either.
What should I do to get indices into Elasticsearch?
I added sincedb_path to clear the sincedb, but it did not help.
Please help me.

What happens if you remove the ignore_older option?

Thank you for your comment.
Nothing changed; there are still no indices in Elasticsearch.
Actually, the ignore_older option wasn't there at first; I added it while trying to solve this problem.

Do you see any data coming out via stdout and the rubydebug codec? Do you see any errors when you start it in the foreground? Is there anything in the Elasticsearch logs indicating a problem?
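
For example, with a package-based install you can follow the log while Logstash is running (the path below is an assumption for a default .deb/.rpm install; adjust it to your setup):

# Follow the Elasticsearch log for indexing errors
tail -f /var/log/elasticsearch/elasticsearch.log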

Here is a sample of my example Apache data.

54.210.20.202 - - [30/Apr/2017:04:33:58 +0000] "POST /wp-cron.php?doing_wp_cron=1493526837.9989309310913085937500 HTTP/1.1" 200 - "http://sundog-soft.com/wp-cron.php?doing_wp_cron=1493526837.9989309310913085937500" "WordPress/4.7.4; http://sundog-soft.com"
217.118.90.49 - - [30/Apr/2017:04:33:57 +0000] "GET /wp-login.php HTTP/1.1" 200 5226 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0"
54.210.20.202 - - [30/Apr/2017:04:33:59 +0000] "POST /wp-cron.php?doing_wp_cron=1493526839.5685200691223144531250 HTTP/1.1" 200 - "http://sundog-soft.com/wp-cron.php?doing_wp_cron=1493526839.5685200691223144531250" "WordPress/4.7.4; http://sundog-soft.com"
217.118.90.49 - - [30/Apr/2017:04:33:58 +0000] "POST /wp-login.php HTTP/1.1" 200 6182 "http://sundog-soft.com/wp-login.php" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0"
109.163.234.2 - - [30/Apr/2017:04:34:09 +0000] "GET / HTTP/1.1" 200 20503 "-" "Mozilla/5.0 (Windows NT 5.1; rv:7.0.1) Gecko/20100101 Firefox/7.0.1"
54.210.20.202 - - [30/Apr/2017:04:34:11 +0000] "POST /wp-cron.php?doing_wp_cron=1493526851.0895419120788574218750 HTTP/1.1" 200 - "http://sundog-soft.com/wp-cron.php?doing_wp_cron=1493526851.0895419120788574218750" "WordPress/4.7.4; http://sundog-soft.com"

And here is a sample of the data coming out via stdout when I ran Logstash.

{
       "response" => "304",
       "referrer" => "\"-\"",
    "httpversion" => "1.1",
           "host" => "chef_node1",
        "request" => "/feed/",
     "@timestamp" => 2017-04-30T12:25:36.000Z,
           "auth" => "-",
           "path" => "/home/wonki/access_log",
           "verb" => "GET",
        "message" => "8.29.198.25 - - [30/Apr/2017:12:25:36 +0000] \"GET /feed/ HTTP/1.1\" 304 - \"-\" \"Feedly/1.0 (+http://www.feedly.com/fetcher.html; like FeedFetcher-Google)\"",
       "@version" => "1",
          "ident" => "-",
          "agent" => "\"Feedly/1.0 (+http://www.feedly.com/fetcher.html; like FeedFetcher-Google)\"",
      "timestamp" => "30/Apr/2017:12:25:36 +0000",
       "clientip" => "8.29.198.25"
}

[WARN ] 2018-08-20 15:18:12.420 [Ruby-0-Thread-7@[main]>worker0: :1] elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2017.04.30", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x59f3dc0d>], :response=>{"index"=>{"_index"=>"logstash-2017.04.30", "_type"=>"doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Failed to parse mapping [default]: [include_in_all] is not allowed for indices created on or after version 6.0.0 as [_all] is deprecated. As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field.", "caused_by"=>{"type"=>"mapper_parsing_exception", "reason"=>"[include_in_all] is not allowed for indices created on or after version 6.0.0 as [_all] is deprecated. As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field."}}}}}

It seems you have an invalid or out-of-date index template that prevents the data from being indexed. Correct that and you should be fine.
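
For example, you can list the installed templates, inspect the stale one, and then delete it so Logstash installs a current one on its next start. A minimal sketch, assuming the stale template is the default one Logstash manages, named "logstash" (check the listing first, since yours may be named differently):

# List installed index templates to find the stale one
curl -X GET 'localhost:9200/_cat/templates?v'

# Inspect it to confirm it still contains include_in_all
curl -X GET 'localhost:9200/_template/logstash?pretty'

# Delete it; Logstash (with the default manage_template => true)
# will install a current template the next time it starts
curl -X DELETE 'localhost:9200/_template/logstash'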

Thanks for your advice, Christian.
But can you tell me how to correct the index template?
I think the COMBINEDAPACHELOG filter creates the index template, so does that mean using COMBINEDAPACHELOG is the cause of the out-of-date index template problem?
If I use another filter, will the problem be solved?

I'm really sorry to bother you, Christian.

Well, after adding index => "apachelogs" to the Logstash output configuration, the data loaded into Elasticsearch successfully and I can also find the index in Kibana. But I am still wondering why the above issue occurred in detail, and how I can update the index template.
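
For reference, the output section that worked looks like this (the index name "apachelogs" is taken from the post above):

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "apachelogs"
  }
  stdout {
    codec => rubydebug
  }
}

Note that this works around the template problem rather than fixing it: the stale "logstash" template presumably only applies to indices matching its logstash-* pattern, so writing to a differently named index sidesteps the broken [include_in_all] mapping. Deleting or updating the template, as described above, is the proper fix.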
