Filebeat - not creating an index in Elasticsearch

Hello!

I'm trying to configure Filebeat. My setup is Filebeat > Logstash > Elasticsearch > Kibana.
However, something is wrong and I can't figure out what: there are errors in the log and the index is not created in Elasticsearch.


Elasticsearch index list

curl -XGET 'http://servername:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
green open %{[@metadata][beat]}-2015.12.11 5 0 2138573 0 535.3mb 535.3mb
yellow open %{[@metadata][beat]}-2015.12.02 5 1 6844 0 536.5kb 536.5kb
yellow open %{[@metadata][beat]}-2015.12.15 5 1 1959835 0 493mb 493mb
yellow open %{[@metadata][beat]}-2015.12.14 5 1 3935630 0 978mb 978mb
yellow open %{[@metadata][beat]}-2015.12.28 5 1 9078 0 730.3kb 730.3kb
yellow open %{[@metadata][beat]}-2015.12.12 5 1 4829859 0 1.1gb 1.1gb
yellow open %{[@metadata][beat]}-2015.11.27 5 1 127107 0 37.4mb 37.4mb
yellow open %{[@metadata][beat]}-2015.12.01 5 1 6925 0 581.3kb 581.3kb
yellow open topbeat-2016.01.22 5 1 85458 0 19.6mb 19.6mb
yellow open %{[@metadata][beat]}-2015.12.29 5 1 4578 0 403.5kb 403.5kb
yellow open %{[@metadata][beat]}-2015.12.13 5 1 4782712 0 1.1gb 1.1gb
green open .kibana 1 0 111 5 97.1kb 97.1kb
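Those `%{[@metadata][beat]}-*` index names are the literal placeholder string, which means the beats input never resolved `[@metadata][beat]`. Once the underlying problem is fixed, the bogus indices can be deleted, for example as below (a sketch assuming the same `servername` host; the `%`, `{`, `[`, `]` and `}` characters in the index name must be URL-encoded in the request path):

```shell
# Delete the indices whose name is the literal, unresolved placeholder.
# Encoding: % -> %25, { -> %7B, [ -> %5B, ] -> %5D, } -> %7D
curl -XDELETE 'http://servername:9200/%25%7B%5B@metadata%5D%5Bbeat%5D%7D-*'

# Verify they are gone
curl -XGET 'http://servername:9200/_cat/indices?v'
```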


Logstash configuration file

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "servername:9200"
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
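The `%{[@metadata][beat]}` index setting only works if the beats input actually forwards the `@metadata` fields. A quick way to check what Logstash receives is a temporary stdout output (a debugging sketch, not part of the original config):

```
output {
  stdout {
    # rubydebug with metadata => true also prints @metadata,
    # so you can confirm [@metadata][beat] is set to "filebeat"
    codec => rubydebug { metadata => true }
  }
}
```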


Filebeat configuration file

filebeat:
  prospectors:
    -
      paths:
        - "/var/log/*.log"

output:
  logstash:
    hosts: ["servername:5044"]
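Since the Filebeat file is YAML, indentation matters. You can sanity-check it before starting the service (a sketch; the config path is an assumption, adjust for your install):

```shell
# Validate the configuration file without starting Filebeat
filebeat -configtest -c /etc/filebeat/filebeat.yml

# Run in the foreground with debug output to watch the connection to Logstash
filebeat -e -d "*" -c /etc/filebeat/filebeat.yml
```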


Logstash log

{:timestamp=>"2016-01-22T19:30:16.070000+0100", :message=>"Beats input: unhandled exception", :exception=>#<TypeError: The field '@timestamp' must be a (LogStash::Timestamp, not a String (2016-01-22T18:19:11.050Z)>, :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.0.0-java/lib/logstash/event.rb:138:in `[]='", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/logstash/inputs/beats.rb:138:in `create_event'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-


Filebeat log

2016/01/22 18:41:14.172168 prospector.go:186: DBG Start next scan
2016/01/22 18:41:14.172297 prospector.go:207: DBG scan path /var/log/*.log
2016/01/22 18:41:14.172826 prospector.go:219: DBG Check file for harvesting: /var/log/net-snmpd.log
2016/01/22 18:41:14.172859 prospector.go:341: DBG Update existing file for harvesting: /var/log/net-snmpd.log
2016/01/22 18:41:14.172878 prospector.go:383: DBG Not harvesting, file didn't change: /var/log/net-snmpd.log
2016/01/22 18:41:14.172898 prospector.go:219: DBG Check file for harvesting: /var/log/pbl.log
2016/01/22 18:41:14.172919 prospector.go:341: DBG Update existing file for harvesting: /var/log/pbl.log
2016/01/22 18:41:14.172932 prospector.go:383: DBG Not harvesting, file didn't change: /var/log/pbl.log

Does anybody have an idea what is wrong?

Thanks in advance.

Judging by the topic "Issue with filebeat & logstash 'Beats input: unhandled exception'", it seems you should update your beats input plugin.
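On Logstash 2.x the plugin can be checked and updated from the install directory (a sketch assuming the `/opt/logstash` path from the log above):

```shell
# Show the currently installed version of the beats input plugin
/opt/logstash/bin/plugin list --verbose logstash-input-beats

# Update it, then restart Logstash
/opt/logstash/bin/plugin update logstash-input-beats
```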

I've updated Logstash. Now I have these versions:

logstash-input-beats (2.0.3)
logstash 2.1.1
elasticsearch: Version: 2.0.0, Build: de54438/2015-10-22T08:09:48Z, JVM: 1.7.0_80
4.2.0

The index is created now. However (I don't know if I should open another topic), there are lots of duplicate documents. For example, the Topbeat index has more than 1 million documents, but it shouldn't.

health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open topbeat-2016.01.26 5 1 1046432 0 263.5mb 263.5mb
yellow open filebeat-2016.01.26 5 1 37272 0 7.1mb 7.1mb

I had lots of different filter files in the same config directory. To fix the duplicate documents I added conditionals to the filter files, for example:

Filter 1:

if [type] == "systemout_tmc" {
  .....
}

Filter 2:

if [type] == "systemout_gato" {
  .....
}
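This works because Logstash concatenates every file in the config directory into a single pipeline, so unconditional filter (and output) blocks apply to every event regardless of which file they live in. A minimal sketch of the resulting layout (the file names and the grok/mutate bodies are illustrative, not from the original config):

```
# 10-filter-tmc.conf
filter {
  if [type] == "systemout_tmc" {
    grok { match => { "message" => "%{GREEDYDATA:msg}" } }
  }
}

# 20-filter-gato.conf
filter {
  if [type] == "systemout_gato" {
    mutate { add_tag => ["gato"] }
  }
}

# 90-output.conf - a single output block shared by all types
output {
  elasticsearch {
    hosts => "servername:9200"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
```

With a single output block and per-type filters, each event is indexed exactly once.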