Yes, correct, I have made that change. Now the indices "Access" and "bpm" are not being created in Elasticsearch.
Should I have these in my filebeat.yml config?
Could you try writing the events to a file on the Filebeat side and see if there are also duplicates? If not, something is probably going wrong on the LS side as part of your script. Also check your Filebeat logs for any errors related to sending.
Since the event appears in the file only once, I think either something is going wrong during transmission or the LS script has an issue. You could try enabling debug output in LS to see if the event appears there twice.
How did you install Filebeat? You can also check in your config file where the log file should be.
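For the first test, Filebeat's file output can be used to dump events locally instead of sending them to Logstash, so you can compare the two sides. A minimal sketch for filebeat.yml (the path and filename here are just illustrative; adjust them to your environment):

output:
  file:
    # illustrative path; Filebeat writes one JSON event per line here
    path: "/tmp/filebeat"
    filename: filebeat_events

You can then count occurrences of a known log line in the dump file and compare with what arrives in LS.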
Yes, I enabled "stdout { codec => rubydebug }", but there is no duplication.
There is no discrepancy in the Filebeat logs either. I am running Filebeat 1.3.1; is updating necessary?
This is my logstash.conf file; I have used conditionals to differentiate the events.
input {
  beats {
    port => 5044
  }
}

filter {
  if "access_logs" in [tags] {
    grok {
      match => {
        "message" => [
          "%{IPORHOST:x_forwarded_for} %{IPORHOST:load_balancer} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb}%{SPACE} \/%{WORD:application}%{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:%{WORD:ServerHost}:%{WORD:ServerPort})",
          "%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} \/%{WORD:application}%{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:%{WORD:ServerHost}:%{WORD:ServerPort})",
          "%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} \/%{WORD:application}%{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-)"
        ]
      }
    }
  }
  if "BPM" in [tags] {
    grok {
      match => {
        "message" => [
          "%{SYSLOG5424SD}%{SPACE}%{BASE16NUM:ThreadID}%{SPACE}%{WORD:ShortName}%{SPACE}%{WORD:EventType}%{SPACE}%{WORD:MessageIdentifier}:%{SPACE}%{GREEDYDATA:event}",
          "%{SYSLOG5424SD}%{SPACE}%{WORD:ThreadID}%{SPACE}%{WORD:Logger}%{SPACE}%{WORD:MessageType}%{SPACE}%{GREEDYDATA:event}"
        ]
      }
    }
  }
  if "syslog" in [tags] {
    grok {
      match => {
        "message" => [
          "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}"
        ]
      }
    }
  }
}

output {
  stdout { codec => rubydebug }
  if [type] == "access_log" {
    elasticsearch {
      hosts => ["10.190.188.174:9200"]
      index => "access-%{+YYYY.MM.dd}"
    }
  }
  else if [type] == "BPM" {
    elasticsearch {
      hosts => ["10.190.188.174:9200"]
      index => "bpm-%{+YYYY.MM.dd}"
    }
  }
  else if [type] == "syslog" {
    elasticsearch {
      hosts => ["10.190.188.174:9200"]
      index => "syslog-%{+YYYY.MM.dd}"
    }
  }
  # else if [type] == "JDBC" {
  #   elasticsearch {
  #     hosts => ["10.190.188.174:9200"]
  #     index => "chd-%{+YYYY.MM.dd}"
  #   }
  # }
}
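One thing worth double-checking in the config above: the filter blocks match on [tags] ("access_logs", "BPM", "syslog"), while the outputs match on [type] ("access_log", "BPM", "syslog"). If Filebeat only sets tags and not the document type, none of the output conditionals would ever match, which could explain the missing indices. A sketch of the output section keyed on the same tags the filters already use (illustrative only; verify against the fields your events actually carry in the rubydebug output):

output {
  stdout { codec => rubydebug }
  if "access_logs" in [tags] {
    elasticsearch {
      hosts => ["10.190.188.174:9200"]
      index => "access-%{+YYYY.MM.dd}"
    }
  }
  else if "BPM" in [tags] {
    elasticsearch {
      hosts => ["10.190.188.174:9200"]
      index => "bpm-%{+YYYY.MM.dd}"
    }
  }
  else if "syslog" in [tags] {
    elasticsearch {
      hosts => ["10.190.188.174:9200"]
      index => "syslog-%{+YYYY.MM.dd}"
    }
  }
}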
It will definitely not hurt to update to the most recent release of Filebeat and, if possible, the rest of the stack as well. Duplicates can happen if you have network issues, but in general I would expect this to be an edge case.
Could you try wiping your complete setup, setting it up again with the most recent version, and seeing if it still happens?