It seems that Filebeat publishes logs correctly, but the logs are shown for only one hour.
Filebeat version is 6.2, Logstash version is 6.6, ES is 6.6. The log file is renamed every day.
In Logstash I am getting a lot of warnings. I am not sure whether the issue is related to the following warnings.
[2019-04-16T04:25:23,598][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-2019.04.16", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x3c24012d>], :response=>{"index"=>{"_index"=>"filebeat-2019.04.16", "_type"=>"doc", "_id"=>"xxxxxxxxx", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:241"}}}}}
Are you 100% sure you have Filebeat 6.2 everywhere? I think it was in Filebeat 6.3 that a new namespace for host was introduced to be compatible with the other Beats.
This caused me some gray hairs.
If this is the root cause, then either upgrade Filebeat everywhere to at least 6.3 (although this fix might only take effect when the next new index is created),
or add the Filebeat version to the Elasticsearch index name:
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
Or do some Logstash filtering (which is what I did, but I am trying to drop it now):
#############
# Dealing with the host namespace change in Filebeat 6.3.0
# Should be removed in favour of the new version for compatibility with Metricbeat
if [@metadata][beat] {
  mutate {
    remove_field => [ "[host]" ]
  }
  mutate {
    add_field => {
      "host" => "%{[beat][hostname]}"
    }
  }
}
This is to prevent a field type collision, which causes Logstash to drop documents because Elasticsearch rejects them.
This is a per-index limitation, so implementing a fix might not be visible before the next daily index is created. You can get around that by creating a temporary new daily index.
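If you already use the versioned index name from the output sketch above, one way to do that is to temporarily change just the index option, for example by appending a suffix (the "-fix" suffix here is only an illustrative placeholder):

index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}-fix"

Once documents with the corrected mapping are flowing, you can drop the suffix again when the next daily index is created.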
Thanks a lot for your answer. Actually, there are multiple Filebeat versions in the system, and different systems are using the same index, so there are some type conflicts between systems. First, I will work with the team on removing these issues by upgrading the Filebeat versions.
@A_B I thought it was fixed by adding the Filebeat version number, but it is not. The warning issue is fixed, and I don't have any other errors in Logstash, but not all log files came through. Is it possible that this kind of error happens if the log file is too big (about 1~2 GB)?
If you get no warnings from Logstash, then the problem I assumed you were having should be fixed. Large log files should not be a problem. Sometimes it takes some time for the logs to show up if there is a lot of data to index; indexing starts from the oldest logs.
Do you see indices created in Elasticsearch? Are there new documents being indexed? You can see that in Kibana > Monitoring. Order the indices by "Index Rate" and you will see which indices are getting new documents. If no logs are coming into Elasticsearch and Logstash is not giving any errors, then I would look at Filebeat.
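To narrow down whether events are reaching Logstash at all, a quick check is to temporarily add a stdout output next to your existing elasticsearch output (a debugging sketch only, not something to leave in place):

output {
  # keep your existing elasticsearch output here, and add:

  # Temporary debug output: prints every event Logstash receives,
  # which tells you whether the gap is before or after Logstash
  stdout {
    codec => rubydebug
  }
}

If nothing shows up there either, the problem is on the Filebeat side.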