Logstash 6.6.0 sends logs to ES only a few hours a day

I have several Elasticsearch clusters in several Kubernetes clusters.

Several VMs are sending logs through Filebeat to Logstash.
Some logs show a very weird pattern.

The following graph shows that the logs were sent to ES at 5:00 pm for about one hour.

It seems like Filebeat publishes the logs correctly, but the logs show up for only about one hour.
The Filebeat version is 6.2, the Logstash version is 6.6, and ES is 6.6. The log file is renamed every day.

In Logstash I am getting a lot of warnings. I am not sure whether the issue is related to the following warnings.

[2019-04-16T04:25:23,598][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-2019.04.16", :_type=>"doc", :routing=>nil}, #LogStash::Event:0x3c24012d], :response=>{"index"=>{"_index"=>"filebeat-2019.04.16", "_type"=>"doc", "_id"=>"xxxxxxxxx", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:241"}}}}}

Hi @Jaepyoung_Kim,

are you 100% sure you have Filebeat 6.2 everywhere? I think it was in Filebeat 6.3 that a new namespace for host was introduced to be compatible with other beats.

This caused me some gray hairs :slight_smile:

If this is the root cause, then either upgrade Filebeat everywhere to at least 6.3 (although this fix might only take effect when the next new index is created).

Or add the Filebeat version to the Elasticsearch index name:

index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
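For context, here is a minimal sketch of how that setting might sit in the Logstash elasticsearch output (the hosts value is a placeholder, not something from this thread):

    output {
      elasticsearch {
        # Placeholder address; point this at your own Elasticsearch cluster
        hosts => ["http://localhost:9200"]
        # Including the beat name and version keeps 6.2 and 6.3+ documents in separate indices
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      }
    }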

Or do some Logstash filtering (which is what I did but I'm trying to drop it now)

  #############
  # Dealing with the host namespace change in Filebeat 6.3.0
  # Should be removed in favour of the new version for compatibility with Metricbeat
  if [@metadata][beat] {
    mutate {
      remove_field => [ "[host]" ]
    }
    mutate {
      add_field => {
        "host" => "%{[beat][hostname]}"
      }
    }
  }

This is to prevent the field type collision that causes Logstash to drop documents because Elasticsearch rejects them.

This is a per-index limitation. The effect of a fix might not be visible before the next daily index is created. You can get around that by making a temporary new daily index, as sketched below.
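As an illustration of that workaround (the "-fixed" suffix is just a placeholder I made up), temporarily changing the index name in the Logstash output sends documents into a brand-new daily index with a fresh mapping:

    index => "%{[@metadata][beat]}-%{[@metadata][version]}-fixed-%{+YYYY.MM.dd}"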

Thanks a lot for your answer. Actually, there are multiple Filebeat versions in the system, and different systems are using the same index, so there are some type conflicts between systems. First, I will work with the team to remove these issues by upgrading the Filebeat versions.

@A_B I thought it would be fixed by adding the Filebeat version number, but it is not fixed. The warning issue is fixed, and I have no other errors in Logstash, but not all log files came through. Is it possible that this kind of error happens if the log file is too big (about 1~2 GB)?

Hi @Jaepyoung_Kim,

if you get no warnings from Logstash then the problem I assumed you were having should be fixed. Large log files should not be a problem. Sometimes it takes a while for the logs to show up if there is a lot of data to index. Indexing starts from the oldest logs.

Do you see indexes created in Elasticsearch? Are there new documents being indexed? You can see that in Kibana > Monitoring. Order the indices by "Index Rate" and you will see which indices are getting new documents. If there are no logs coming into Elasticsearch and Logstash is not giving any errors, then I would look at Filebeat.

Thanks! @A_B. I checked the Logstash monitoring and no errors were found. The numbers of events received and emitted are the same.

As you suggested, I need to look at the Filebeat side.

Thanks,
Jae

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.