Running into problems moving from Logstash ingestion to Filebeat harvesting

I'm trying to move from ingesting log files with the Logstash 'file' input to harvesting them with Filebeat. The problem I'm running into is that once I have Filebeat and Logstash set up and restart the services, I get the following errors:

```
[2018-08-07T14:02:57,088][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"requestlogv2-2018.08.07", :_type=>"request", :_routing=>nil}, #<LogStash::Event:0x2b85e65d>], :response=>{"index"=>{"_index"=>"requestlogv2-2018.08.07", "_type"=>"request", "_id"=>"9kuOFWUBX_Kk-dHoXHkQ", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [host]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:85"}}}}}
```
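For reference, the new setup looks roughly like this. Paths, hosts, and the port are simplified placeholders, and the `document_type` is inferred from the `_type` in the error above rather than copied from my actual config:

```yaml
# filebeat.yml (simplified) -- replaces the old Logstash file input
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/myapp/request*.log    # placeholder path

output.logstash:
  hosts: ["logstash-host:5044"]
```

```
# Logstash pipeline (simplified) -- beats input instead of the old file input
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "requestlogv2-%{+YYYY.MM.dd}"
    document_type => "request"    # guessed from the _type in the error above
  }
}
```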

The only way I've found to get it working again is to delete the existing Elasticsearch index template(s) and indices and basically start from scratch: restart Elasticsearch, and once data is flowing back in, go into Kibana and reload the index template.
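Concretely, the reset amounts to something like the following. The template and index names are taken from the error above and may not match exactly, so check what actually exists first:

```sh
# list existing index templates (the name below is a guess; adjust to what this returns)
curl -s "localhost:9200/_cat/templates?v"

# delete the template and the daily indices so they are recreated with fresh mappings
curl -X DELETE "localhost:9200/_template/requestlogv2"
curl -X DELETE "localhost:9200/requestlogv2-*"

# then restart the services and refresh the index pattern in Kibana once data is flowing again
```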

That's tolerable in my lower environments, but doing it in production would mean losing my history, so I would really like to figure out what the problem is and fix it instead of blowing things away and starting from scratch.

TIA,
Bill

Update --

Further research has revealed that the issue is triggered by a field in the log file being too long for the corresponding field in the Elasticsearch index. The interesting thing, and what I need to figure out, is that everything worked fine when using the Logstash 'file' input; it was only when I switched to using Filebeat to harvest the log files that I started seeing the problem.
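In case it helps anyone narrowing down something similar: one way to compare how the failing field ends up mapped under the two pipelines is the field-mapping API. The field name here is taken from the error above, and the index names are just examples of one index written by the old setup and one written via Filebeat:

```sh
# compare the mapping of the 'host' field in an old index vs. a Filebeat-fed one
curl -s "localhost:9200/requestlogv2-2018.08.06/_mapping/field/host?pretty"
curl -s "localhost:9200/requestlogv2-2018.08.07/_mapping/field/host?pretty"
```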
