Re-index previous heartbeat data in ES through logstash processing

Hi all,
I know the error described in this post has already been discussed, but I have not been able to find a solution yet.

What I have been trying to accomplish is to have Logstash 7.6.2 read a bunch of JSON documents generated by Heartbeat 7.6.2, previously indexed by Elasticsearch 7.1.1, and re-index them back into Elasticsearch after some processing/filtering in the pipeline, which also generates other data structures out of them.

To do so, my plan is the following:

  1. (DONE) dump the content of the current Elasticsearch index storing the Heartbeat documents and write all documents to a text file. Each line of this file is a JSON document identical to what was originally sent by Heartbeat to Elasticsearch

  2. (DONE) delete the above Elasticsearch index so that it can be re-created by Logstash. The index template, however, is still there as originally created by Heartbeat, and it declares an index pattern and ILM settings like the following:

  "index_patterns": [ ... ],
  "lifecycle": {
    "name": "heartbeat",
    "rollover_alias": "heartbeat-7.6.2"
  }
  3. (ERROR) configure Logstash to parse the text file generated at step 1 and (re)send the Heartbeat documents to the same Elasticsearch index that I deleted.
    This is the output section I am using:
elasticsearch {
     ilm_rollover_alias => "heartbeat-7.6.2"
     ilm_policy => "heartbeat"
     hosts => ["http://es:9200"]
}
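For reference, the complete pipeline I am working from looks roughly like this (the input path is just a placeholder for the dump file produced at step 1):

```
input {
  file {
    path => "/tmp/heartbeat-dump.json"   # placeholder path for the step-1 dump
    start_position => "beginning"
    sincedb_path => "/dev/null"          # always re-read the whole file
    codec => "json"                      # each line is one Heartbeat JSON document
  }
}

output {
  elasticsearch {
    hosts => ["http://es:9200"]
    ilm_rollover_alias => "heartbeat-7.6.2"
    ilm_policy => "heartbeat"
  }
}
```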

When I run Logstash, an index named heartbeat-7.6.2-2020.07.10-000001 is created on Elasticsearch, but no documents are indexed into it, due to the already discussed error:

[WARN ] 2020-07-10 17:53:21.605 [[main]>worker1] elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"heartbeat-7.6.2", :routing=>nil, :_type=>"_doc"}, #&lt;LogStash::Event:0x3f883eed&gt;], :response=>{"index"=>{"_index"=>"heartbeat-7.6.2-2020.07.10-000001", "_type"=>"_doc", "_id"=>"bONuOXMBbbxZdk4rXwa1", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [host] tried to parse field [host] as object, but found a concrete value"}}}}

On the other hand, if I change the Logstash output configuration to

 elasticsearch {
     hosts => ["http://es:9200"]
     index => "heartbeat-7.6.2"
 }

the above error no longer shows up and all documents get fed to the "heartbeat-7.6.2" index on Elasticsearch. However, in this case I guess that no ILM would be applied, and so no index rollover would ever take place on ES.
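For anyone wanting to double-check this: with the ILM options set, the bootstrapped index should carry the lifecycle settings and the write alias, roughly like the sketch below (as returned by a GET on the index); with a plain index => name they should be absent unless a matching template adds them:

```json
{
  "heartbeat-7.6.2-2020.07.10-000001": {
    "aliases": {
      "heartbeat-7.6.2": { "is_write_index": true }
    },
    "settings": {
      "index": {
        "lifecycle": { "name": "heartbeat", "rollover_alias": "heartbeat-7.6.2" }
      }
    }
  }
}
```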

I'd appreciate any hint on this: what is wrong here?

I suggest you read this post and then this post.

Thank you @Badger, in the meantime I realized that it was actually easier than it first appeared.

I did not fully realize that, after passing through Logstash, each document gets extended with the "host" and "path" fields, which store respectively the hostname of the machine where Logstash is running and the path of the input file being read.
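To illustrate the clash (the hostnames below are made up): the original Heartbeat documents map "host" as an object, while the Logstash file input sets "host" to a plain string, and the two cannot coexist in the same mapping:

```
"host": { "name": "my-monitor-01" }    <- object, as in the original Heartbeat documents
"host": "logstash-box"                 <- concrete value, as added by the Logstash file input
```

This is exactly what the "tried to parse field [host] as object, but found a concrete value" error complains about.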

The mentioned error is avoided simply by removing those fields, thus restoring the exact document structure as read from the input file!

This is what I added to the pipeline filter stage:

mutate { remove_field => [ "path", "host" ] }
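In context, the whole filter stage looks like this (the json filter line is only needed if the file input does not already use a json codec; "message" is the default field holding the raw line):

```
filter {
  # parse each line of the dump into its original fields (skip if the input uses codec => "json")
  json { source => "message" remove_field => [ "message" ] }
  # drop the fields added by the file input, restoring the original Heartbeat document shape
  mutate { remove_field => [ "path", "host" ] }
}
```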
