Logstash 5.1.2 -> old configuration cached?

Hi,

I have a weird issue with Logstash 5.1.2 on RHEL 7.

I rolled out a new Logstash filter for two logfiles. At first I made a mistake and deployed it: I accidentally cast a string to an integer. The string should have been kept as a string.

Then I stopped Logstash, corrected the filter, and started it again. After that I deleted all documents of these types.

But now the field ends up empty. In the Logstash logs I get the following exception:

[2018-02-05T15:28:44,110][WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"tux-prod-2018.02.05", :_type=>"vichandler_statistics", :_routing=>nil}, 2018-02-05T14:28:30.000Z LOGIPRODTUX11 %{message}], :response=>{"index"=>{"_index"=>"tux-prod-2018.02.05", "_type"=>"vichandler_statistics", "_id"=>"AWFmXn2YC4m7XMTfoDJV", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [serviceName]", "caused_by"=>{"type"=>"number_format_exception", "reason"=>"For input string: \"vic_handler\""}}}}}

In my config, grok just extracts the value; there is no longer any type conversion for the serviceName field.

If I check the mapping with GET tux-prod-2018.02.05/vichandler_statistics/_mapping, I get the following result for serviceName:

 "serviceName": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }

What can I do to fix it? An additional restart of Logstash did not help.
In my Elastic dev environment everything works fine, but not in production :frowning:

This seems like an Elasticsearch issue, not a Logstash one.

Namely, you cannot change the mapping of an already existing index, and it persists in the index even if you delete all of the relevant documents.
If you have dynamic mapping enabled, a field gets mapped to whatever type matches the first value Elasticsearch sees for it.

Your dev environment probably works because the first value it happened to receive was a string, so the correct mapping was created.
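
To illustrate with a minimal sketch (tux-demo is a made-up throwaway index), here is the same failure mode in miniature:

    # the first value is numeric, so serviceName is dynamically mapped as "long"
    PUT tux-demo/vichandler_statistics/1
    {
      "serviceName": 42
    }

    # any later string value is then rejected with the same
    # number_format_exception you see in your Logstash log
    PUT tux-demo/vichandler_statistics/2
    {
      "serviceName": "vic_handler"
    }

Once that first document has been indexed, no change to the Logstash filter can undo the mapping.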

In order to correct that, you need to delete the index and recreate it (supplying a mapping/template yourself is also a good idea to avoid future issues).
If you want to preserve the existing data, you can try to reindex it into another, temporary index first (either via the Reindex API or a Logstash pipeline).

We are on daily index rotation. So from tomorrow on it should work as expected, right?

Is there a way to delete the mapping of a particular type from an existing index?
And if I recreate the index, how can I copy / keep the data from all the other types within the index?

Yes, as long as you provide a template that maps that specific field to the appropriate type, or the first value ES receives for it happens to be of the right type. I'd suggest having a look at index templates.
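
For example, something along these lines (a sketch only; the template name is arbitrary and you would add your remaining fields; note that on 5.x the index pattern goes into the "template" key):

    PUT _template/tux-prod
    {
      "template": "tux-prod-*",
      "mappings": {
        "vichandler_statistics": {
          "properties": {
            "serviceName": {
              "type": "text",
              "fields": {
                "keyword": {
                  "type": "keyword",
                  "ignore_above": 256
                }
              }
            }
          }
        }
      }
    }

With that in place, every newly created tux-prod-* index (including tomorrow's daily index) maps serviceName as text, no matter which value arrives first.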

As for the mappings, no, you can't delete a certain type from an existing index or otherwise alter its existing mappings in any way. The only way to change a mapping is to delete and recreate the index.
If you want to retain your current data, you can:

  1. Reindex the current index into a temporary one (create the temporary index with the correct mapping first),
  2. Delete the old index,
  3. Recreate the index with the correct mapping,
  4. Reindex the data from the temporary index back into the proper one (see the sketch below).
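
A rough sketch of those steps via the Reindex API (the -tmp suffix is just an example; adjust the names to your setup):

    # step 1: copy the data out; create tux-prod-2018.02.05-tmp with the correct
    # mapping first, or let the template above apply to it on auto-creation
    POST _reindex
    {
      "source": { "index": "tux-prod-2018.02.05" },
      "dest": { "index": "tux-prod-2018.02.05-tmp" }
    }

    # steps 2 + 3: delete the index; with a matching template in place it is
    # recreated with the correct mapping on the next write
    DELETE tux-prod-2018.02.05

    # step 4: copy the data back
    POST _reindex
    {
      "source": { "index": "tux-prod-2018.02.05-tmp" },
      "dest": { "index": "tux-prod-2018.02.05" }
    }

Remember to delete the temporary index once you've verified the data is back in place.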

Thanks a lot.
