Error inserting data

Since the upgrade to 6.0.0, we see this error:

java.lang.IllegalArgumentException: Rejecting mapping update to [logstash-2017.11.22] as the final mapping would have more than 1 type: [logfile, syslog]

Is "type" een reserved field which can have only 1 value in an index?

The record that fails:

{"path":"/xxx/http_access.log","@timestamp":"2017-11-22T12:03:40.149Z",
"log_indexer":"yyy","log_source":"washttpaccess","@version":"1","host":"xxx","log_shipper":"xxx",
"message":"10.aa.bb.41 - [22/Nov/2017:13:03:39 +0100] HEAD /dddd HTTP/1.1 401 - 0 - -\r","type":"logfile","tags":["kafka"]}]}]

You should probably look at your index templates. Most likely there is a template that defines mappings for both types, which is no longer supported.


I'm seeing similar issues after our upgrade. It looks like our events stopped processing overnight (lining up with the creation of a new index). We see the following Elasticsearch error:

[ciscoswitch-2017.11.22][3] failed to execute bulk item (index) BulkShardRequest [[ciscoswitch-2017.11.22][3]] containing [index {[ciscoswitch-2017.11.22][doc][94r45F8BPUaikWKCqLiK], source[{"cisco_type":"LINK-3-UPDOWN","message_type_id":"187","description":"Interface","message":"<187>707219: Nov 22 12:23:42.202 CST: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/15, changed state to down","interface":"GigabitEthernet1/0/15","tags":["cisco-switch","cisco-switch-processed","30-filter-ciscoSwitch"],"cisco_time":"Nov 22 12:23:42.202","@timestamp":"2017-11-22T18:23:42.210Z","event_no":"707219","@version":"1","host":"192.168.52.4","eventTime":"2017-11-22T18:23:42.202Z","status":"changed state to down"}]}]
java.lang.IllegalArgumentException: Rejecting mapping update to [ciscoswitch-2017.11.22] as the final mapping would have more than 1 type: [cisco-switch, doc]

But nowhere in our config do we define the doc type. Also, the type "cisco-switch" has been shifted to a tag, yet it still looks like it is lingering from somewhere.
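For reference, the shift from type to tag is done with a mutate filter along these lines (a trimmed sketch, not our exact config; the tag name and the removal of the type field are the only relevant parts):

filter {
  mutate {
    add_tag      => [ "cisco-switch" ]   # carry the old custom type along as a tag
    remove_field => [ "type" ]           # stop sending a custom type so the output can fall back to its default
  }
}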

Is there a way to prevent the new doc type from being automatically added to events in v6?

I think that doc is the new default value when you don't explicitly define a type.
Is your data coming from logstash?

What does GET /_template return on your system?
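For reference, a quick way to check (assuming Elasticsearch is reachable on localhost:9200; adjust the host as needed):

curl -s 'http://localhost:9200/_template?pretty'

Any template whose mappings section defines a type name different from the one your documents are indexed with will trigger this kind of error once an index is created from it.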

Elastic support came up with this addition to the elasticsearch output block in the Logstash config:

document_type => doc

The setting only takes effect for newly created indices.

It solved my issue in the test environment; production is waiting for an index rollover.
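For context, a minimal sketch of where that setting sits in the elasticsearch output (the hosts and index pattern below are placeholders, not values from my actual config):

output {
  elasticsearch {
    hosts         => ["localhost:9200"]          # placeholder
    index         => "logstash-%{+YYYY.MM.dd}"   # placeholder index pattern
    document_type => "doc"                       # force a single mapping type per index on 6.x
  }
}

Since the type is fixed when the index is created, the setting only helps indices created after the change, hence the wait for the rollover.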

I encountered the same problem and tried your solution, but it seems to have no effect. Could you describe it in more detail, please? Thanks.

The issue with a few of our indices resolved itself when the index rolled over the following day, but we are still seeing it with data that originates via Filebeat.

Our data originates on a remote machine and is shipped with Filebeat into Logstash via a beats input (a sketch of the Logstash input follows the Filebeat config below). The Filebeat prospector is pretty basic and follows the online documentation, aside from the path:

filebeat.prospectors:
- type: log
  paths:
    - /Library/Log/login.log
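On the Logstash side it is a stock beats input, roughly like this (the port is an example, not necessarily what we run):

input {
  beats {
    port => 5044   # example port; it must match the logstash output section in filebeat.yml
  }
}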

In our logstash dead letter queue we see the following:

Could not index event to Elasticsearch. status: 400, action: ["index", {:_id=>nil, :_index=>"login-2017.11.27", :_type=>"log", :_routing=>nil}, #<LogStash::Event:0x2fbf0773>], response: {"index"=>{"_index"=>"login-2017.11.27", "_type"=>"log", "_id"=>"JLUv_l8BJvIoQpoCY3Zl", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Rejecting mapping update to [login-2017.11.27] as the final mapping would have more than 1 type: [log, type1]"}}}

If I look at our index template:

{
  "login" : {
    "order" : 0,
    "index_patterns" : [
      "login-*"
    ],
    "settings" : {
      "index" : {
        "number_of_shards" : "1"
      }
    },
    "mappings" : {
      "type1" : {
        "_source" : {
          "enabled" : false
        },
        "properties" : {
          "host_name" : {
            "type" : "keyword"
          },
          "eventTime" : {
            "type" : "date",
            "format" : "EEE MMM dd HH:mm:ss z YYYY"
          }
        }
      }
    },
    "aliases" : { }
  }
}

Would I just need to replace type1 in the index template mappings with log for the events to be stored properly in the index? I am also working toward removing types and relying on tags in our configuration to be more future-proof.
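Concretely, the change I am asking about would look like this (a sketch only; whether to standardize on log here or on doc via document_type => "doc" in the Logstash output is exactly what I am unsure about):

PUT _template/login
{
  "order": 0,
  "index_patterns": ["login-*"],
  "settings": {
    "index": { "number_of_shards": "1" }
  },
  "mappings": {
    "log": {
      "_source": { "enabled": false },
      "properties": {
        "host_name": { "type": "keyword" },
        "eventTime": { "type": "date", "format": "EEE MMM dd HH:mm:ss z YYYY" }
      }
    }
  }
}

Either way, the template and the _type on incoming documents have to agree on a single name, and the change would only apply to indices created after the template is updated.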
