Issues with Bulk Sending data

I am having a weird issue. I have an index template set up, and I created an index. I tried sending a document from the Dev Tools console inside Kibana and it worked: one document went in. But when I try to send a bulk request from my Python code, I get the error below. Any ideas what's going wrong?

java.lang.IllegalArgumentException: Rejecting mapping update to [xxxxxxxxxxxxxx] as the final mapping would have more than 1 type: [_doc, xxxxxxxx]
        at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest( ~[elasticsearch-7.6.0.jar:7.6.0]
        at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute( ~[elasticsearch-7.6.0.jar:7.6.0]
        at org.elasticsearch.cluster.service.MasterService.executeTasks( ~[elasticsearch-7.6.0.jar:7.6.0]
        at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs( ~[elasticsearch-7.6.0.jar:7.6.0]
        at org.elasticsearch.cluster.service.MasterService.runTasks( [elasticsearch-7.6.0.jar:7.6.0]
        at org.elasticsearch.cluster.service.MasterService.access$000( [elasticsearch-7.6.0.jar:7.6.0]
        at org.elasticsearch.cluster.service.MasterService$ [elasticsearch-7.6.0.jar:7.6.0]
        at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed( [elasticsearch-7.6.0.jar:7.6.0]
        at org.elasticsearch.cluster.service.TaskBatcher$ [elasticsearch-7.6.0.jar:7.6.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ [elasticsearch-7.6.0.jar:7.6.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean( [elasticsearch-7.6.0.jar:7.6.0]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$ [elasticsearch-7.6.0.jar:7.6.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:?]
        at java.util.concurrent.ThreadPoolExecutor$ [?:?]
        at [?:?]

Hard to tell without knowing exactly what you are doing. I can only explain what the error message says:

Rejecting mapping update to [xxxxxxxxxxxxxx] as the final mapping would have more than 1 type: [_doc, xxxxxxxx]

It looks like you're doing something like this:

DELETE /xxxxxxxxxxxxxx
PUT /xxxxxxxxxxxxxx
PUT /xxxxxxxxxxxxxx/_doc/1
{
  "foo": "bar"
}
PUT /xxxxxxxxxxxxxx/xxxxxxxx/1
{
  "foo": "bar"
}

Which is wrong: in Elasticsearch 7.x an index can only have a single mapping type, so once a document has gone in as _doc you cannot index another document with a different type into the same index.
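If you build the bulk request by hand, the safest fix is to leave the type out of the action metadata entirely; in 7.x the bulk endpoint then defaults to _doc, so the mapping can never end up with two types. A minimal sketch, assuming you assemble an NDJSON body in Python (build_bulk_body is an illustrative helper, not your code):

```python
import json

def build_bulk_body(index, docs):
    """Build an NDJSON bulk body that never names a mapping type.

    Omitting "_type" from the action metadata lets Elasticsearch 7.x
    fall back to the default _doc type.
    """
    lines = []
    for doc in docs:
        # Action line: only the index name, no "_type" key.
        lines.append(json.dumps({"index": {"_index": index}}))
        # Source line: the document itself.
        lines.append(json.dumps(doc))
    # The bulk API requires a trailing newline.
    return "\n".join(lines) + "\n"
```

The resulting string would then be POSTed to /_bulk with the application/x-ndjson content type.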

Thanks for the response. The error happens during bulk ingest though, with 100 documents going in from Python directly. Oddly enough, if I send through Logstash it goes in.

Sure, but you still haven't told us the exact steps you are taking, like your Logstash config, your index templates, or your mappings.

Ah! I have a template with all the values set up in Elasticsearch. I create the index with a name matching the template's name* pattern so that it picks up the template. Then I generate a dataset in my application, convert the dataset to JSON, and send it to Elasticsearch. If I pick one of the documents and send it from the Kibana UI it works; when I send the bulk messages 500 at a time it fails.
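Roughly, the sending step looks like this (all names here are placeholders, not the actual layout):

```python
def make_actions(index, dataset):
    """Turn the application dataset (a list of dicts) into bulk actions.

    Each action names only the index; the client supplies the default
    _doc type for Elasticsearch 7.x.
    """
    for row in dataset:
        yield {"_index": index, "_source": row}

# In the real code, the actions are fed to the Python client, roughly:
#   from elasticsearch import Elasticsearch, helpers
#   es = Elasticsearch("http://localhost:9200")  # placeholder URL
#   helpers.bulk(es, make_actions("myindex-2020.03", dataset), chunk_size=500)
```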

Unfortunately, due to the sensitivity of the designs, I can't share the exact layout.

Then I can't help much more.

I can only tell you that you need to check which document type is being used.

When you say "which document type is used", do you mean in the actual bulk messages? Can you point me to the documentation section for what you are talking about so I can take a look?

It can be in the bulk requests. It can be in the Logstash configuration. It can be in the index templates.
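One quick way to narrow it down is to scan the exact bulk payload you send for any explicit type. A rough sketch, assuming you can capture the NDJSON body as a string (types_in_bulk is just an illustrative name):

```python
import json

def types_in_bulk(body):
    """Collect every explicit _type named in a bulk NDJSON payload.

    If this returns anything other than an empty set (or {"_doc"}),
    that stray type is what collides with _doc in the mapping.
    """
    types = set()
    for line in body.splitlines():
        if not line.strip():
            continue
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip anything that is not valid JSON
        for action in ("index", "create", "update", "delete"):
            meta = obj.get(action) if isinstance(obj, dict) else None
            if isinstance(meta, dict) and "_type" in meta:
                types.add(meta["_type"])
    return types
```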

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.