The final mapping would have more than 1 type

I wrote a template (included below, after the error) and tried to load it manually, but it failed. This message showed up:

{"error":
{"root_cause":[
{"type":"illegal_argument_exception","reason":"Rejecting mapping update to [OC22HHTNQxaFWq2g_XGKyA] as the final mapping would have more than 1 type: [t_df_dfxx, t_kd_grdfd]"}], 
"type":"illegal_argument_exception",
"reason":"Rejecting mapping update to [OC22HHTNQxaFWq2g_XGKyA] as the final mapping would have more than 1 type: [t_df_dfxx, t_kd_grdfd]"},"status":400}

{
  "index_patterns": ["tes*"],
  "settings": {
    "index": {
      "number_of_shards": 3,
      "number_of_replicas": 2
    }
  },
  "mappings": {
    "t_df_dfxx": {
      "properties": {
        "ydaxj": { "type": "text" },
        "nbddh": { "type": "text" },
        "ajrds": { "type": "text" }
      }
    },
    "t_kd_grdfd": {
      "properties": {
        "kk": { "type": "text" },
        "zjdd": { "type": "text" },
        "yxjdf": { "type": "text" }
      }
    }
  }
}

What I want to do is map different logs to different types, like t_kd_grdfd and t_df_dfxx. Would you mind telling me the right way to do this? Thank you.

Elasticsearch 6.x only supports a single type in new indices, as described in this blog post.
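For reference, a minimal sketch of what a valid 6.x template could look like with a single type, merging the fields of your two former types. The type name doc, the template name test_template, and the extra log_type field are illustrative placeholders, not something taken from your setup:

# single-type template sketch; "doc" and "log_type" are placeholder names
PUT _template/test_template
{
  "index_patterns": ["tes*"],
  "settings": {
    "index": {
      "number_of_shards": 3,
      "number_of_replicas": 2
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "ydaxj": { "type": "text" },
        "nbddh": { "type": "text" },
        "ajrds": { "type": "text" },
        "kk": { "type": "text" },
        "zjdd": { "type": "text" },
        "yxjdf": { "type": "text" },
        "log_type": { "type": "keyword" }
      }
    }
  }
}

The distinction between log formats then lives in a custom field such as log_type, which you can filter on, instead of in separate mapping types.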

Thank you for your reply. I have different log formats with different names, for example a.log, b.log, c.log, ... a, b, c have some relationship; actually, they are exported from a database. I want to build only one index and map them, if I can put it that way, to different types.

What should I do in Logstash and Filebeat? I am totally confused by the removal of mapping types.

Please provide examples to help us better understand the problem you are having. It is very hard to give any concrete advice based on the currently available information.

File names of my logs (txt): 1_A.txt, 1_B.txt, 1_C.txt, 2_A.txt, 2_B.txt, 2_C.txt.
A, B, and C have different formats (types). Let's say A's format is name,account,address; B's format is account,balance; C's format is account,status.

I put all txt files under one directory and collect them with Filebeat and Logstash into Elasticsearch.
I will build an index named test and put the data (.txt) into it.
Specifically, I want to put all *_A.txt into type a_type, *_B.txt into b_type, and *_C.txt into c_type.
Because I use Filebeat and Logstash, I need to load the template manually, so I need to write and load a template, e.g. with curl as sketched below.
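(A minimal sketch of loading the template by hand, assuming the template JSON is saved as template.json and Elasticsearch runs on localhost:9200:)

# load an index template manually; file name and host are assumptions
curl -X PUT "localhost:9200/_template/test_template" \
  -H 'Content-Type: application/json' \
  -d @template.json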

I have read the documentation and understand how to put two documents into one index, but since I use Filebeat and Logstash, I am confused.

What should I do to accomplish this?
Thank you for your patience and help.

{
  "mappings": {
    "a_type": {
      "properties": {
        "name": { "type": "text" },
        "account": { "type": "keyword" },
        "address": { "type": "keyword" }
      }
    },
    "b_type": {
      "properties": {
        "account": { "type": "text" },
        "balance": { "type": "keyword" }
      }
    },
    "c_type": {
      "properties": {
        "account": { "type": "text" },
        "status": { "type": "keyword" }
      }
    }
  }
}

I wrote the above as my template for the index test, but it failed with the error message "the final mapping would have more than 1 type." And how does Logstash know that *_A.txt goes into a_type?
I am so confused about the removal of mapping types.

The mapping contains more than 1 type, which is not allowed. Can you please give concrete examples of documents of the different types and explain why you feel you need to use type instead of e.g. just adding a custom field to each document that indicates the 'type' (which you can then filter on)?
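For example, a sketch of such a filter in Logstash. It assumes the events come from Filebeat, whose source field carries the file path; doc_type is a field name chosen here purely for illustration:

filter {
  # route by file name; "source" is the path Filebeat attaches to each event
  if [source] =~ /_A\.txt$/ {
    mutate { add_field => { "doc_type" => "a" } }
  } else if [source] =~ /_B\.txt$/ {
    mutate { add_field => { "doc_type" => "b" } }
  } else if [source] =~ /_C\.txt$/ {
    mutate { add_field => { "doc_type" => "c" } }
  }
}

All documents then share one mapping, and a query or Kibana filter on doc_type replaces the per-type lookup.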

Thank you for your kindness. I am a beginner with Elasticsearch. My scenario is simple.
I have 26 kinds of txt files from 30 different organizations. Let's say 1_A.txt, 2_A.txt, ... 30_A.txt; 1_B.txt, 2_B.txt, ...; 1_Z.txt, 2_Z.txt, ... 30_Z.txt.
The lines in these files are delimited by ^A, for example: Smith^A56425887^A199.00^AFermont,CA^A^A20180101
What I want to do is harvest these 26*30 txt files with Filebeat and ship them to Logstash, using a simple filter, mutate { split => ["message", "^A"] }, to split every line into fields, and eventually store them in Elasticsearch under a single index, let's say test (a sketch of such a pipeline is shown below).
I want to analyze these data, most of which are transaction data, and visualize them with Kibana or do some machine learning or statistics using X-Pack.
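(A simplified sketch of such a pipeline; the beats port, the host, and the index name are assumptions, and "^A" is kept exactly as written above:)

input {
  beats { port => 5044 }
}
filter {
  # split each line on the ^A delimiter
  mutate { split => ["message", "^A"] }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "test"
  }
}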

I used 'type' initially because it is easy to understand: an index is similar to a database and a type is similar to a table. Furthermore, when I read the Filebeat and Logstash documentation, I found that if Filebeat does not connect directly to Elasticsearch, I need to load the template manually, which means I would need 26 types in my template, for example type A, type B, ... type Z.

When I test my template containing only 1 type, I can use Filebeat to collect certain txt files, let's say *_A.txt, and after simple filtering by Logstash the data are sent to Elasticsearch. Of course, I load A's template manually beforehand.

When I add one more type into my template, the error occurs. Like you said, the mapping is not allowed to contain more than 1 type. And I learned from the Elasticsearch documentation that mapping types will be completely removed in version 7.0.

Is there any good solution for my simple application?

Your post mentioned just adding a custom field to each document that indicates the 'type'. I think that is a way to solve my problem. But I am new to Filebeat, Logstash and Elasticsearch; would you mind showing me some examples, especially the filter configuration files? It would also be a great help if you told me what knowledge I need to learn.

Thank you for your time. It is a pleasure to learn from you.

Examples of documents of the different types look like this.
1_A.TXT contains: Smith^A56425887^A199.00^AFermont,CA^A1^A20180101
Format: name^Aaccount^Abalance^Aaddress^Aorganization_id^Adate
So Smith,56425887,199.00,Fermont,CA,1,20180101 will be my document.
If I deal with them like a database, I can search within type A, and A has a "name" field, so I know how to analyze it.
2_A.TXT contains documents like: Bay^A32425887^A878.00^ASan,LA^A2^A20180101

I know this method does not benefit from Elasticsearch. Would you please tell me how to tackle this?

1_Z.TXT contains documents like: 1232322^A343.00^A12^Acredit card^A20180202^A1

If I add a custom field to each document that indicates the 'type' (actually, I want to know how to accomplish this), what about my field names? They will influence the way I access the data when I analyze it. Thank you.
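(For illustration only: a single-type mapping that merges the example A format with a hypothetical doc_type field, and a query that filters on it. The type name doc, the field types, and the date format are guesses from the samples above:)

PUT _template/test_template
{
  "index_patterns": ["test*"],
  "mappings": {
    "doc": {
      "properties": {
        "name":            { "type": "text" },
        "account":         { "type": "keyword" },
        "balance":         { "type": "float" },
        "address":         { "type": "keyword" },
        "organization_id": { "type": "keyword" },
        "date":            { "type": "date", "format": "yyyyMMdd" },
        "doc_type":        { "type": "keyword" }
      }
    }
  }
}

GET test/_search
{
  "query": {
    "term": { "doc_type": "a" }
  }
}

Field names stay the same as in a per-type design; only the extra doc_type filter is added to each query.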

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.