I've read that the type field in ES 6.0.0 is deprecated and that it is planned to be removed in 7.0.0.
Did I catch this correctly?
If so, I've got a question about how to understand the new paradigm and the thinking behind it, so I can find the best way to conform with it over time.
Currently we have multiple logfiles (apache, tomcat, tuxedo, different proprietary applications).
I treated each logfile of each application as a different type (apache_access, apache_error, ulog, application_1, application_2, ...).
For each type I set document_type in Filebeat, which arrives as the type field in Logstash. There, based on the type field, I use different filters to parse the logs and maybe do some Ruby calculations.
Then I keep the type per logfile, ship it to ES, and use the type.keyword field in Kibana to search for the needed logs (combined with other filters, of course).
So if I no longer have the type field, what should I change in the future? Just rename the type field to "logtype"?
Or can everything stay as it is, but some API will no longer be available (e.g. the _search request via curl, where I currently have the possibility to filter by index and type in the URL)?
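To make it concrete, I currently do something like the first request below; with types gone, I assume it has to become something like the second, with the old type kept as a normal field (index, type, and field names here are made up):

# Today: filtering by type in the URL path
curl -s 'http://localhost:9200/logs-2017.12.11/apache_access/_search?pretty'

# After the change: one type per index, the old type as an ordinary keyword field
curl -s 'http://localhost:9200/logs-2017.12.11/_search?pretty' \
  -H 'Content-Type: application/json' -d '{
  "query": { "term": { "logtype": "apache_access" } }
}'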
I just found the announcement, but no deeper details for me as a user / administrator.
I updated to 6.0.1 in my dev environment, coming from 5.1.2.
I updated Elasticsearch, Kibana and Logstash. I have not yet updated Filebeat.
Filebeat is used to push data to Logstash, which parses it and pushes it to Elasticsearch. So there is no direct path from Filebeat to ES in my use case.
Now I get the following error in Logstash:
[2017-12-11T15:09:49,343][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"tux-prod-2017.12.11", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x75772663>], :response=>{"index"=>{"_index"=>"tux-prod-2017.12.11", "_type"=>"doc", "_id"=>"hyvpRWABwH9ho5f3DagR", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Rejecting mapping update to [tux-prod-2017.12.11] as the final mapping would have more than 1 type: [rdautoorders, doc]"}}}}
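For reference, checking the mapping of that index shows the type it already contains (the index name is taken from the error above):

# Lists the existing type(s) in the index; the error means Logstash tried to
# add a second type ("doc") next to the existing "rdautoorders"
curl -s 'http://localhost:9200/tux-prod-2017.12.11/_mapping?pretty'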
So, after indexing, I need to filter for my logfile types, which declare which application they come from.
If I can now only use one type per index, and you guys keep telling me that my indices are too small with too many shards, then please give me some guidance on how the new paradigm is supposed to work. Do I need to rename the type field to a normal field like "logfileType" and filter on that? If so, how can I migrate all my old indices?
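I imagine something like the following _reindex call, which copies the old _type into a normal field, but I am not sure (the target index name and field name are made up):

# Migrate one old multi-type index into a new single-type index,
# preserving the old _type as a normal source field
curl -s -XPOST 'http://localhost:9200/_reindex' \
  -H 'Content-Type: application/json' -d '{
  "source": { "index": "tux-prod-2017.12.11" },
  "dest":   { "index": "tux-prod-2017.12.11-migrated" },
  "script": {
    "source": "ctx._source.logfileType = ctx._type; ctx._type = \"doc\"",
    "lang":   "painless"
  }
}'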
OK, there is a migration guideline in this link, but I have the following problem.
It seems we ran our Elastic Stack a bit sloppily.
We did not define the types explicitly; they were just created on first occurrence by Logstash. Then in Kibana I just refreshed the mapping, and that was it.
Is there a good way to export a merged mapping configuration for a type in ES 6.0.1?
Do I need to set the fields explicitly before the reindexing process, or can I stay on the unclean path and start the reindexing without putting a mapping to the index?
We have a retention time of 40 days for our data. In our queries via Kibana I query only the needed index pattern, which holds a set of types, and then filter by type. I am thinking about a smooth migration process where the migration completes by itself after 40 days, like:
Changing Logstash to write single-type indices.
For temporary backwards compatibility, I think I need to add a new field "oldType" which holds the string of the old type field.
Changing queries to query all indices and to filter for "(type.keyword: mytype OR oldType.keyword: mytype)" instead of "type.keyword: mytype" (see the sketch after this list).
Once the migration is done, the oldType field is no longer necessary. And after all old indices are gone, I could go back to querying only the explicit target indices.
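The transitional filter from above could look like this as a raw search request (the index pattern and the value "mytype" are made up):

# Query all matching indices, accepting either the old or the new type field
curl -s 'http://localhost:9200/tux-prod-*/_search?pretty' \
  -H 'Content-Type: application/json' -d '{
  "query": {
    "query_string": {
      "query": "type.keyword:mytype OR oldType.keyword:mytype"
    }
  }
}'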
Are there any issues with my reasoning?
Would I see much performance degradation from searching across all indices instead of only a subset?
Is there a good way to export a merged mapping configuration for a type in ES 6.0.1?
You could PUT the mappings one after the other to your 6.x index; it will merge them and complain if there are incompatibilities.
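For example (index and type names are taken from this thread; the fields are just placeholders, and the new single-type index is assumed to exist already):

# Export the mapping of one old type
curl -s 'http://localhost:9200/tux-prod-2017.12.11/_mapping/rdautoorders?pretty'

# Put each old type's "properties" into the single type of the new index,
# one after the other; Elasticsearch merges them and rejects conflicts
curl -s -XPUT 'http://localhost:9200/tux-prod-2017.12.11-migrated/_mapping/doc' \
  -H 'Content-Type: application/json' -d '{
  "properties": {
    "message":     { "type": "text" },
    "logfileType": { "type": "keyword" }
  }
}'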
Do I need to set the fields explicitly before the reindexing process, or can I stay on the unclean path and start the reindexing without putting a mapping to the index?
You need to set fields first.
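A minimal sketch of that step, creating the target index with an explicit single-type mapping before reindexing (the index name and fields are placeholders):

# Create the target index with its mapping up front
curl -s -XPUT 'http://localhost:9200/tux-prod-2017.12.11-migrated' \
  -H 'Content-Type: application/json' -d '{
  "mappings": {
    "doc": {
      "properties": {
        "@timestamp":  { "type": "date" },
        "message":     { "type": "text" },
        "logfileType": { "type": "keyword" }
      }
    }
  }
}'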
Changing queries to query all indices and to filter for "(type.keyword: mytype OR oldType.keyword: mytype)" instead of "type.keyword: mytype"
This part confused me, because you would query the document type with _type, not type.keyword? But other than that it makes sense.
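That is, on the old multi-type indices the type is metadata, so it would be queried roughly like this (the index pattern and value are made up):

# Term query on the _type metadata field instead of a source field
curl -s 'http://localhost:9200/tux-prod-*/_search?pretty' \
  -H 'Content-Type: application/json' -d '{
  "query": { "term": { "_type": "mytype" } }
}'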