hi
How can I increase the default Elasticsearch field limit (1000) for any or all indices?
I tried adding the line below to elasticsearch.yml, but Elasticsearch was unable to start, so I assume this is not the right way.
index.mapping.total_fields.limit: 60000
Can anyone please suggest how to go about this? When uploading via Logstash/Filebeat or the bulk API we get this error and some of the data is not uploaded.
You can set this through an index template. That does, however, seem like an awfully large value. The default is there for a reason, and increasing it that much might cause problems, especially if it is applied to a large number of indices. Why do you need to set it so high?
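As a rough sketch, something like the following sets the limit for every index whose name matches the pattern; the template name, index pattern, and limit value here are just placeholders (on recent versions the composable _index_template API replaces _template):

PUT _template/packet_indices
{
  "index_patterns": ["packets-*"],
  "settings": {
    "index.mapping.total_fields.limit": 2000
  }
}

Any index created with a name matching packets-* then picks up the higher limit automatically.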
My use case is to analyse network packets by storing them in Elasticsearch, similar to the link above, but I am grouping them into indices based on the file name, so creating a separate template for each file does not seem feasible.
I am now using Logstash to upload. Is there a way to specify the index field limit in logstash.conf, if a default global Elasticsearch setting is not possible?
Current logstash.conf:
input {
  file {
    path => "C:/logtash/*"
    start_position => "beginning"
  }
}

filter {
  # Drop Elasticsearch Bulk API control lines
  if [message] =~ /\{"index"/ {
    drop {}
  }
}
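For reference, a minimal elasticsearch output for this pipeline could look roughly like the following; the host address, index naming, and the filename field are assumptions rather than settings from the original post (the filename field would first have to be extracted from [path], e.g. with grok or mutate):

output {
  elasticsearch {
    # Assumption: local cluster
    hosts => ["http://localhost:9200"]
    # Assumption: "filename" is a field extracted from [path] earlier in the
    # pipeline; the resulting index names should match the template pattern
    # (e.g. packets-*) so the raised total_fields.limit applies to them.
    index => "packets-%{filename}"
  }
}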