Elasticsearch setting field index limit

(Dhivya) #1

How can I raise the default Elasticsearch field limit above 1000, for any or all indices?
I tried adding the line below to elasticsearch.yml, but Elasticsearch was then unable to start, so I assume this is not the right way:
index.mapping.total_fields.limit: 60000

Can anyone please suggest how to go about this? When uploading via Logstash/Filebeat or the bulk API we hit the field-limit error and some data fails to upload.
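For reference: Elasticsearch 5.x and later refuse to start when index-level settings such as this one appear in elasticsearch.yml, which is why the node would not come up. For an index that already exists, the limit can instead be raised dynamically through the index settings API; a minimal sketch, with my-index as a placeholder index name and an illustrative limit:

PUT my-index/_settings
{
  "index.mapping.total_fields.limit": 2000
}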

(Christian Dahlqvist) #2

You can set this through an index template. That does, however, seem like an awfully large value. The default is there for a reason, and increasing it that much might cause problems, especially if it is applied to a large number of indices. Why do you need to set it so high?
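A minimal sketch of such a template, assuming Elasticsearch 6.x legacy template syntax (5.x releases use "template" instead of "index_patterns" for the pattern field); the template name, pattern, and limit below are placeholders:

PUT _template/raise_field_limit
{
  "index_patterns": ["pcap-*"],
  "settings": {
    "index.mapping.total_fields.limit": 2000
  }
}

Every index created afterwards with a name matching pcap-* picks up the higher limit; indices that already exist are not affected.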

(Dhivya) #3

Thank you. Is there an option without a template, to make it the default for any upload into Elasticsearch?

My use case is to analyse network packets by storing them in Elasticsearch (similar to the link above), but I am grouping them into indices based on filename, so creating a separate template for each file does not seem feasible.

I am currently using Logstash to upload. Is there a way to specify the index field limit in logstash.conf, if a default global Elasticsearch setting is not possible?

Current logstash.conf:

input {
  file {
    path => "C:/logtash/*"
    start_position => "beginning"
  }
}

filter {
  # Drop Elasticsearch Bulk API control lines such as {"index":{...}}
  if [message] =~ /^\{"index/ {
    drop {}
  }

  # Parse each remaining line as a JSON document
  json {
    source => "message"
    remove_field => "message"
  }

  # Derive the index name from the file name (strip path and .json suffix)
  grok { match => [ "path", "/(?<filename>[^/]+)\.json" ] }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    document_type => "pcap_file"
    index => "%{filename}"
    manage_template => false
  }
}
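If the template route suggested in #2 is taken, the elasticsearch output itself can install the template at startup; a hedged sketch of the relevant options, assuming the stock logstash-output-elasticsearch plugin and a hypothetical template file (note manage_template is flipped to true so the file is actually uploaded):

output {
  elasticsearch {
    hosts => "localhost:9200"
    document_type => "pcap_file"
    index => "%{filename}"
    manage_template => true
    # hypothetical file containing a template body like the one sketched in #2
    template => "C:/logtash/field-limit-template.json"
    template_name => "field_limit"
    template_overwrite => true
  }
}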

(Christian Dahlqvist) #4

You can create a template that matches any index name pattern.
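A minimal sketch of a catch-all version, reusing the placeholder template from #2 with the pattern widened to every index:

PUT _template/raise_field_limit
{
  "index_patterns": ["*"],
  "settings": {
    "index.mapping.total_fields.limit": 2000
  }
}

One wildcard template then covers every per-file index Logstash creates, so no per-file templates are needed.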

(Dhivya) #5

Thank you, the template helped to resolve this.

(system) #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.