Field limit exceeded, Docs count too high

Hello Elastic-co,

We are currently using your full stack, Logstash to Elasticsearch to Kibana. However, I have some questions:

1. Field limit exceeded

I am intermittently getting the following warning in my Logstash logs:

[2018-07-19T12:10:00,717][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"myindex", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x724d17fc>], :response=>{"index"=>{"_index"=>"myindex", "_type"=>"doc", "_id"=>"UmOFs2QBIfe17xrKFSEI", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Limit of total fields [1000] in index [myindex] has been exceeded"}}}}

However, when I open the Elasticsearch index, copy the entire mapping object, paste it into the browser console, and run Object.keys(myobject.doc.properties).length, the result is only 723.

Where does this limit of 1000 come from? I am using managed templates and :sql_last_value with a stored procedure.
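For reference, my understanding is that this is the index.mapping.total_fields.limit setting, which defaults to 1000 and can be raised per index with a dynamic settings update. A sketch of what I believe that call looks like (index name taken from my setup, 2000 is just an example value):

PUT myindex/_settings
{
  "index.mapping.total_fields.limit": 2000
}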

2. Docs count no longer matches

When I left last night, after running a single index, the docs count was within 20 records of the total number of rows returned by my stored procedure, and it stayed there over several hours of running. After I turned on all 5 of my indexes and let them run overnight, the docs count went up by 943,000, even though the stored procedure only returns 28,855 rows. Logstash is correctly logging the last id, so what would cause the docs count to skyrocket like that?
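For context, one thing I'm wondering is whether the output needs an explicit document_id so that rows picked up again by the schedule overwrite their existing documents instead of being indexed as new ones under auto-generated ids. A sketch of what I think that would look like (the %{id} reference is just my assumption based on the tracking column):

output {
  elasticsearch {
    hosts => "@awiip"
    index => "mytestindex"
    # assuming "id" is the unique key returned by the stored procedure
    document_id => "%{id}"
  }
}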

3. use_lowercase_columns

I haven't been able to find this setting anywhere in the documentation; however, while debugging a config error yesterday I noticed in my log file that it was set to true. I didn't set this value, and it might explain why my original tracking column of VisitorID wasn't working. I have since changed the tracking column to "id" and it's working properly, but is this value changeable? I am using the jdbc input against SQL Server.
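If the setting I saw in the log is the jdbc input's lowercase_column_names option (which I may have misread the name of), a sketch of explicitly turning it off, trimmed to the relevant lines, would be:

input {
  jdbc {
    # defaults to true, which would turn a column like VisitorID into visitorid
    lowercase_column_names => false
    use_column_value => true
    tracking_column => "VisitorID"
  }
}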

Relevant Documents
mytestindex.conf (before placeholder values are substituted)

input {
  jdbc {
    jdbc_connection_string => "@connstring"
    jdbc_user => nil
    jdbc_driver_library => "@driverpath"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    schedule => "*/5 * * * *"
    statement => "EXEC GetMyTestIndex :sql_last_value"
    use_column_value => true
    tracking_column => "id"
    tracking_column_type => "numeric"
    id => "test_index_runner"
    record_last_run => true
  }
}

filter {
  useragent {
    source => "browser"
    target => "user_agent"
    remove_field => "browser"
  }
}

output {
  elasticsearch {
    hosts => "@awiip"
    user => "@awslogstashuser"
    password => "@awslogstashpw"
    index => "mytestindex"
    template => "@templatepath"
  }
}

Template:

{
  "template" : "logstash-*",
  "version" : 60001,
  "settings" : {
    "index.refresh_interval" : "5s",
    "index" : {
      "mapping" : {
        "total_fields" : {
          "limit" : "2000"
        }
      }
    }
  },
  "mappings" : {
    "_default_" : {
    }
  }
}
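For completeness, this is roughly how I have been checking whether the index actually picked up the raised limit and what the full mapping contains (index name from my setup; I look for index.mapping.total_fields.limit in the settings response):

GET mytestindex/_settings
GET mytestindex/_mapping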

Answering only your first question, about the field limit being exceeded.

Where does this 1000 error come from?

The total number of fields also includes meta fields. We have a GitHub issue open to clarify the documentation on the total number of fields; in that issue you can also find some examples of these meta fields.
