We're using Elasticsearch mostly for logging, but we've started adding telemetry data along with some of the log entries. These additional fields have caused our field count to explode to over 1,000. All of this extra telemetry data sits within unique nested objects for each action.
There are only 10 fields we ever query, and never more than 1-2 at a time. For the most part our installations are 1-2 nodes (4 GB of RAM) and querying is very infrequent; it's mostly just indexing log entries. We collect the data from client systems and import it into another index to run aggregations against the telemetry data, so this never happens on the client installs.
If we never query the bulk of the fields and are simply using them for storage, would increasing the field limit to 2,000 have a significant impact? What about 10,000? From the description of the limit, it sounds like our workflow would avoid the drawbacks of a higher field count.
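For context, the limit we're running into is the `index.mapping.total_fields.limit` index setting (default 1000). A minimal sketch of how we'd raise it, assuming a hypothetical index named `logs`:

```json
PUT /logs/_settings
{
  "index.mapping.total_fields.limit": 2000
}
```

The setting is dynamic, so it can be changed on an existing index without reindexing; the question is whether a mapping that large carries other costs for small nodes like ours.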