Limit exceeded message needs some analysis

Hello

I am getting a lot of these debug messages in my Elasticsearch 5.4 setup. Surely this index is nowhere near 1000 fields. Is there anything I can do to limit this?

curl -s -XGET http://localhost:8089/logstash-2017.07.24/_mapping?pretty | grep type | wc -l
307
I don't get this :disappointed:

[2017-07-27T09:46:02,606][DEBUG][o.e.a.b.TransportShardBulkAction] [tZ-k4Bz] [logstash-2017.07.27][2] failed to execute bulk item (index) BulkShardRequest [[logstash-2017.07.27][2]] containing [index {[logstash-2017.07.27][pfactory-log][AV2DcFG-WcoJ2PrqU8ow], source[{"@timestamp":"2017-07-27T09:46:01.929Z","message":"2017-07-27T09:46:01,335 [INFO ][Client-Push:2:2][PriceRequestListener] Received PriceRequest: std_hdr { msg_type: "PriceRequest" sender_comp_id: "akka_external-2" target_comp_id: "PF" sending_time: 1501148761335 reply_to: "IR9CJb6mJkxRP5bGXqIUynq2v0o" } subscription_id: "6db7e5e9-6bc4-11e7-8f77-d89d672383e8" lease_period: 120000 product: FX_SPOT ccy_pair: "CHFMXN" base_ccy_dealt: true dealt_ccy_amt: "0" client_side: TWO_WAY source: "SX" dealing_method: ESP end_user_id: "user_stg" end_user_role: "SYSTEM" in_competition: UNKNOWN on_behalf: false from TumTransportContext{realm='nsp://localhost:9003', subject='pf-uat-ln4'} message id 3946151","@version":"1","path":"/data01/orion2/logs/pf-uat-ln4.log","host":"localhost.net","type":"pfactory-log","elastic_source_type":"pfactory-log","logpath":"/data01/orion2/logs/pf-uat-ln4.log","timestamp":"2017-07-27T09:46:01","tz":"335","loglevel":"INFO","Thread":"Client-Push:2:2","EventType":"PriceRequestListener","realm":"nsp://localhost:9003","PriceRequest":"std_hdr","msg_type":"PriceRequest","sender_comp_id":"akka_external-2","target_comp_id":"PF","sending_time":"1501148761335","reply_to":"IR9CJb6mJkxRP5bGXqIUynq2v0o","subscription_id":"6db7e5-6bc4-11e7-77-d89d672383e8","lease_period":"120000","product":"FX_SPOT","ccy_pair":"CHFMXN","base_ccy_dealt":"true","dealt_ccy_amt":"0","client_side":"TWO_WAY","source":"SX","dealing_method":"ESP","end_user_id":"user_stg","end_user_role":"SYSTEM","in_competition":"UNKNOWN","on_behalf":"false","subject":"pf-uat-ln4"}]}]
java.lang.IllegalArgumentException: Limit of total fields [1000] in index [logstash-2017.07.27] has been exceeded
    at org.elasticsearch.index.mapper.MapperService.checkTotalFieldsLimit(MapperService.java:593) ~[elasticsearch-5.4.2.jar:5.4.2]
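
The [1000] in that exception is the index.mapping.total_fields.limit setting, which defaults to 1000 mapped fields per index. If the extra fields are genuinely wanted, one option is to raise it on the affected index. This is only a sketch; adjust host, port, index name and the new value to your setup, and note that for daily logstash-* indices the same setting would also need to go into the index template to cover future days:

# Sketch only: raise the per-index field limit on the index that is complaining.
# The value 2000 is a placeholder, not a recommendation.
curl -XPUT 'http://localhost:9200/logstash-2017.07.27/_settings' -d '
{
  "index.mapping.total_fields.limit": 2000
}'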

What do you get if you look at the mappings for the index Elasticsearch is complaining about (logstash-2017.07.27)?

Sorry

The list is too long and I don't have access to pastebin. It's 4901 lines.

What do you get if you run your grep on it?

[user@localhost logs]$ curl -s -XGET http://localhost:9200/logstash-2017.07.27/_mapping?pretty | grep type | wc -l
863

Still not a thousand. What do you make of it?
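
For what it's worth, grepping for the bare word "type" is only a rough proxy: it also matches field names such as msg_type and elastic_source_type, and it misses object fields that only carry "properties". As far as I understand, the limit counts object parents and multi-fields as well, and the dynamic fields from a rejected bulk item are never persisted, so the stored mapping can sit below 1000 even though a write tripped the limit. A slightly tighter, still approximate, count of explicit type declarations:

# Approximate count of type declarations in the pretty-printed mapping;
# the pattern tolerates either "type":"..." or "type" : "..." spacing.
curl -s 'http://localhost:9200/logstash-2017.07.27/_mapping?pretty' | grep -cE '"type" *: *"'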

Do you have something that generates lots of dynamic field names? Do you use a lot of different types that you generate on the fly?

Ok, I'm running the unstructured data through a kv filter and generating some fields from it. They are limited, but I suppose I may have to tone it down some more.
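
If the kv keys come from free-form message content, every new key becomes a new mapped field, and daily indices can creep toward the limit. A minimal sketch of one way to tone it down, assuming a standard kv filter on the message field (the include_keys list below is just an illustrative subset of the keys visible in the log line above, not your actual pipeline):

filter {
  kv {
    source       => "message"
    # Only keep an explicit allowlist of keys; anything else in the payload
    # is ignored instead of becoming a new dynamically mapped field.
    include_keys => ["msg_type", "ccy_pair", "product", "client_side", "subscription_id"]
  }
}

With an allowlist like this, the set of mapped fields stays fixed no matter what new key=value pairs show up in the messages.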
