Getting Maximum Bytes Length Exception in Logstash Indexer

Hey,
I keep getting this error on the Logstash indexer and I don't know how to resolve it. Can somebody please help me get it resolved? I have 3 million messages waiting in RabbitMQ.

"reason"=>"Document contains at least one immense term in field="message" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[77, 101, 115, 115, 97, 103, 101, 32, 98, 111, 100, 121, 58, 32, 123, 34, 109, 105, 109, 101, 95, 116, 121, 112, 101, 34, 58, 34, 97, 112]...', original message: bytes can be at most 32766 in length; got 93268", "caused_by"=>{"type"=>"max_bytes_length_exceeded_exception", "reason"=>"max_bytes_length_exceeded_exception: bytes can be at most 32766 in length; got 93268"}}}}, :level=>:warn}

Thanks

Sounds like you have a value that exceeds 32,766 bytes. Change the capture from bytes to a float; that should give you much more room for larger numbers. Then you can set it back to bytes when it gets to Elasticsearch.
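For the "set it back" step, here is a minimal sketch of one way to do it in Logstash, using a mutate filter to convert the captured string into a numeric type before it is shipped to Elasticsearch (the field name bytes is just an example, not something from your actual config):

filter {
  mutate {
    # Convert the string that grok captured into a real number so
    # Elasticsearch indexes it as a numeric field again.
    convert => { "bytes" => "integer" }
  }
}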

Change the capture from bytes to a float; that should give you much more room for larger numbers. Then you can set it back to bytes when it gets to Elasticsearch.

How do I do this? I don't know where to make that change.

Thanks

It would be helpful if you posted your filter code. The easiest way would be to change it in your grok pattern syntax.

For example, if your grok filter captures the value like this:

%{NUMBER:bytes}

You can change it to:

%{BASE16FLOAT:bytes}
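In context, a minimal sketch of what that grok filter could look like (the surrounding pattern and field names are illustrative, not taken from your pipeline):

filter {
  grok {
    # BASE16FLOAT in place of NUMBER, as described above.
    match => { "message" => "%{WORD:method} %{BASE16FLOAT:bytes}" }
  }
}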

Here is a list of the default grok patterns: Grok Patterns

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.