Hi, I'm having problems reading a log file whose lines are each more than 8,000 characters long. Does Logstash have a limit on line length, or do I have to configure something for it to work correctly?
This is my match filter.
match => {"message" => "%{DATA:Fecha}\s%{DATA:Hora}\s%{DATA:Domain}\s%{LOGLEVEL:NivelLog}\s%{DATA:LoggerMessageProcessor}\s-\smessage.id:\s%{DATA:message}\s/\sRESPONSE HOST:\s%{GREEDYDATA:RESPONSE}.*"}
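For context, this match lives inside a grok filter; the surrounding block looks roughly like this (a sketch reconstructed from the pattern above, not the full pipeline file):

```conf
filter {
  grok {
    match => {
      "message" => "%{DATA:Fecha}\s%{DATA:Hora}\s%{DATA:Domain}\s%{LOGLEVEL:NivelLog}\s%{DATA:LoggerMessageProcessor}\s-\smessage.id:\s%{DATA:message}\s/\sRESPONSE HOST:\s%{GREEDYDATA:RESPONSE}.*"
    }
  }
}
```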
And this is a typical log line:
2019-04-30 10:41:22,982 [[banco_provincia_legacy_domain].HTTP_LISTENER_GENERAL.worker.1340] INFO org.mule.api.processor.LoggerMessageProcessor - message.id: a496f080-6b4d-11e9-b6d6-005056a84e5a / RESPONSE HOST: OK|44|LA PROPUESTA LA TIENE EN ESTE MOMENTO EL TERM
(To reproduce this, append 6,000 spaces to the end of each of these lines.)
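A small script along these lines can generate a test file with the padding described above (a sketch: the output filename `padded.log` and the repeat count are arbitrary choices, not part of the original setup):

```python
# Build the sample log line from the question.
SAMPLE = (
    "2019-04-30 10:41:22,982 "
    "[[banco_provincia_legacy_domain].HTTP_LISTENER_GENERAL.worker.1340] "
    "INFO org.mule.api.processor.LoggerMessageProcessor - "
    "message.id: a496f080-6b4d-11e9-b6d6-005056a84e5a / "
    "RESPONSE HOST: OK|44|LA PROPUESTA LA TIENE EN ESTE MOMENTO EL TERM"
)

# Pad with 6,000 trailing spaces, as described in the post.
padded = SAMPLE + " " * 6000

# Write several copies so the resulting file resembles a real log.
with open("padded.log", "w") as f:
    for _ in range(100):
        f.write(padded + "\n")
```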
This is the config file. The problem is that with the long lines Logstash doesn't process them all, but if I remove the trailing spaces from each line it works fine (I tried this with the same file), which is very strange.