Hi,
I see that combine_partial is not a parameter for the container input. Does the input now automatically handle Docker's 16KB message limit?
Thx
D
Is there anyone who can provide clarification on this?
Hey @dawiro,
Yes, it seems this feature slipped during the refactors for the container and filestream inputs. Are you finding issues with this?
I mean, the logic is there, and this behaviour is in theory maintained by default, but there doesn't seem to be any setting to disable it.
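For reference, a minimal sketch of how the deprecated docker input exposed this setting (the container selector is illustrative; the container input has no equivalent option):

```yaml
filebeat.inputs:
  - type: docker
    # Illustrative: read logs from all containers on the host
    containers.ids: ["*"]
    # The old input exposed partial-line joining explicitly (default: true).
    combine_partial: true
```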
We're making changes to our multiline config, and devs are claiming that separate log events are being combined, i.e. that the multiline config isn't working. My answer is that, unbeknownst to them, they're emitting newlines which are getting merged instead of being split out as they had been before.
I'm asking this question as due diligence, to assess whether something else could be wrong. Note: a complicating factor for us is that we don't have access to the raw container logs.
We do have fat messages that exceed the 16KB limit, so we do need the combine_partial functionality.
Indeed, if the messages contain newlines at the end, they are likely to be merged by the logic that combines partial lines.
How are you defining the multiline configuration? Maybe you can give the filestream input a try; it has container and multiline parsers and gives a bit more control over when each one is executed.
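As a rough sketch (the id, paths and multiline pattern below are illustrative, not taken from this thread), a filestream configuration with the container and multiline parsers could look like this, with the parsers applied in the order they are listed:

```yaml
filebeat.inputs:
  - type: filestream
    id: container-logs            # illustrative id
    paths:
      - /var/lib/docker/containers/*/*.log
    parsers:
      # First decode the Docker JSON log format...
      - container:
          stream: all
          format: docker
      # ...then apply multiline on the decoded messages.
      - multiline:
          type: pattern
          pattern: '^\d{4}-\d{2}-\d{2}'   # illustrative: new event starts with a date
          negate: true
          match: after
```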
Atm, we're using the container input with a separate multiline and JSON handling spec. If we use filestream, will we still get Docker/container metadata for log enrichment?
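For context, a rough sketch of that kind of setup, assuming a container input with a multiline section plus a decode_json_fields processor (the path, pattern and fields are illustrative guesses, not the actual config from this thread):

```yaml
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    # Illustrative multiline settings; the real pattern is not shown in this thread.
    multiline:
      pattern: '^\s'
      negate: false
      match: after

processors:
  # Separate JSON handling, e.g. decoding JSON embedded in the message field.
  - decode_json_fields:
      fields: ["message"]
      target: ""
```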
You can have enrichment if you use an autodiscover provider or the add_docker_metadata processor.
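A minimal sketch of the processor route (the socket path is just the usual default and is illustrative here):

```yaml
processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"
```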
How are you enriching now?
We're using add_docker_metadata. If we were to switch to filestream, what would we lose relative to the container input? Also, I presume the filestream input won't handle logs chopped up by Docker.
Umm, not sure, I am afraid that it will behave the same. If you can confirm that it doesn't work for your use case either, we can open an issue to recover this setting.
Is the handling of the 16KB limit part of the container input, or part of Beats itself in a general sense?
The 16KB limit is part of Docker. The container input joins split lines until the combined message ends with a newline, regardless of their size.
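As an illustration (the contents below are made up), Docker's json-file driver writes one JSON object per line; when a message exceeds the limit it is split across entries, and only the final chunk's log field ends with a newline. The joining logic appends chunks until that trailing \n is seen:

```
{"log":"first chunk of a very long message, cut at roughly 16KB...","stream":"stdout","time":"2021-01-01T00:00:00.000000000Z"}
{"log":"...rest of the same message\n","stream":"stdout","time":"2021-01-01T00:00:00.000000001Z"}
```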
So the filestream input would not recombine messages chopped up by Docker. That would create a different problem for us.