In my system, we have several log types/levels identified by a field named "log_type" (e.g. "python_log", "cache_log", ...), all written to a single log file, "mylog.log".
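For illustration, lines in mylog.log might look like this (the `message` field and its values are hypothetical; only `log_type` comes from the description above):

```json
{"log_type": "python_log", "message": "request handled in 12ms"}
{"log_type": "cache_log", "message": "cache miss for key user:42"}
```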
I am wondering whether I can do something like:
Is there some way to do that? Any clues?
ps: I am working on a POC to replace our current Heka setup.
@pierhugues, @daved, do you have any clues?
So you want to set the `type` field dynamically based on a field from the JSON object? With Filebeat alone there isn't a way to accomplish this, because there is no processor for mutating the data (e.g. copying some field's value to `type` and appending `_logs` to the value).
You should be able to do this with an ingest node pipeline in Elasticsearch or with Logstash.
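A minimal sketch of such an ingest node pipeline (the pipeline name is made up; the `set` processor uses template syntax to read the `log_type` field from each event):

```
PUT _ingest/pipeline/rename-log-type
{
  "description": "Copy log_type into type and append _logs (sketch)",
  "processors": [
    {
      "set": {
        "field": "type",
        "value": "{{log_type}}_logs"
      }
    }
  ]
}
```

With this pipeline applied, an event with `"log_type": "python_log"` would get `"type": "python_log_logs"`.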
Thank you @andrewkroh. So, can we assume that the ES ingest mechanism is faster than Logstash?
I wouldn't assume anything about performance without testing.
Setting up an ingest node pipeline to do this will probably be simpler, since I assume you are already delivering the data to ES. You only need to PUT a pipeline and add the `pipeline` option to your prospector config.
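A sketch of the Filebeat side, assuming an ingest pipeline named `rename-log-type` has already been PUT into Elasticsearch (the log path is an assumption; depending on the Filebeat version, `pipeline` may instead need to go under `output.elasticsearch`):

```yaml
filebeat.prospectors:
  - paths:
      - /var/log/mylog.log        # assumed location of the log file
    json.keys_under_root: true    # parse each line as JSON so log_type is a top-level field
    pipeline: rename-log-type     # must match the ingest pipeline name in Elasticsearch

output.elasticsearch:
  hosts: ["localhost:9200"]       # assumed ES host
```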
Hi @andrewkroh, Thank you for your time! I owe you some beer!
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.