I am wondering: has anyone already configured shipping Squid proxy logs directly to Elasticsearch via Filebeat? If yes, can someone provide me with the information to achieve this?
Thank you
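For reference, a minimal `filebeat.yml` sketch for this setup. There is no official Squid module, so this uses a plain log input; the paths, host, and the `squid-access` pipeline name are example values, not defaults, and parsing of the Squid format would be left to an ingest pipeline you define yourself.

```yaml
# Minimal sketch: ship Squid access logs straight to Elasticsearch.
# Adjust paths and hosts to your environment.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/squid/access.log   # default Squid access log location
    fields:
      event.module: squid           # illustrative tag; not a real Filebeat module
    fields_under_root: true

output.elasticsearch:
  hosts: ["localhost:9200"]
  # Optionally parse the Squid format server-side with an ingest pipeline:
  # pipeline: squid-access
```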
Can I get any update on this post? I'm really stuck at ingesting Squid logs via Filebeat!
We're about to head down the same path.
Seems like there are a LOT of people needing to do this: https://discuss.elastic.co/search?q=squid
Really, it should be put together as a module, IMO.
I think we have to confirm whether Squid is using CLF, in which case you could use another format, i.e., nginx (TBD) or apache. https://wiki.squid-cache.org/Features/LogFormat
Or, if it's using the Squid-specific format, then it would be new, and probably handled as an ECS mapping.
I see some direction here: https://github.com/elastic/ecs/issues/300, but no 'answer' around the ECS mapping. I think nginx has its own fields, but the discussion went to using the 'observer' fields.
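To illustrate the difference: Squid's native format is a fixed 10-field line (see the LogFormat wiki page above), so a parser is straightforward. This is only a sketch; the dict keys and the sample line are illustrative, not an agreed field naming.

```python
# Sketch of a parser for Squid's native access.log format (10 fields),
# per https://wiki.squid-cache.org/Features/LogFormat. Key names are
# illustrative, not an official mapping.
def parse_squid_native(line):
    """Split one native-format access.log line into a dict."""
    fields = line.split()
    if len(fields) != 10:
        raise ValueError("expected 10 fields, got %d" % len(fields))
    result_code, _, status = fields[3].partition("/")   # e.g. TCP_MISS/200
    hierarchy, _, peer = fields[8].partition("/")       # e.g. HIER_DIRECT/93.184.216.34
    return {
        "timestamp": float(fields[0]),   # seconds since epoch, ms precision
        "duration_ms": int(fields[1]),   # elapsed time in milliseconds
        "client_address": fields[2],
        "result_code": result_code,
        "status_code": int(status),
        "bytes": int(fields[4]),         # bytes sent to the client
        "method": fields[5],
        "url": fields[6],
        "user": fields[7],               # rfc931 ident, "-" if absent
        "hierarchy": hierarchy,
        "peer": peer,
        "content_type": fields[9],
    }

line = ("1588060800.123    150 192.168.1.10 TCP_MISS/200 4512 "
        "GET http://example.com/index.html - HIER_DIRECT/93.184.216.34 text/html")
event = parse_squid_native(line)
print(event["result_code"], event["status_code"], event["bytes"])
# → TCP_MISS 200 4512
```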
thanks Randy-312
I'm reviewing the links from ECS - Squid proxy log normalization while we're looking to map it in.
This is also interesting: https://github.com/molu8bits/squid-filebeat-kibana/blob/master/filebeat/etc/filebeat/squid-fields.yml, which covers ONLY the CLF, and not the full Squid format (10 fields).
This work will require us to map the squid.* fields into corresponding ECS ones. I believe we'll do work similar to what was done for nginx, with the use of aliases. https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-nginx.html
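A rough sketch of what such a mapping could look like. The squid.* names and the choice of ECS targets here are assumptions following the direction of elastic/ecs#300, not an agreed convention; Squid-specific fields with no ECS equivalent (result code, hierarchy) fall back to a squid.* namespace.

```python
# Illustrative (NOT official) mapping from native Squid fields to ECS
# field names. ECS targets chosen by analogy with the nginx module.
SQUID_TO_ECS = {
    "client_address": "source.address",
    "status_code": "http.response.status_code",
    "bytes": "http.response.bytes",
    "method": "http.request.method",
    "url": "url.original",
    "user": "user.name",
    "content_type": "http.response.mime_type",
}

def to_ecs(event):
    """Rename known fields to ECS; keep Squid-specific ones under squid.*"""
    out = {}
    for key, value in event.items():
        if key == "duration_ms":
            out["event.duration"] = value * 1_000_000  # ECS durations are nanoseconds
        else:
            out[SQUID_TO_ECS.get(key, "squid." + key)] = value
    return out

print(to_ecs({"url": "http://example.com/", "result_code": "TCP_MISS"}))
# → {'url.original': 'http://example.com/', 'squid.result_code': 'TCP_MISS'}
```

The nginx module does the reverse with Elasticsearch `alias`-type fields, so both old and ECS names resolve to the same data; the same approach would apply here once the concrete field names are settled.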
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.