Is there any way to load Logstash configuration dynamically, e.g. from a database or the filesystem?
The use case: let's say I have multiple Apache servers I am collecting data from, and each record has a field 'ip' containing the Apache server's IP address. We need to look up the host name for each IP, stamp it on each record, and send it to Elasticsearch. The mapping of IP addresses to host names lives in a file on my filesystem, so I need to read that mapping dynamically and stamp the host name on each record.
Sounds to me like the shipper on the sending machine should stamp the events with the hostname if you don't want to DNS-resolve them all. It's not something that should be handled by the collector itself, IMHO, and therefore shouldn't require a change in its Logstash config.
Thanks for the prompt response. I think the translate filter can solve my problem if I give it a local file as input. I will explore a little more and come back.
Hostname mapping is just one such requirement. There are a few more where I want to map one value to another, e.g. tagging a request with an API name: based on the request I will add a field 'apiname', and that mapping should come from a database or the filesystem. I think translate can solve my problem.
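For reference, a minimal sketch of what such a translate filter could look like. The dictionary path, field names, and fallback value here are made up for illustration; exact option names vary between translate plugin versions (newer releases use `source`/`target` in place of `field`/`destination`):

```
filter {
  translate {
    field           => "ip"                         # source field holding the Apache IP
    destination     => "hostname"                   # field to stamp the mapped value into
    dictionary_path => "/etc/logstash/ip_to_host.yml"
    fallback        => "unknown-host"               # used when the IP is not in the dictionary
  }
}
```

The same pattern would work for the API-name case by pointing `field` at the request field and `dictionary_path` at a request-to-apiname mapping file.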
If you only have to update config values like the hostname, you can make a template config with placeholder keys and then run a sed command or other template merger prior to starting up Logstash. I did that to inject the hostname and cloud metadata on ephemeral hosts that ran Logstash.
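A rough sketch of that approach, assuming a hypothetical placeholder token `__HOSTNAME__` in a template file (file names and the placeholder convention are made up for illustration):

```shell
# Hypothetical template config with a placeholder token.
cat > logstash.conf.tmpl <<'EOF'
filter {
  mutate { add_field => { "hostname" => "__HOSTNAME__" } }
}
EOF

# Substitute the placeholder with the real hostname before starting Logstash.
sed "s/__HOSTNAME__/$(hostname)/" logstash.conf.tmpl > logstash.conf
```

Run this in the startup script (or init/systemd unit) so the rendered `logstash.conf` is in place before Logstash launches.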
I am using the translate plugin with a YAML dictionary loaded from a file. When I change this file, the translate plugin does not pick up the latest changes. Is there any way to reload the dictionary automatically whenever it changes?
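The translate filter has a `refresh_interval` setting that controls how often (in seconds) the dictionary file is re-checked for changes, so periodic reloading is built in. A sketch, with the path and interval chosen for illustration:

```
filter {
  translate {
    field            => "ip"
    destination      => "hostname"
    dictionary_path  => "/etc/logstash/ip_to_host.yml"
    refresh_interval => 60   # re-read the dictionary file every 60 seconds
  }
}
```

Note the reload is interval-based, not instantaneous: edits to the YAML file take effect only after the next refresh tick.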