I have a doubt about lookups. Recently I implemented a pipeline using the Memcached filter plugin, but the Memcached plugin does not work well with JSON values. This is what I needed to do for work:
You can keep your keys updated with the translate filter as well: point it to a file with your external dictionary and set a refresh interval, and Logstash will reload the keys and values when they change.
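For reference, a minimal sketch of what that looks like (the field names and the dictionary path are just placeholders; depending on your plugin version the options may be called field/destination instead of source/target):

```
filter {
  translate {
    source           => "user_id"          # event field that holds the lookup key (placeholder name)
    target           => "user_name"        # field that receives the looked-up value (placeholder name)
    dictionary_path  => "/etc/logstash/dictionaries/users.yml"   # external dictionary file (YAML, JSON or CSV)
    refresh_interval => 300                # re-check the file every 300 seconds and reload it if it changed
    fallback         => "unknown"          # value to use when the key is not found
  }
}
```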
If you want an example, I wrote a blog post about the translate filter a while ago.
Regarding performance, both are pretty fast, but since the translate filter looks up the key-value pairs in the memory of the Logstash process, it will be a little faster; the trade-off is that the size of the dictionary it can hold is limited.
I also wrote an older blog post about some of the differences between translate and memcached.
The main advantage of memcached is that you can have larger dictionaries, and multiple Logstash instances can connect to it if needed.
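A basic memcached lookup in the filter would look something like this (the host, namespace and field names are placeholders for your own values):

```
filter {
  memcached {
    hosts     => ["memcached01:11211"]               # shared memcached instance (placeholder host)
    namespace => "lookups"                           # optional key prefix (placeholder)
    get       => { "%{user_id}" => "[user][name]" }  # memcached key (sprintf of the event) => event field to write
  }
}
```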
If your memcached runs on the same host as your Logstash, you can also configure it to listen on a unix socket and point the filter at that socket; this is not in the documentation, but it works. I made a PR to add it to the documentation, but it is still waiting for a review from someone at Elastic.
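If you want to try the unix socket setup, it is something along these lines, with memcached started with the -s option to listen on a socket; since this is undocumented, the exact behaviour may depend on the plugin and client versions, and the socket path below is just an example:

```
filter {
  memcached {
    # memcached started with: memcached -s /var/run/memcached/memcached.sock
    hosts => ["/var/run/memcached/memcached.sock"]   # socket path instead of host:port (example path)
    get   => { "%{user_id}" => "[user][name]" }
  }
}
```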