Hi Fedele, sorry for the really late response here. I'm working on a solution for this problem, and I'm wondering if you can share a few more hints about your environment?
I have already identified the cause: it is due to a concurrency issue inside the JRuby code. I will be releasing a fix in the next few days, but in the meantime I'm curious about two things:
Do you use config management? If yes, is this config management somehow rewriting the hosts file?
Do you have an uncommon hosts file? Could you share it, maybe?
Sorry, what do you mean by "config management"? Host file == /etc/hosts?
My ELK configuration is:
3 nodes (1 client and 2 masters) on the same ESXi host.
On the master nodes I have 2 redis servers and 2 logstash indexers.
I use this configuration to stay online/running while I do updates.
All documents go into these redis servers, and the 2 logstash indexers (with the same configuration) take from the redis servers and put into elasticsearch through the client node. A snippet:
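A minimal sketch of what such a redis-to-elasticsearch indexer pipeline typically looks like (the hostnames, list key, and port below are assumptions for illustration, not the actual configuration):

```conf
# Hypothetical logstash indexer pipeline: pull events from redis,
# write them to elasticsearch through the client node.
input {
  redis {
    host      => "redis-1.example"   # assumed redis server hostname
    data_type => "list"
    key       => "logstash"          # assumed list key the shippers push to
  }
}
output {
  elasticsearch {
    hosts => ["es-client.example:9200"]   # assumed client-node address
  }
}
```

Running two indexers with the same configuration against the same redis list is what lets either one drain the queue while the other is being updated.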
Hi,
thanks for your quick answer. I mean things like Puppet, Chef, Ansible, etc. The actual source of the problem is related to the /etc/hosts file, so I'm wondering if this could be caused by an external system like one of these.
It would be really nice to validate that the fix works properly for everyone; reproducing this in LS alone has been tricky. I will write up a proper explanation of what you can do tomorrow morning, thanks a lot!
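For context, the lookups that hit /etc/hosts come from the dns filter; a minimal hedged example of how that filter is typically used (the field name below is an assumption):

```conf
# Hypothetical dns filter config; each lookup goes through the system
# resolver, which consults /etc/hosts -- the file involved in the bug.
filter {
  dns {
    reverse => ["source_ip"]   # assumed field holding an IP to reverse-resolve
    action  => "replace"       # overwrite the field with the resolved hostname
  }
}
```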
Hi @purbon. I upgraded to logstash 2.3.4 without the dns plugin, and I haven't had any issues.
If you have a patch, I can test it on my system. I process about 10 million logs per day through this plugin, so if there is a problem I will see it very quickly.