I am using a client node for indexing the data, plus 1 master and 2 data nodes. I have around 15 indexes, each of them having 5 shards and 1 replica.
Thanks David
I have checked the ulimit and it is applied correctly.
(How can I verify it with the node info API?)
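For reference, the nodes stats API exposes per-node file-descriptor counts, which is one way to confirm the ulimit the Elasticsearch process actually sees (endpoint name from the Elasticsearch nodes stats API; `filter_path` just trims the response):

```
GET _nodes/stats/process?filter_path=nodes.*.process.max_file_descriptors,nodes.*.process.open_file_descriptors
```

If `max_file_descriptors` matches your configured ulimit, the limit was applied to the process; comparing it with `open_file_descriptors` shows how close each node is to exhausting it.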
So when a large number of conf files are running simultaneously, the master goes down.
Is it the case that each transaction from Logstash to ES is treated as opening a file?
What if I run all the conf files and send the output to Kafka, and then a centralized Logstash instance reads the data from Kafka and sends it to ES? Would that be a better solution?
When I run all the conf files and send the output to Kafka, and a centralized Logstash instance reads the data from Kafka and sends it to ES, it runs fine.
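The centralized consumer side of that setup can be sketched as a single Logstash pipeline; this is only an illustration, and the broker address, topic name, consumer group, and index pattern below are placeholders, not values from the thread (plugin options are from the standard Logstash `kafka` input and `elasticsearch` output plugins):

```
input {
  kafka {
    bootstrap_servers => "kafka1:9092"      # hypothetical broker address
    topics            => ["app-logs"]       # hypothetical topic name
    group_id          => "logstash-indexers"
  }
}

output {
  elasticsearch {
    hosts => ["http://client-node:9200"]    # index via the client node, as described above
    index => "app-logs-%{+YYYY.MM.dd}"      # hypothetical daily index pattern
  }
}
```

One design benefit of this shape: Kafka absorbs the fan-in from the many producer conf files, so only one Logstash instance holds connections to Elasticsearch instead of one per conf file.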