I'm very new to Logstash. I have read access to a database, so I decided to pull data from a remote database into Logstash. The data volume will be at least 1 TB per day.
How can I do capacity planning in terms of storage, networking, CPU, and memory?
What best practices should I follow?
Is it possible to pull TBs of data with a single Logstash instance, or do I need to set up clustering? How would clustering work for pulling data from a remote database into Logstash?
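For context, this is the kind of pipeline I have in mind: a JDBC input pulling from the database on a schedule and shipping into Elasticsearch. A minimal sketch (the connection string, credentials, table, and tracking column are placeholders, not my real setup):

```
input {
  jdbc {
    # Hypothetical connection details -- replace with your own driver/DB.
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://db.example.com:3306/mydb"
    jdbc_user => "logstash"
    jdbc_password => "secret"
    schedule => "* * * * *"   # poll every minute
    # Incremental pull: only rows newer than the last-seen id.
    statement => "SELECT * FROM events WHERE id > :sql_last_value"
    use_column_value => true
    tracking_column => "id"
    # Page the result set to keep memory bounded on large pulls.
    jdbc_paging_enabled => true
    jdbc_page_size => 100000
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```

My main worry is whether a single instance running something like this can keep up at 1 TB/day.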