We have moved to a distributed Linux/Apache Tomcat environment, and the
logs (Apache, Tomcat, applications, syslog, etc.) are killing me. We keep
talking about centralized logging, but it doesn't seem like an easy task.
I've been reading the ELK docs, and I like what I see. What I'm still not
seeing is the overall architecture in a distributed system. Do I run a
Logstash process on each of my server nodes, with each node parsing its
own logs and shipping them to a centralized Elasticsearch cluster? Is
there any documentation anyone could point me to for a better understanding?
So that is question 1. The second question: we virtualized a copy of our
production system in our test environment. How can I keep events from
our production and test environments separate?
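For question 2, would the usual approach be to bake the environment into
the index name, or to tag events per environment? Again just a sketch,
with an assumed "environment" field set per deployment:

filter {
  mutate {
    add_field => { "environment" => "test" }  # set to "prod" or "test" per deployment
  }
}
output {
  elasticsearch {
    hosts => ["es-central.example.com:9200"]
    # environment baked into the index name, e.g. logs-test-2015.06.01
    index => "logs-%{environment}-%{+YYYY.MM.dd}"
  }
}

That would at least let Kibana dashboards filter on the index pattern,
but if there's a more standard way to do this I'd love to hear it.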