We have a computer (C1) with Elasticsearch and Kibana, and another computer (C2) with Logstash installed.
Network computers send syslog messages to Logstash, and Logstash forwards them to C1. These logs contain information such as user access and equipment failures. We keep this information for 6 months.
We need to keep some syslog messages (those with facility 4 or 10) for 5 years, so I was thinking of deploying an extra computer (C3) with Elasticsearch and Kibana. C2 would then send the syslog messages with facility 4 or 10 to C3 as well.
When you use time-based indices, retention is managed per index. You could therefore simply create two separate index series (one per retention period) and have Logstash write events with facility 4 or 10 to the one with the longer retention period. When doing this you probably want to adjust the time period each index type covers, e.g. use monthly indices for the long-retention series and daily or weekly indices for the other. You can easily do this within a single cluster.
Once you have parsed out the facility from the log message, create two separate Elasticsearch outputs and use conditionals to send events to the correct plugin based on the facility value.
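A minimal sketch of what that output section could look like, assuming the facility has already been parsed into a field named `facility` (for example via the `syslog_pri` filter, which stores it in `syslog_facility_code`) and that the hosts and index names shown are placeholders:

```
output {
  if [facility] in [4, 10] {
    elasticsearch {
      hosts => ["localhost:9200"]              # same cluster, long-retention series
      index => "syslog-longterm-%{+YYYY.MM}"   # monthly indices, kept 5 years
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "syslog-%{+YYYY.MM.dd}"         # daily indices, kept 6 months
    }
  }
}
```

Retention then becomes a matter of deleting old `syslog-*` daily indices after 6 months and old `syslog-longterm-*` monthly indices after 5 years.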
I know how to create different outputs using conditionals... but I suppose I must configure the "index =>" parameter, no? How should I configure it to create one index per week?
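For weekly indices, the `index` option accepts sprintf date references using Joda-Time tokens, so you can combine the weekyear (`xxxx`) and week-of-weekyear (`ww`) patterns. A sketch (the host and index prefix are placeholders):

```
elasticsearch {
  hosts => ["localhost:9200"]
  index => "syslog-longterm-%{+xxxx.ww}"   # one index per ISO week
}
```

Note that `xxxx` (weekyear) rather than `YYYY` should be paired with `ww`, so events near the new year land in the index of the week they belong to.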