I have four types of log files.
Every day I get the 4 types from several machines, so for each machine I have: t1-20200703, t2-20200703, t3-20200703 and t4-20200703.
After a few days I will have hundreds of files. How can I organize all this in Elasticsearch and in Kibana?
I'm using Logstash, with this output configuration:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{type}%{+yyyy.MM.dd}"
  }
}
There's not really enough info here to be helpful, but some general guidelines:
You need to avoid too many indices / shards. You should probably combine log files if they are for the same application and almost certainly combine logs from different hosts.
You should research ILM (index lifecycle management); it is the "new way" that replaces date-based index names and automates the management and deletion of old indices.
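As a rough sketch, an ILM policy that rolls over daily (or at 50 GB) and deletes indices after 30 days could be created in Kibana Dev Tools like this. The policy name "app-logs-policy" and the retention values are placeholders; adjust them to your own volume and retention needs:

PUT _ilm/policy/app-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "1d",
            "max_size": "50gb"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}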
As @rugenl pointed out, using ILM is the better way to automate index management on the Elastic side.
If logs from different machines have a similar structure, then you can store them in the same index. Add host information to the logs so that you can filter on those fields to get the logs from a particular machine. Check out the Elastic Common Schema (ECS) to standardize field naming conventions.
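For example, a minimal Logstash filter along those lines, assuming the host name is not already set by your input (Beats inputs populate [host][name] automatically) and using a hypothetical [log][type] field to tell the four log types apart instead of creating four indices:

filter {
  mutate {
    add_field => {
      # placeholder value; Beats inputs normally fill [host][name] for you
      "[host][name]" => "machine-01"
      # hypothetical field to distinguish t1/t2/t3/t4 within one index
      "[log][type]" => "t1"
    }
  }
}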
Once you start using ILM, there is no need to specify a particular index name; you provide an alias, which is a static name. Elasticsearch automatically writes to a date-based backing index and rolls it over based on your ILM policy.
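In practice that means the elasticsearch output points at a write alias instead of building the index name from %{type} and the date. A sketch, with the alias and policy names as placeholders matching the policy above:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    ilm_enabled => true
    # static write alias; Elasticsearch rolls the backing indices over for you
    ilm_rollover_alias => "app-logs"
    ilm_pattern => "{now/d}-000001"
    # the ILM policy created earlier (placeholder name)
    ilm_policy => "app-logs-policy"
  }
}

In Kibana you then create a single index pattern (e.g. app-logs-*) and filter on the host and log-type fields rather than juggling hundreds of per-file indices.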