The index name needs to include something unique for a new index to be created, or your ILM policies need to be adjusted so that they roll over every 30 minutes.
The simplest way I can think of is to put a timestamp in the index name in the Logstash output:
output {
  elasticsearch {
    index => "logs-%{+YYYY.MM.dd.HH.mm}"
  }
}
A word of caution: this can lead to a lot of shards being created, so do this at your own risk.
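If you go the ILM route instead, the rollover condition can be set to 30 minutes in the policy itself. A minimal sketch, assuming a hypothetical policy name of logs-30m, applied through the Kibana Dev Tools console:

PUT _ilm/policy/logs-30m
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "30m"
          }
        }
      }
    }
  }
}

For rollover to have something to act on, you also need a write alias and an index template that sets index.lifecycle.name and index.lifecycle.rollover_alias.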
I was able to make it work with what AquaX recommended above. The storage size of the index is roughly 450KB. I am using ILM to delete indices that are over 7 days old, as that is what we require.
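For reference, a 7-day delete phase in an ILM policy looks roughly like this (the policy name is just a placeholder):

PUT _ilm/policy/delete-after-7d
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "7d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}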
Can you please provide guidance or a link on what you advise? Especially the part about adding the filename as a value inside an event.
450KB??!
Yeah... that's incredibly small. You are much better off putting everything into a single index and then creating ILM rules to manage what gets deleted and when. The sweet spot is for shards to be no more than 50GB (ideally between 10GB and 50GB), as per Elastic's own sizing guidance.
This is probably one of the hardest parts of dealing with Elasticsearch (sharding and resource allocation), so there is a lot of reading for you to do.
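As a rough sketch, pointing the Logstash output at a single ILM-managed rollover alias would look something like this (the alias and policy names are only examples; the ilm_* options belong to the elasticsearch output plugin):

output {
  elasticsearch {
    ilm_enabled        => true
    ilm_rollover_alias => "logs"
    ilm_pattern        => "000001"
    ilm_policy         => "logs-policy"
  }
}

With this, every event goes through the "logs" write alias, and the ILM policy decides when to roll over and when to delete, rather than the index name in the pipeline driving it.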
It seems that only the Lens line chart below suffices for my needs, as it breaks down the display further by the string serverName.keyword. Please see below for the index created with hour and minutes (demo-csv-%{+YYYY-MM-dd_hh.mm}). This, however, creates too many shards:
I am not able to create the same chart if I take away hh.mm and use demo-csv-%{+YYYY.MM.dd} instead. That would of course create one index per day, and the expectation is still to plot values (same as shown in the picture above) based on a 30-minute time difference.
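In other words, what I am after is effectively a 30-minute date histogram broken down by serverName.keyword over the daily index, something like the sketch below (the @timestamp field name is an assumption), but I have not been able to reproduce that breakdown in Lens:

GET demo-csv-*/_search
{
  "size": 0,
  "aggs": {
    "per_half_hour": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "30m"
      },
      "aggs": {
        "per_server": {
          "terms": {
            "field": "serverName.keyword"
          }
        }
      }
    }
  }
}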
I was able to create a single index, and it is now being rolled over every 30 minutes through ILM, so indices are still being created every 30 minutes. Please see below: