Output logs to host based on timestamp

I have logs that are a mix of old and new. I'd like to send any logs that are newer than 2 weeks to fast SSD, otherwise send to slower HDD. Can someone point me in the direction to figure this out?

Welcome to our community! :smiley:

What output are you using?

Thanks! I am outputting to elasticsearch on localhost:

    elasticsearch {
      index => "maximo-%{+YYYY.MM.dd}"
      hosts => "localhost:9200"
      document_id => "%{[@metadata][fingerprint]}"
    }

Ok, in that case this is not a Logstash thing; it's a matter of where Elasticsearch stores the indices.

You can use ILM to manage that.

Thanks for that. I looked into ILM and I am not sure how it can solve my problem. Does ILM provide a way to identify which disk to place the index on based on the timestamps of the entries or the name of the index?

The idea I was thinking of, though I am not sure if this is how it works: output to different Elasticsearch instances, e.g. run one on localhost:9200 with fast storage and another on localhost:9201 with slow storage, then use an if statement in the output to direct logs to the right place based on the timestamp of each entry. Would something like this work? I am just not sure how to set up the if statement to compare the timestamp of the log entry to the current time.
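For reference, the comparison described above could be sketched with a ruby filter that tags events older than two weeks, plus a conditional output. This is only a sketch under the assumptions in the thread (two local instances on ports 9200/9201; the `old_log` tag name is made up here):

    # Tag events whose @timestamp is more than 14 days in the past.
    filter {
      ruby {
        code => "
          age = Time.now.to_f - event.get('@timestamp').to_f
          event.tag('old_log') if age > 14 * 86400
        "
      }
    }

    output {
      if "old_log" in [tags] {
        elasticsearch {
          hosts => "localhost:9201"   # instance backed by slow HDD (assumed)
          index => "maximo-%{+YYYY.MM.dd}"
          document_id => "%{[@metadata][fingerprint]}"
        }
      } else {
        elasticsearch {
          hosts => "localhost:9200"   # instance backed by fast SSD (assumed)
          index => "maximo-%{+YYYY.MM.dd}"
          document_id => "%{[@metadata][fingerprint]}"
        }
      }
    }

As the replies below note, though, letting Elasticsearch handle placement is usually cleaner than splitting at the Logstash layer.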

With ILM you can define "hot" data to go onto certain nodes; as it ages it becomes "warm" and moves to other nodes.

https://www.elastic.co/guide/en/elasticsearch/reference/7.9/ilm-allocate.html is where it goes into details on that.
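A minimal ILM policy along those lines might look like the following. This is a sketch, assuming your warm-tier nodes are started with a custom node attribute such as `node.attr.data: warm` (the attribute name and the policy name `maximo-policy` are assumptions, not from the thread):

    PUT _ilm/policy/maximo-policy
    {
      "policy": {
        "phases": {
          "hot": {
            "actions": {}
          },
          "warm": {
            "min_age": "14d",
            "actions": {
              "allocate": {
                "require": { "data": "warm" }
              }
            }
          }
        }
      }
    }

After 14 days in the hot phase, the allocate action relocates the index's shards to nodes carrying the matching attribute.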

What about data that is already old? Does it just have to wait for the specified time before it moves to warm storage? Isn't there a way to send old data that is being ingested for the first time straight to the warm storage?

You can set indices to be created on specific nodes using allocation filtering - https://www.elastic.co/guide/en/elasticsearch/reference/current/shard-allocation-filtering.html
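For example, an index could be pinned to the slow nodes at creation time with an index-level allocation filter setting. A sketch, again assuming warm nodes carry `node.attr.data: warm` (the index name here is illustrative):

    PUT maximo-old-2020.01.01
    {
      "settings": {
        "index.routing.allocation.require.data": "warm"
      }
    }

Indices created with this setting allocate their shards only to nodes whose `data` attribute is `warm`, so backfilled old logs land directly on the slow storage instead of waiting out an ILM phase.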