Hi, I'm still pretty new to ES. I've been using it for about a year, but I have a lot to learn.
I have built a "system" (I use that term loosely) that sends a large amount of log data via Filebeat to Logstash for some enrichment, then on to Elasticsearch for storage. I have a year's worth of logs in indexes broken up by project name, log type, job site, then year and month, for example:
filebeat-lookout-suricata-site1-2020.11
filebeat-lookout-suricata-site2-2020.11
filebeat-lookout-suricata-site3-2020.11
filebeat-lookout-p0f-site1-2020.11
filebeat-lookout-p0f-site2-2020.11
filebeat-lookout-p0f-site3-2020.11
After a year of learning, I can see this is not the best approach. Based on my reading and research, I'd like to implement some ILM policies and move all of last year's data off my "hot" nodes onto my "warm" nodes.
My Logstash output config is:
output {
  # catch-all: events that are neither P0f nor Suricata, and not from the "ids" site
  if [type] != "P0f" and [type] != "Suricata" and [site] != "ids" {
    elasticsearch {
      hosts => ["192.168.1.60:9200"]
      index => "filebeat-lookout-%{[site]}-%{+yyyy.MM}"
    } #es
  } #end if
  # P0f events get their own monthly index per site
  if [type] == "P0f" {
    elasticsearch {
      hosts => ["192.168.1.60:9200"]
      index => "filebeat-lookout-p0f-%{[site]}-%{+yyyy.MM}"
    } #es
  } #end if
  # Suricata events get their own monthly index per site
  if [type] == "Suricata" {
    elasticsearch {
      hosts => ["192.168.1.60:9200"]
      index => "filebeat-lookout-suricata-%{[site]}-%{+yyyy.MM}"
    } #es
  } #end if
} #end output
So it's my understanding that I need to create new index templates with the proper mappings, and add an alias to each of them (see the sketch after the list below).
so i would have aliases like:
- lighthouse-p0f
- lighthouse-suricata
- lighthouse-main-logs
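To make this concrete, here's my rough sketch of one of those templates, using the p0f alias as the example. I'm assuming ES 7.8+ composable templates, and "lighthouse-policy" plus the "lighthouse-p0f-*" pattern are just placeholder names I came up with:

PUT _index_template/lighthouse-p0f
{
  "index_patterns": ["lighthouse-p0f-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "lighthouse-policy",
      "index.lifecycle.rollover_alias": "lighthouse-p0f"
    }
  }
}

and then bootstrap the first index with the write alias so rollover has somewhere to start:

PUT lighthouse-p0f-000001
{
  "aliases": {
    "lighthouse-p0f": { "is_write_index": true }
  }
}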
Once I have these in place, I would set up my ILM rules (roughly the policy sketch after this list):
HOT:
- enable rollover
- roll over at 50 GB or 30 days
WARM:
- move after 90 days
- node attribute: box_type: warm
- set replicas: 1
- shrink: 9 shards
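From my reading of the ILM docs, I think that translates to a policy something like this (a sketch only; "lighthouse-policy" is my placeholder name, and I haven't verified the shrink factor against my actual shard counts):

PUT _ilm/policy/lighthouse-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb", "max_age": "30d" }
        }
      },
      "warm": {
        "min_age": "90d",
        "actions": {
          "allocate": {
            "require": { "box_type": "warm" },
            "number_of_replicas": 1
          },
          "shrink": { "number_of_shards": 9 }
        }
      }
    }
  }
}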
Once I do that is where I get confused. I will need to do two things:
- send all new data to the aliases, so new data gets the ILM rules (see the output sketch after this list)
- reindex all old data so it follows the ILM rules?
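For the first part, my guess is that the Logstash elasticsearch output's ILM options replace the date-math index names, so each conditional would become something like this (again a sketch; the alias and policy names are my placeholders):

elasticsearch {
  hosts => ["192.168.1.60:9200"]
  ilm_enabled => true
  ilm_rollover_alias => "lighthouse-p0f"
  ilm_pattern => "000001"
  ilm_policy => "lighthouse-policy"
}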
How do I push all my old indexes through this?
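My best guess for the old data is one _reindex per monthly index into the new write alias, something like the call below, but I don't know if that is the right approach or if there is a better way to do it in bulk:

POST _reindex
{
  "source": { "index": "filebeat-lookout-p0f-site1-2020.11" },
  "dest": { "index": "lighthouse-p0f" }
}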
Thank you,
Darrell