Data being written to WARM nodes along with HOT nodes

Hello Experts,

I noticed something while exploring the Hot/Warm architecture as per the blog below.

This is my node config:

Node-1
"cluster_name" : "elk-test",
"nodes" : {
  "szZvzywsRuGCTEzXhLU2-Q" : {
    "name" : "elk-1",
    "transport_address" : "10.1.28.175:9300",
    "host" : "10.1.28.175",
    "ip" : "10.1.28.175",
    "version" : "6.7.1",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "2f32220",
    "total_indexing_buffer" : 421645516,
    "roles" : [
      "master",
      "data",
      "ingest"
    ],
    "attributes" : {
      "data" : "hot"
    },
    "settings" : {
      "cluster" : {
        "name" : "elk-test"
      },
      "node" : {
        "attr" : {
          "data" : "hot"
        },
        "name" : "elk-1"
      },
      "path" : {
        "data" : [
          "/opt/data/elastic",
          "/home/elk/opt1/data/elastic_1"
        ],
Node-2
  "XZlIzGZkSe2BE_4LKNaOcQ" : {
    "name" : "elk-2",
    "transport_address" : "10.1.28.176:9300",
    "host" : "10.1.28.176",
    "ip" : "10.1.28.176",
    "version" : "6.7.1",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "2f32220",
    "total_indexing_buffer" : 421645516,
    "roles" : [
      "master",
      "data",
      "ingest"
    ],
    "attributes" : {
      "data" : "warm"
    },
    "settings" : {
      "cluster" : {
        "name" : "elk-test"
      },
      "node" : {
        "attr" : {
          "data" : "warm"
        },
        "name" : "elk-2"
      },
      "path" : {
        "data" : [
          "/opt/data/elastic",
          "/home/elk/opt1/data/elastic_1"
        ],
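
For reference, the "data" attribute shown above is defined on each node; a minimal sketch of the relevant elasticsearch.yml lines, assuming they were set there (names and paths taken from the output above):

# elk-1 (hot)
cluster.name: elk-test
node.name: elk-1
node.attr.data: hot
path.data: ["/opt/data/elastic", "/home/elk/opt1/data/elastic_1"]

# elk-2 (warm)
cluster.name: elk-test
node.name: elk-2
node.attr.data: warm
path.data: ["/opt/data/elastic", "/home/elk/opt1/data/elastic_1"]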

My Template:

[elk@elk2 44SXSucURSKaGSVxmoq1Uw]$ cat /opt/logstash/config/templates/swift_proxy_log_sizing.json
{
  "template": "swift_proxy_logs",
  "index_patterns": ["swift-proxy-logs-*"],
  "settings": {
    "index.routing.allocation.require.box_type": "hot",
    "index.refresh_interval": "5s",
    "index.codec": "best_compression",
    "number_of_shards": 5,
    "number_of_replicas": 0
  },
  "aliases": {
    "swift_proxy_log_write_alias": {}
  }
}
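
To cross-check the attribute name the allocation filter keys on against what the nodes actually advertise, something like this could be used (just a sketch; the index name is the one from the _cat output further down):

GET _cat/nodeattrs?v&h=node,attr,value
GET swift-proxy-logs-2019.05.09-1/_settings
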
Questions:

1. I am writing to the HOT node index swift-proxy-logs- (which lives on elk-1 as per my template config), but I see data being written to the WARM node as well. Am I doing anything wrong here?

HOT Node: elk-1
[root@elk1 44SXSucURSKaGSVxmoq1Uw]# ll
total 0
drwxrwxr-x. 5 elk elk 46 May 9 13:05 0
drwxrwxr-x. 5 elk elk 46 May 9 13:05 2
drwxrwxr-x. 2 elk elk 23 May 9 13:06 _state
[root@elk1 44SXSucURSKaGSVxmoq1Uw]# cd /home/elk/opt1/data/elastic_1/nodes/0/indices/44SXSucURSKaGSVxmoq1Uw
[root@elk1 44SXSucURSKaGSVxmoq1Uw]# ll
total 0
drwxrwxr-x. 5 elk elk 46 May 9 13:05 4
drwxrwxr-x. 2 elk elk 23 May 9 13:06 _state
[root@elk1 44SXSucURSKaGSVxmoq1Uw]# du -sh /opt/data/elastic/nodes/0/indices/44SXSucURSKaGSVxmoq1Uw;du -sh /home/elk/opt1/data/elastic_1/nodes/0/indices/44SXSucURSKaGSVxmoq1Uw
1.6G /opt/data/elastic/nodes/0/indices/44SXSucURSKaGSVxmoq1Uw
785M /home/elk/opt1/data/elastic_1/nodes/0/indices/44SXSucURSKaGSVxmoq1Uw

WARM Node: elk-2
[elk@elk2 44SXSucURSKaGSVxmoq1Uw]$ ll
total 0
drwxrwxr-x. 5 elk elk 46 May 9 13:05 1
drwxrwxr-x. 2 elk elk 23 May 9 13:06 _state
[elk@elk2 44SXSucURSKaGSVxmoq1Uw]$ cd /home/elk/opt1/data/elastic_1/nodes/0/indices/44SXSucURSKaGSVxmoq1Uw
[elk@elk2 44SXSucURSKaGSVxmoq1Uw]$ ll
total 0
drwxrwxr-x. 5 elk elk 46 May 9 13:05 3
drwxrwxr-x. 2 elk elk 23 May 9 13:06 _state
[elk@elk2 44SXSucURSKaGSVxmoq1Uw]$ du -sh /opt/data/elastic/nodes/0/indices/44SXSucURSKaGSVxmoq1Uw;du -sh /home/elk/opt1/data/elastic_1/nodes/0/indices/44SXSucURSKaGSVxmoq1Uw
785M /opt/data/elastic/nodes/0/indices/44SXSucURSKaGSVxmoq1Uw
785M /home/elk/opt1/data/elastic_1/nodes/0/indices/44SXSucURSKaGSVxmoq1Uw
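
A shard-level view should show the same placement per node; a sketch of the call I would use to list where each shard of this index ended up:

GET _cat/shards/swift-proxy-logs-*?v&h=index,shard,prirep,state,store,node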

2. If I add up all the above index shard sizes, it comes to around 3 GB, but _cat/indices is showing only 1.8 GB. Why?
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open swift-proxy-logs-2019.05.09-1 44SXSucURSKaGSVxmoq1Uw 5 0 3357698 0 1.8gb 1.8gb
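
The gap between du and store.size may just be translog and segments that have not been merged away yet; the per-shard breakdown behind that store.size can be pulled with the index stats API (a sketch, not verified against this index):

GET swift-proxy-logs-2019.05.09-1/_stats/store,segments,translog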

Thanks
Chandra

Just an update on my index size (2nd question).

Now I see the index size matching the shard size. Maybe compression was happening in the back end, and the GET query was returning the index size post-compression.
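
If that is what happened, forcing a merge should make the on-disk size settle, since index.codec: best_compression applies as segments are written and merged. A sketch (only worth running once the index is no longer being written to):

POST swift-proxy-logs-2019.05.09-1/_forcemerge?max_num_segments=1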

Thanks
Chandra
