The source volume is 2.46 TB, and once it is indexed the store size is 1.1 TB (one set of primary shards plus one replica, i.e. roughly 605 GB per copy, as the _cat/indices output below shows).
My mapping is:
"mappings" : {
"doc" : {
"_size" : {
"enabled" : true
},
"properties" : {
"@timestamp" : {
"type" : "date"
},
"@version" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"hostname" : {
"type" : "text",
"analyzer" : "pattern"
},
"log" : {
"properties" : {
"flags" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
}
},
"log_level" : {
"type" : "text",
"analyzer" : "pattern"
},
"message" : {
"type" : "text",
"analyzer" : "pattern"
},
"package" : {
"type" : "text",
"analyzer" : "pattern"
},
"source" : {
"type" : "text",
"analyzer" : "pattern"
},
"tags" : {
"type" : "text",
"analyzer" : "pattern"
},
"thread" : {
"type" : "text",
"analyzer" : "pattern"
}
}
}
}
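For anyone who wants to check the setup on their own cluster, these are the standard endpoints for pulling the mapping and the index settings (index_name is a placeholder, as elsewhere in this post):
GET /index_name/_mapping
GET /index_name/_settings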
I don't know why, but I didn't see any change in disk space after the force merge; maybe I did something wrong. I thought of taking that up next, but for the sake of completeness I am pasting the output here.
I ran:
POST /index_name/_forcemerge
It completed and returned the following output:
{
  "_shards" : {
    "total" : 32,
    "successful" : 32,
    "failed" : 0
  }
}
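For reference, I ran the request in its plain form above; the same endpoint also accepts the standard options to cap the segment count or to only expunge deleted documents, which I have not tried yet:
POST /index_name/_forcemerge?max_num_segments=1
POST /index_name/_forcemerge?only_expunge_deletes=true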
The output of GET /_cat/nodes?v is (columns: ip, heap.percent, ram.percent, cpu, load_1m, load_5m, load_15m, node.role, master, name):
10.240.0.26 5 98 9 0.01 0.09 0.08 i - coord4
10.240.0.132 48 98 30 2.10 2.19 2.42 mdi - node3
10.240.0.27 6 98 4 0.05 0.11 0.09 i - coord3
10.240.0.29 4 98 12 0.46 0.45 0.36 i - coord2
10.240.0.134 63 98 29 2.71 2.68 2.64 di - node5
10.240.0.28 11 98 23 0.57 0.40 0.36 i - coord1
10.240.0.12 21 98 25 1.71 2.11 2.25 mdi * node2
10.240.0.136 44 98 25 2.84 2.59 2.58 di - node7
10.240.0.135 34 96 28 3.21 2.81 2.73 di - node6
10.240.0.133 40 98 26 2.62 2.60 2.68 di - node4
10.240.0.11 72 97 34 2.72 2.48 2.39 mdi - node1
10.240.0.30 5 98 18 0.01 0.05 0.05 mi - master
The output of GET /_cat/indices?v is (columns: health, status, index, uuid, pri, rep, docs.count, docs.deleted, store.size, pri.store.size):
green open index_name APP-cEzLR-i4vrhoFyL20Q 16 1 6797306863 0 1.1tb 605.2gb
The output of GET /_cat/shards?v is (columns: index, shard, prirep, state, docs, store, ip, node):
index_name 14 p STARTED 424166345 37.8gb 10.240.0.133 node4
index_name 14 r STARTED 424166414 37.7gb 10.240.0.135 node6
index_name 4 p STARTED 424171600 37.8gb 10.240.0.132 node3
index_name 4 r STARTED 424171083 37.7gb 10.240.0.136 node7
index_name 12 p STARTED 424184503 37.7gb 10.240.0.136 node7
index_name 12 r STARTED 424184511 37.7gb 10.240.0.11 node1
index_name 8 r STARTED 424192628 37.7gb 10.240.0.133 node4
index_name 8 p STARTED 424192671 37.9gb 10.240.0.135 node6
index_name 13 r STARTED 424165421 37.7gb 10.240.0.132 node3
index_name 13 p STARTED 424164957 37.8gb 10.240.0.11 node1
index_name 10 r STARTED 424182004 37.7gb 10.240.0.132 node3
index_name 10 p STARTED 424181156 37.8gb 10.240.0.12 node2
index_name 5 p STARTED 424184124 37.7gb 10.240.0.136 node7
index_name 5 r STARTED 424184124 37.6gb 10.240.0.11 node1
index_name 2 p STARTED 424158150 37.5gb 10.240.0.134 node5
index_name 2 r STARTED 424157772 37.4gb 10.240.0.132 node3
index_name 6 r STARTED 424162765 37.6gb 10.240.0.134 node5
index_name 6 p STARTED 424161952 37.7gb 10.240.0.11 node1
index_name 1 p STARTED 424148057 37.9gb 10.240.0.135 node6
index_name 1 r STARTED 424148503 37.8gb 10.240.0.12 node2
index_name 7 p STARTED 424234489 37.9gb 10.240.0.133 node4
index_name 7 r STARTED 424233599 39gb 10.240.0.12 node2
index_name 9 r STARTED 424193478 37.8gb 10.240.0.135 node6
index_name 9 p STARTED 424193779 37.7gb 10.240.0.134 node5
index_name 15 p STARTED 424190986 37.6gb 10.240.0.135 node6
index_name 15 r STARTED 424191265 37.6gb 10.240.0.134 node5
index_name 3 r STARTED 424224518 37.7gb 10.240.0.133 node4
index_name 3 p STARTED 424223699 37.8gb 10.240.0.12 node2
index_name 11 p STARTED 424207929 37.7gb 10.240.0.132 node3
index_name 11 r STARTED 424207417 37.7gb 10.240.0.136 node7
index_name 0 p STARTED 424158289 37.7gb 10.240.0.133 node4
index_name 0 r STARTED 424158641 37.7gb 10.240.0.134 node5
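If segment-level detail would help, these are the standard calls I can run next to check how many segments each shard holds after the merge (index_name is a placeholder again):
GET /_cat/segments/index_name?v
GET /index_name/_stats/segments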
I do have a few other indices as well, but this is the one of major concern.