There are definitely differences in the structure of memory usage between 6.x and 7.x that could account for the difference in behaviour you're seeing. But Elasticsearch is still using (much) less than the expected limit of 8GB of memory when it's killed.
There's other weirdness in the kernel logs too:
[13431.218115] kworker/1:1 invoked oom-killer: gfp_mask=0x6200c2(GFP_HIGHUSER), nodemask=(null), order=0, oom_score_adj=0
order=0 means the failed allocation is a single 4kB page, but ...
[13431.218193] Node 0 DMA: 1*4kB (U) 1*8kB (U) 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15900kB
[13431.218201] Node 0 DMA32: 262*4kB (UME) 229*8kB (UME) 202*16kB (ME) 177*32kB (UME) 159*64kB (UME) 90*128kB (UME) 39*256kB (UME) 3*512kB (UM) 0*1024kB 0*2048kB 0*4096kB = 44992kB
[13431.218209] Node 0 Normal: 1791*4kB (MEH) 1171*8kB (UMEH) 703*16kB (UMEH) 281*32kB (UME) 81*64kB (UM) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 41956kB
... all areas have enough free space to satisfy that.
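You can verify those per-zone totals yourself: each `N*SkB` term in the buddy free-list dump means N free blocks of S kB (where S = 4kB × 2^order), so an order-0 request only needs a single nonzero term anywhere. A quick sketch of the arithmetic (the parsing helper is mine, not a kernel tool):

```python
import re

# Free-list counts for the DMA zone, copied from the OOM report above
# (the "(U)"/"(M)" migrate-type annotations are ignored by the regex).
dma_line = ("1*4kB (U) 1*8kB (U) 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) "
            "1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M)")

def total_free_kb(line):
    # Each "N*SkB" term is N free blocks of S kB; sum them all.
    return sum(int(n) * int(s) for n, s in re.findall(r"(\d+)\*(\d+)kB", line))

print(total_free_kb(dma_line))  # 15900, matching the "= 15900kB" in the log
```

The same sum reproduces the 44992kB and 41956kB totals for DMA32 and Normal, so on paper every zone could hand out a 4kB page.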
This StackOverflow answer is consistent with your log indicating free < min in the Normal zone ...
[13431.218188] Node 0 Normal free:41956kB min:42192kB low:52740kB high:63288kB active_anon:62984kB inactive_anon:8384kB active_file:76kB inactive_file:100kB unevictable:4651284kB writepending:16kB present:5242880kB managed:5085632kB mlocked:4651284kB kernel_stack:2576kB pagetables:13724kB bounce:0kB free_pcp:4kB local_pcp:4kB free_cma:0kB
... and it points to a known kernel bug that could cause this. Which kernel version are you running, and is it affected?
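The numbers in that log line bear the free < min condition out, even though the free lists look healthy. A trivial check (this is my simplified reading of the kernel's watermark test, not the actual zone_watermark_ok() code):

```python
# Watermark figures for the Normal zone, taken from the log line above (in kB).
free_kb, min_kb = 41956, 42192

# Roughly speaking, once free memory in a zone drops below the "min"
# watermark, ordinary allocations can no longer be satisfied from that zone,
# and the kernel falls back to reclaim or, ultimately, the OOM killer.
print(free_kb < min_kb)   # True
print(min_kb - free_kb)   # 236, i.e. only 236kB short of the watermark
```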