High CPU usage / load average while no queries are running

I think I did not wait long enough between stopping all server services except ES and checking the load average. When all my services are up, I see the following:

hot_threads:

::: {sHsHRsf}{sHsHRsflTtigKOko_FvnVg}{2IQyThMHQF23eoouaLsa7w}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=8165040128, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
   Hot threads at 2019-01-03T16:35:06.487, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:
   
   31.4% (156.8ms out of 500ms) cpu usage by thread 'elasticsearch[sHsHRsf][search][T#4]'
     3/10 snapshots sharing following 32 elements
       org.apache.lucene.index.TermContext.build(TermContext.java:99)
       org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:211)
       org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:751)
       org.elasticsearch.search.internal.ContextIndexSearcher.createWeight(ContextIndexSearcher.java:148)
       org.apache.lucene.search.BooleanWeight.<init>(BooleanWeight.java:54)
       org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:207)
       org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:751)
...
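
For reference, the dump above comes from the nodes hot threads API; something like this should reproduce it (assuming ES listens on localhost:9200, adjust host/port for your setup):

# dump the 3 busiest threads per node, sampled over a 500ms interval
curl -s 'http://localhost:9200/_nodes/hot_threads?threads=3&interval=500ms'

top:
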
top - 17:38:22 up 41 days,  7:14,  1 user,  load average: 7.38, 7.13, 6.68
Tasks: 117 total,   3 running,  73 sleeping,   0 stopped,   0 zombie
%Cpu(s): 43.0 us, 53.8 sy,  0.0 ni,  0.3 id,  0.0 wa,  0.0 hi,  0.7 si,  2.2 st
KiB Mem :  7973672 total,   132904 free,  5420600 used,  2420168 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  2142024 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                     
27523 elastic+  20   0 8124680 2.521g 1.205g S  81.4 33.2 730:12.18 java                                                                        
12240 postgres  20   0  328300 154316 149232 R  20.8  1.9  28:08.07 postgres                                                                    
10225 postgres  20   0  328796 155664 149860 R  19.5  2.0  50:05.82 postgres                                                                    
 9563 postgres  20   0  327540 154192 149624 S  18.2  1.9  56:18.46 postgres                                                                    
31575 me        20   0 1330936 558840   8400 S  11.6  7.0 378:39.91 ruby                                                                        
 9988 postgres  20   0  318268  24964  23024 S   6.9  0.3  12:02.92 postgres                                                                    
16391 me        20   0   44560   4024   3388 R   1.9  0.1   0:04.22 top                                                                         
 9951 www-data  20   0  172812  11504   6036 S   0.9  0.1  11:38.94 nginx

Postgres is probably the culprit, so I will dig in that direction. Thank you very much, everyone, for your time. I will update this thread with my solution once I have one, whether that turns out to be PG optimization or simply a server upgrade.
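
To confirm that, I will probably start by checking what Postgres is actually executing via pg_stat_activity, roughly like this (default psql connection options assumed, adjust for your setup):

# list non-idle backends, longest-running first
psql -c "SELECT pid, state, now() - query_start AS runtime, left(query, 80) AS query
         FROM pg_stat_activity
         WHERE state <> 'idle'
         ORDER BY runtime DESC;"

If nothing heavy shows up there, enabling the slow query log (log_min_duration_statement) is the next thing I would look at.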