Hello,
For some time I have been occasionally seeing Kibana return a 500 Internal Server Error. Recently it seems to happen more often. I have tried making changes suggested in other threads in this community, but they just completely break Kibana or Elasticsearch. Often if I go away and come back, Kibana is fine for a while.
I believe the issue is a lack of memory, but maybe it is a config issue. This is the first ELK stack I have set up. It is a single physical server.
There is plenty of disk space.
8 GB of RAM
8 CPUs @ 2.4 GHz
OS: CentOS 8
ELK 7.6.0-1
top usually looks like this:
top - 18:23:43 up 30 days, 4:05, 1 user, load average: 3.15, 2.98, 2.79
Tasks: 284 total, 1 running, 283 sleeping, 0 stopped, 0 zombie
%Cpu(s): 26.0/0.5 27[|||||||||||||| ]
MiB Mem : 7767.0 total, 225.4 free, 5900.9 used, 1640.8 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 1118.0 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
24935 elastic+ 20 0 240.5g 3.9g 332040 S 205.6 51.3 251:30.56 java
22750 logstash 39 19 6422492 1.1g 9592 S 2.3 14.0 9:13.04 java
25259 kibana 20 0 1747056 369192 10608 S 1.0 4.6 2:13.65 node
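If it helps with diagnosis, I can also pull the heap numbers straight from Elasticsearch instead of top. This is just what I would run, assuming the default HTTP port 9200 on localhost:

curl 'localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max,ram.percent'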
Kibana error:
{"statusCode":500,"error":"Internal Server Error","message":"[parent] Data too large, data for [<http_request>] would be [3127549290/2.9gb], which is larger than the limit of [2993920409/2.7gb], real usage: [3127548776/2.9gb], new bytes reserved: [514/514b], usages [request=0/0b, fielddata=21638/21.1kb, in_flight_requests=514/514b, accounting=599882168/572mb]: [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [3127549290/2.9gb], which is larger than the limit of [2993920409/2.7gb], real usage: [3127548776/2.9gb], new bytes reserved: [514/514b], usages [request=0/0b, fielddata=21638/21.1kb, in_flight_requests=514/514b, accounting=599882168/572mb], with { bytes_wanted=3127549290 & bytes_limit=2993920409 & durability=\"PERMANENT\" }"}
An internal server error occurred
Version: 7.6.0
Build: 29000
Error: Internal Server Error
at fetchResponse$ (http://192.168.240.8:5601/bundles/commons.bundle.js:3:2991052)
at s (http://192.168.240.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:338:774546)
at Generator._invoke (http://192.168.240.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:338:774299)
at Generator.forEach.e.<computed> [as next] (http://192.168.240.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:338:774903)
at s (http://192.168.240.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:338:774546)
at t (http://192.168.240.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:338:775041)
at http://192.168.240.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:338:775191
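When the error appears I can also check the parent circuit breaker state directly. Again assuming the default port and no custom breaker settings on my side:

curl 'localhost:9200/_nodes/stats/breaker?pretty'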
jvm.options:
## GC configuration
8-13:-XX:+UseConcMarkSweepGC
8-13:-XX:CMSInitiatingOccupancyFraction=75
8-13:-XX:+UseCMSInitiatingOccupancyOnly
## G1GC Configuration
14-:-XX:+UseG1GC
14-:-XX:G1ReservePercent=25
14-:-XX:InitiatingHeapOccupancyPercent=30
## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}
## heap dumps
-XX:+HeapDumpOnOutOfMemoryError
# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=/var/lib/elasticsearch
# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log
## JDK 8 GC logging
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:/var/log/elasticsearch/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m
# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m
I have tried 4 GB of heap space.
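By that I mean setting the heap in /etc/elasticsearch/jvm.options and then restarting the elasticsearch service, i.e.:

-Xms4g
-Xmx4g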