ELK 7.6.0-1 Kibana

Hello,

For some time I have been occasionally seeing Kibana return a 500 Internal Server Error. Recently it seems to happen more often. I have tried making changes based on suggestions from other threads in this community, but they just completely break Kibana or Elasticsearch. Often, if I go away and come back, Kibana is fine for a while.
I believe the issue is a lack of memory, but maybe it is a config issue. This is the first ELK Stack I have set up. It is a single physical server.

There is plenty of disk space.

8 GB of RAM
8 CPUs @ 2.4 GHz

OS: CentOS 8

ELK 7.6.0-1

top usually looks like this:

top - 18:23:43 up 30 days,  4:05,  1 user,  load average: 3.15, 2.98, 2.79
Tasks: 284 total,   1 running, 283 sleeping,   0 stopped,   0 zombie
%Cpu(s):  26.0/0.5    27[||||||||||||||                                       ]
MiB Mem :   7767.0 total,    225.4 free,   5900.9 used,   1640.8 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   1118.0 avail Mem

 PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
24935 elastic+  20   0  240.5g   3.9g 332040 S 205.6  51.3 251:30.56 java
22750 logstash  39  19 6422492   1.1g   9592 S   2.3  14.0   9:13.04 java
25259 kibana    20   0 1747056 369192  10608 S   1.0   4.6   2:13.65 node

Kibana Error

{"statusCode":500,"error":"Internal Server Error","message":"[parent] Data too large, data for [<http_request>] would be [3127549290/2.9gb], which is larger than the limit of [2993920409/2.7gb], real usage: [3127548776/2.9gb], new bytes reserved: [514/514b], usages [request=0/0b, fielddata=21638/21.1kb, in_flight_requests=514/514b, accounting=599882168/572mb]: [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [3127549290/2.9gb], which is larger than the limit of [2993920409/2.7gb], real usage: [3127548776/2.9gb], new bytes reserved: [514/514b], usages [request=0/0b, fielddata=21638/21.1kb, in_flight_requests=514/514b, accounting=599882168/572mb], with { bytes_wanted=3127549290 & bytes_limit=2993920409 & durability=\"PERMANENT\" }"}
An internal server error occurred
Version: 7.6.0
Build: 29000
Error: Internal Server Error
    at fetchResponse$ (http://192.168.240.8:5601/bundles/commons.bundle.js:3:2991052)
    at s (http://192.168.240.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:338:774546)
    at Generator._invoke (http://192.168.240.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:338:774299)
    at Generator.forEach.e.<computed> [as next] (http://192.168.240.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:338:774903)
    at s (http://192.168.240.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:338:774546)
    at t (http://192.168.240.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:338:775041)
    at http://192.168.240.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:338:775191
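For reference, the breaker state can be checked on the Elasticsearch node itself; a rough sketch, assuming Elasticsearch is listening on localhost:9200 with no authentication:

# Configured limit and current estimate for each circuit breaker, including the parent breaker tripping here
curl -s 'http://localhost:9200/_nodes/stats/breaker?pretty'

# Heap usage per node; in 7.x the parent breaker limit defaults to 95% of the heap
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.current,heap.percent,heap.max'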

jvm.options

## GC configuration
8-13:-XX:+UseConcMarkSweepGC
8-13:-XX:CMSInitiatingOccupancyFraction=75
8-13:-XX:+UseCMSInitiatingOccupancyOnly

## G1GC Configuration
14-:-XX:+UseG1GC
14-:-XX:G1ReservePercent=25
14-:-XX:InitiatingHeapOccupancyPercent=30

## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}

## heap dumps

-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=/var/lib/elasticsearch

# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log

## JDK 8 GC logging
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:/var/log/elasticsearch/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m

# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m

I have tried 4 GB of heap space.
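A sketch of how that heap change can be pinned and verified, assuming the default RPM install paths and the same localhost:9200 endpoint:

# jvm.options should carry matching min/max heap lines, e.g. -Xms4g and -Xmx4g
grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options

# Restart and confirm the heap the node actually started with
sudo systemctl restart elasticsearch
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.max,ram.max'

Note that with 8 GB of RAM on the box, 4 GB of heap is already at the usual guideline of giving the Elasticsearch heap no more than half of physical memory.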

Welcome to our community! :smiley:

Can you please edit your post and format your code/logs/config using the </> button, or markdown-style backticks? It helps to make things easy to read, which helps us help you.

Changes made.

That's unusual, what sort of request are you making when you get that?

I get the error when going to the Kibana home page or to Management > Reporting. If I wait long enough, the pages will refresh and come up.

{"statusCode":500,"error":"Internal Server Error","message":"[parent] Data too large, data for [<http_request>] would be [3111479034/2.8gb], which is larger than the limit of [2993920409/2.7gb], real usage: [3111478520/2.8gb], new bytes reserved: [514/514b], usages [request=0/0b, fielddata=22647/22.1kb, in_flight_requests=514/514b, accounting=607730233/579.5mb]: [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [3111479034/2.8gb], which is larger than the limit of [2993920409/2.7gb], real usage: [3111478520/2.8gb], new bytes reserved: [514/514b], usages [request=0/0b, fielddata=22647/22.1kb, in_flight_requests=514/514b, accounting=607730233/579.5mb], with { bytes_wanted=3111479034 & bytes_limit=2993920409 & durability=\"PERMANENT\" }"}

Often I am just running daily and weekly reports for Auditbeat, Filebeat, or Winlogbeat with searches like this:

event.outcome  : success and event.action:user_login or event.outcome  : success and event.action:"Started-Session"

It sometimes gives the following error:

SyntaxError: Unexpected token u in JSON at position 0
    at JSON.parse (<anonymous>)
    at http://10.134.90.8:5601/bundles/commons.bundle.js:3:3378900
    at http://10.134.90.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:368:94842
    at http://10.134.90.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:368:94980
    at u.$digest (http://10.134.90.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:368:100155)
    at http://10.134.90.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:368:102127
    at Yo.completeTask (http://10.134.90.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:368:122692)
    at http://10.134.90.8:5601/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:368:34257

Generating a CSV will often also give the error "Reporting Error: Not Found".

Today it has been failing more consistently.

It definitely seems to be a resource issue. On the ELK server I turned off Auditbeat, Filebeat, and Logstash, and Kibana reporting then worked perfectly fine.

That leads me to ask the following questions:

  1. If we turn off Logstash, are we losing logs?
  2. For the 40 Auditbeat and Filebeat agents, I am using the default settings for how often they send logs. Is there a better setting I should be using?
  3. Is version 7.8 more resource-friendly?

The issue is back even with Logstash turned off. I am getting this error:

{"statusCode":500,"error":"Internal Server Error","message":"[parent] Data too large, data for [<http_request>] would be [3101900416/2.8gb], which is larger than the limit of [2993920409/2.7gb], real usage: [3101900416/2.8gb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=22866/22.3kb, in_flight_requests=0/0b, accounting=613438772/585mb]: [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [3101900416/2.8gb], which is larger than the limit of [2993920409/2.7gb], real usage: [3101900416/2.8gb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=22866/22.3kb, in_flight_requests=0/0b, accounting=613438772/585mb], with { bytes_wanted=3101900416 & bytes_limit=2993920409 & durability=\"PERMANENT\" }"}

When just going to Kibana's home page

http://127.0.0.1:5601/app/kibana

When I run top, the Elasticsearch VIRT column is huge: 238 GB. How can I troubleshoot why Elasticsearch is using so much memory? RES is 3.5 GB and there is only 8 GB of physical memory.
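As far as I can tell, the huge VIRT figure mostly reflects memory-mapped index files rather than allocated RAM, so RES and the JVM heap are the numbers that matter; a rough sketch of where to look, under the same localhost:9200 no-auth assumption as above:

# JVM heap pools/GC and OS-level memory for the node
curl -s 'http://localhost:9200/_nodes/stats/jvm,os?pretty'

# Memory held on-heap by Lucene segments, per index
curl -s 'http://localhost:9200/_cat/segments?v&h=index,segment,size,size.memory'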

