Stopping Kibana on RHEL results in core dump generation

On RHEL, when we try to stop Kibana, it results in core dump generation.
OS version: RHEL 7.8
Kibana version: 7.7.1
Checking the log shows that the running node process received a SIGQUIT signal.
This is happening specifically on RHEL.
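For context, SIGQUIT (signal 3) is one of the few signals whose default action is to terminate the process *and* dump core, so a core file on shutdown does not necessarily mean the process crashed. A quick check from any shell:

```shell
# SIGQUIT is signal 3; its default disposition is "terminate + core dump"
kill -l QUIT        # prints the signal number: 3
ulimit -c           # core-file size limit; anything other than 0 allows dumps
```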

Can you provide the Kibana logs?
What is happening when it dumps?

I can't provide the whole log file, but here is an excerpt of the log entries:
{"type":"log","@timestamp":"2020-05-12T07:14:18Z","tags":["warning","config","deprecation"],"pid":119501,"message":"You should set server.basePath along with server.rewriteBasePath. Starting in 7.0, Kibana will expect that all requests start with server.basePath rather than expecting you to rewrite the requests in your reverse proxy. Set server.rewriteBasePath to false to preserve the current behavior and silence this warning."}
{"type":"log","@timestamp":"2020-05-12T07:14:19Z","tags":["info","optimize"],"pid":119501,"message":"Optimizing and caching bundles for kibana, stateSessionStorageRedirect, status_page and timelion. This may take a few minutes"}
Browserslist: caniuse-lite is outdated. Please run next command npm update caniuse-lite browserslist
Browserslist: caniuse-lite is outdated. Please run next command npm update caniuse-lite browserslist
Browserslist: caniuse-lite is outdated. Please run next command npm update caniuse-lite browserslist
Browserslist: caniuse-lite is outdated. Please run next command npm update caniuse-lite browserslist
{"type":"log","@timestamp":"2020-05-12T07:14:39Z","tags":["info","optimize"],"pid":119501,"message":"Optimization of bundles for kibana, stateSessionStorageRedirect, status_page and timelion complete in 20.71 seconds"}
{"type":"log","@timestamp":"2020-05-12T07:14:39Z","tags":["status","plugin:apm_oss@7.2.0","info"],"pid":119501,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-05-12T07:14:40Z","tags":["status","plugin:kibana@7.2.0","info"],"pid":119501,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-05-12T07:14:40Z","tags":["status","plugin:elasticsearch@7.2.0","info"],"pid":119501,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-05-12T07:14:40Z","tags":["status","plugin:data@7.2.0","info"],"pid":119501,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-05-12T07:14:40Z","tags":["status","plugin:interpreter@7.2.0","info"],"pid":119501,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-05-12T07:14:40Z","tags":["status","plugin:metrics@7.2.0","info"],"pid":119501,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-05-12T07:14:40Z","tags":["status","plugin:tile_map@7.2.0","info"],"pid":119501,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-05-12T07:14:40Z","tags":["status","plugin:timelion@7.2.0","info"],"pid":119501,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-05-12T07:14:40Z","tags":["status","plugin:ui_metric@7.2.0","info"],"pid":119501,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2020-05-12T07:14:40Z","tags":["status","plugin:elasticsearch@7.2.0","info"],"pid":119501,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2020-05-12T07:14:40Z","tags":["info","migrations"],"pid":119501,"message":"Creating index ***********_dashboard_2."}
{"type":"log","@timestamp":"2020-05-12T07:14:40Z","tags":["info","migrations"],"pid":119501,"message":"Reindexing ***********_dashboard to ***********_dashboard_1"}
{"type":"log","@timestamp":"2020-05-12T07:14:41Z","tags":["info","migrations"],"pid":119501,"message":"Migrating ***********_dashboard_1 saved objects to ***********_dashboard_2"}
{"type":"log","@timestamp":"2020-05-12T07:14:41Z","tags":["info","migrations"],"pid":119501,"message":"Pointing alias ***********_dashboard to ***********_dashboard_2."}
{"type":"log","@timestamp":"2020-05-12T07:14:41Z","tags":["info","migrations"],"pid":119501,"message":"Finished in 1493ms."}
{"type":"log","@timestamp":"2020-05-12T07:14:41Z","tags":["listening","info"],"pid":119501,"message":"Server running at http://0.0.0.0:9405"}

While stopping Kibana, a core dump was generated in the /bin folder with the name core119501. When I analyzed the core dump, I found that the node process had received SIGQUIT.

I couldn't attach the core dump file; if you need it, I will share it.

The kibana.yml file looks like this:

server.port: 9405
server.host: "0.0.0.0"
server.basePath: "/xxxxxxxxxxxxx/dashboardproxy"
elasticsearch.hosts: "http://localhost:9240"
kibana.index: "xxxxxxxxxxxx_dashboard"
pid.file: "kibana.pid"
console.enabled: false
logging.quiet: true
logging.verbose: false
logging.silent: false
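Nothing in this kibana.yml would cause a core dump by itself. If Kibana is managed by systemd (an assumption; the post doesn't say how it is started), one way to suppress the dumps is a drop-in unit override that keeps the stop signal at SIGTERM and disables core files for the service; the drop-in path and unit name below are illustrative:

```ini
# /etc/systemd/system/kibana.service.d/override.conf  (hypothetical drop-in path)
[Service]
KillSignal=SIGTERM   ; systemd's default stop signal; avoids SIGQUIT on stop
LimitCORE=0          ; disable core files for this service
```

After adding the drop-in, run `systemctl daemon-reload` and restart the service for it to take effect.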

Hi @warkolm, any pointers on what went wrong and what caused this problem?

After analyzing one of those core dumps with gdb, the stack shows:

#0 0x00007f04115a8ee7 in madvise () from /lib64/libc.so.6
#1 0x00000000016c8da0 in v8::base::OS::SetPermissions(void*, unsigned long, v8::base::OS::MemoryPermission) ()
#2 0x0000000000ace3ce in v8::internal::SetPermissions(void*, unsigned long, v8::PageAllocator::Permission) ()
#3 0x0000000000f15027 in v8::internal::MemoryChunk::SetReadAndWritable() ()
#4 0x0000000000ebb7d4 in v8::internal::Heap::InvalidateCodeDeoptimizationData(v8::internal::Code*) ()
#5 0x0000000000e2ce5c in v8::internal::Deoptimizer::DeoptimizeMarkedCodeForContext(v8::internal::Context*) ()
#6 0x0000000000e2d308 in v8::internal::Deoptimizer::DeoptimizeMarkedCode(v8::internal::Isolate*) ()

Any updates on this, team? We are seeing this in multiple RHEL environments. Is it specific to the OS?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.