Unable to start Kibana (7.4) server at boot on CentOS 7.7

I disabled SELinux in an effort to get Kibana started. It worked initially, but now the kibana service fails to start at boot. I checked /var/log/kibana/kibana.stderr and found the following entry:

"FATAL Error: Port 5601 is already in use. Another instance of Kibana may be running!"

Here are the logs from /var/log/kibana/kibana.stdout:

{"type":"log","@timestamp":"2019-10-09T16:20:33Z","tags":["fatal","root"],"pid":2191,"message":"Error: Port 5601 is already in use. Another instance of Kibana may be running!\n at Root.shutdown (/usr/share/kibana/src/core/server/root/index.js:67:18)\n at Root.setup (/usr/share/kibana/src/core/server/root/index.js:46:18)\n at process._tickCallback (internal/process/next_tick.js:68:7)"}
{"type":"log","@timestamp":"2019-10-09T16:38:16Z","tags":["fatal","root"],"pid":2100,"message":"Error: Port 5601 is already in use. Another instance of Kibana may be running!\n at Root.shutdown (/usr/share/kibana/src/core/server/root/index.js:67:18)\n at Root.setup (/usr/share/kibana/src/core/server/root/index.js:46:18)\n at process._tickCallback (internal/process/next_tick.js:68:7)"}
{"type":"log","@timestamp":"2019-10-09T17:31:44Z","tags":["fatal","root"],"pid":2299,"message":"Error: Port 5601 is already in use. Another instance of Kibana may be running!\n at Root.shutdown (/usr/share/kibana/src/core/server/root/index.js:67:18)\n at Root.setup (/usr/share/kibana/src/core/server/root/index.js:46:18)\n at process._tickCallback (internal/process/next_tick.js:68:7)"}
{"type":"log","@timestamp":"2019-10-10T15:38:14Z","tags":["info","plugins-system"],"pid":3079,"message":"Setting up [4] plugins: [security,translations,inspector,data]"}
{"type":"log","@timestamp":"2019-10-10T15:38:14Z","tags":["info","plugins","security"],"pid":3079,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2019-10-10T15:38:14Z","tags":["warning","plugins","security","config"],"pid":3079,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml"}
{"type":"log","@timestamp":"2019-10-10T15:38:14Z","tags":["warning","plugins","security","config"],"pid":3079,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
{"type":"log","@timestamp":"2019-10-10T15:38:14Z","tags":["info","plugins","translations"],"pid":3079,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2019-10-10T15:38:14Z","tags":["info","plugins","data"],"pid":3079,"message":"Setting up plugin"}
{"type":"log","@timestamp":"2019-10-10T15:38:14Z","tags":["info","plugins-system"],"pid":3079,"message":"Starting [3] plugins: [security,translations,data]"}
{"type":"log","@timestamp":"2019-10-10T15:38:21Z","tags":["plugin","warning"],"pid":3079,"path":"/usr/share/kibana/src/legacy/core_plugins/metric_vis","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/metric_vis"}
{"type":"log","@timestamp":"2019-10-10T15:38:21Z","tags":["plugin","warning"],"pid":3079,"path":"/usr/share/kibana/src/legacy/core_plugins/table_vis","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/table_vis"}
{"type":"log","@timestamp":"2019-10-10T15:38:21Z","tags":["plugin","warning"],"pid":3079,"path":"/usr/share/kibana/src/legacy/core_plugins/tagcloud","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/tagcloud"}
{"type":"log","@timestamp":"2019-10-10T15:38:21Z","tags":["plugin","warning"],"pid":3079,"path":"/usr/share/kibana/src/legacy/core_plugins/vega","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/vega"}
{"type":"log","@timestamp":"2019-10-10T15:38:23Z","tags":["status","plugin:kibana@7.4.0","info"],"pid":3079,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-10-10T15:38:23Z","tags":["status","plugin:elasticsearch@7.4.0","info"],"pid":3079,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-10-10T15:38:23Z","tags":["status","plugin:xpack_main@7.4.0","info"],"pid":3079,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-10-10T15:38:23Z","tags":["status","plugin:telemetry@7.4.0","info"],"pid":3079,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-10-10T15:38:23Z","tags":["status","plugin:graph@7.4.0","info"],"pid":3079,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-10-10T15:38:23Z","tags":["status","plugin:monitoring@7.4.0","info"],"pid":3079,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-10-10T15:38:23Z","tags":["status","plugin:spaces@7.4.0","info"],"pid":3079,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-10-10T15:38:23Z","tags":["status","plugin:security@7.4.0","info"],"pid":3079,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-10-10T15:38:23Z","tags":["status","plugin:searchprofiler@7.4.0","info"],"pid":3079,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-10-10T15:38:23Z","tags":["status","plugin:ml@7.4.0","info"],"pid":3079,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2019-10-10T15:38:23Z","tags":["status","plugin:tilemap@7.4.0","info"],"pid":3079,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}

I checked for processes bound to the port and found only one. I ran lsof -i :5601 to get this info. Output below:

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
node 886 kibana 30u IPv4 20823 0t0 TCP localhost:esmagent (LISTEN)
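Side note: the NAME column reads localhost:esmagent because lsof maps port numbers to the names in /etc/services, where 5601 is registered as esmagent, so this is still port 5601. To double-check with numeric ports (standard lsof and getent options, as far as I can tell):

getent services esmagent
sudo lsof -nP -i :5601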

I am attempting to stand up these services and Beats for use with Elastic SIEM, and this is slowing me down big time. Any help fixing this nagging issue is appreciated.

["fatal","root"],"pid":2299,"message":"Error: Port 5601 is already in use. Another instance of Kibana may be running!\n at Root.shutdown (/usr/share/kibana/src/core/server/root/index.js:67:18)\n at

Stop that instance or kill the process already running on that port and start Kibana again.
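Something along these lines should do it, assuming lsof is installed (-t prints only the PID):

sudo systemctl stop kibana
sudo kill "$(sudo lsof -t -i :5601)"
sudo systemctl start kibana

Only reach for kill -9 if the plain kill doesn't free the port.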

When I run the following command:

sudo kill -9 2299

No process is found with that PID. If I attempt to manually start the kibana service:

sudo systemctl start kibana.service

The service fails to start. Error from kibana.stdout below:

["fatal","root"],"pid":2924,"message":"Error: Port 5601 is already in use. Another instance of Kibana may be running!\n at Root.shutdown (/usr/share/kibana/src/core/server/root/index.js:67:18)\n at Root.setup (/usr/share/kibana/src/core/server/root/index.js:46:18)\n at process._tickCallback (internal/process/next_tick.js:68:7)"}

If I run kill -9 2924, that process isn't found either:

kill: sending signal to 2924 failed. No such process.
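My guess is that the PIDs in kibana.stdout (2299, 2924, and so on) belong to the new Kibana instances that tried to bind, failed, and exited immediately, which would explain why they are gone before I can kill them. The process actually holding the port should be the long-running listener from the lsof output (PID 886 there). To inspect whatever owns 5601 at the moment:

ps -fp "$(sudo lsof -t -i :5601)"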

Here is a list of the running processes related to Elasticsearch and Kibana:

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
kibana 889 1.3 10.5 1867936 409084 ? Ssl 08:35 1:17 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
elastic+ 1247 7.5 36.9 4058264 1432916 ? Ssl 08:35 7:08 /usr/share/elasticsearch/jdk/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch-16522193326819109235 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m -Djava.locale.providers=COMPAT -Dio.netty.allocator.type=unpooled -XX:MaxDirectMemorySize=536870912 -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -Des.bundled_jdk=true -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
elastic+ 1870 0.0 0.1 68860 5556 ? Sl 08:35 0:00 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
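That node process looks like the same one lsof showed bound to 5601 (the PID shifts across reboots; it was 886 earlier). To check whether systemd thinks it owns that process:

sudo systemctl status kibana.service -l

If the Main PID in the status output doesn't match 889, something other than the systemd unit started this copy of Kibana.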

And here is the service status list:

● auditbeat.service - Audit the activities of users and processes on your system.
Loaded: loaded (/usr/lib/systemd/system/auditbeat.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: https://www.elastic.co/products/beats/auditbeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: https://www.elastic.co/products/beats/filebeat
● heartbeat-elastic.service - Ping remote services for availability and log results to Elasticsearch or send to Logstash.
Loaded: loaded (/usr/lib/systemd/system/heartbeat-elastic.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: https://www.elastic.co/products/beats/heartbeat
kibana is not running
netconsole module not loaded
Configured devices:
lo ens192
Currently active devices:
lo ens192
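As I understand it, service --status-all on CentOS 7 just runs the status action of every script under /etc/init.d, so the "kibana is not running" line above likely comes from a legacy SysV init script consulting a stale PID file rather than from systemd. To compare that view with what is actually running (the last command lists full command lines for the kibana user's processes):

ls /etc/init.d/kibana
sudo /etc/init.d/kibana status
pgrep -a -u kibana node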

Bump. I could use some help here. I installed htop to get a better look at user and process activity. I found large numbers of entries that look similar to the following (htop lists threads as separate rows by default, which inflates the count):

1248 elasticse 20 0 4229M 1376M 43440 S 15.9 36.3 1:56.94 ├─ /usr/share/elasticsearch/jdk/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseC
895 kibana 20 0 1770M 339M 21252 S 2.8 9.0 0:24.82 ├─ /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

The output of service --status-all says kibana is not running, yet the kibana user above has a node process running with -c /etc/kibana/kibana.yml. Again, when I attempt to manually start the Kibana service, it fails to start, reporting that port 5601 is in use as previously posted.
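My working theory at this point is that Kibana is being started twice at boot, once by the systemd unit and once by something else (a leftover SysV entry, perhaps), so the second copy dies on the port while the first keeps running unmanaged. If that's right, something along these lines should straighten it out; the chkconfig line only applies if a SysV entry actually exists:

sudo systemctl stop kibana
sudo pkill -u kibana node
sudo chkconfig kibana off
sudo systemctl enable kibana
sudo systemctl start kibana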