I also discovered this in syslog. I'm not yet sure what it means, but I'm sharing it in case it confirms someone's suspicions about the source of my problem:
syslog.1:Mar 12 07:13:19 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T07:13:19Z","tags":["status","plugin:logstash@6.6.1","info"],"pid":1044,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
syslog.1:Mar 12 07:13:19 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T07:13:19Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1044,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
syslog.1:Mar 12 07:13:19 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T07:13:19Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1044,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
syslog.1:Mar 12 07:13:35 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T07:13:35Z","tags":["status","plugin:logstash@6.6.1","info"],"pid":1044,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"No Living connections"}
syslog.1:Mar 12 09:36:54 ip-172-30-0-162 systemd[1]: Started logstash.
syslog.1:Mar 12 09:41:29 ip-172-30-0-162 systemd[1]: Stopping logstash...
syslog.1:Mar 12 09:42:33 ip-172-30-0-162 systemd[1]: logstash.service: Main process exited, code=exited, status=143/n/a
syslog.1:Mar 12 09:42:33 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T09:42:31Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1044,"state":"red","message":"Status changed from green to red - Request Timeout after 30000ms","prevState":"green","prevMsg":"Ready"}
syslog.1:Mar 12 09:42:33 ip-172-30-0-162 systemd[1]: logstash.service: Failed with result 'exit-code'.
syslog.1:Mar 12 09:42:33 ip-172-30-0-162 systemd[1]: Stopped logstash.
syslog.1:Mar 12 09:42:33 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T09:42:32Z","tags":["status","plugin:logstash@6.6.1","info"],"pid":1044,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 30000ms"}
syslog.1:Mar 12 10:47:15 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T10:47:13Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1044,"state":"red","message":"Status changed from green to red - Request Timeout after 3000ms","prevState":"green","prevMsg":"Ready"}
syslog.1:Mar 12 10:47:15 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T10:47:14Z","tags":["status","plugin:logstash@6.6.1","info"],"pid":1044,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
syslog.1:Mar 12 10:48:06 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T10:47:51Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1044,"state":"red","message":"Status changed from green to red - Request Timeout after 3000ms","prevState":"green","prevMsg":"Ready"}
syslog.1:Mar 12 10:49:22 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T10:49:21Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1044,"state":"red","message":"Status changed from red to red - Request Timeout after 30000ms","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
syslog.1:Mar 12 10:50:00 ip-172-30-0-162 kibana[1044]: {"type":"log","@timestamp":"2019-03-12T10:49:25Z","tags":["status","plugin:logstash@6.6.1","info"],"pid":1044,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Request Timeout after 30000ms"}
syslog.1:Mar 12 10:55:29 ip-172-30-0-162 kibana[1023]: {"type":"log","@timestamp":"2019-03-12T10:55:29Z","tags":["status","plugin:logstash@6.6.1","info"],"pid":1023,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
syslog.1:Mar 12 10:55:30 ip-172-30-0-162 kibana[1023]: {"type":"log","@timestamp":"2019-03-12T10:55:30Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1023,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
syslog.1:Mar 12 10:55:30 ip-172-30-0-162 kibana[1023]: {"type":"log","@timestamp":"2019-03-12T10:55:30Z","tags":["status","plugin:logstash@6.6.1","error"],"pid":1023,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
syslog.1:Mar 12 10:55:49 ip-172-30-0-162 kibana[1023]: {"type":"log","@timestamp":"2019-03-12T10:55:49Z","tags":["status","plugin:logstash@6.6.1","info"],"pid":1023,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"No Living connections"}
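One detail from the middle of that log: the status=143 exit is just 128 + 15, i.e. the Logstash JVM received SIGTERM from the "Stopping logstash..." right above it, so that line on its own is probably a shutdown rather than a crash. To get more context around those red/green flaps, the full unit log can be pulled from the journal (a minimal sketch; adjust the time window as needed):

ubuntu@ip-172-30-0-162:~$ journalctl -u logstash --since "2019-03-12 09:30" --until "2019-03-12 11:00"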
All I know is that the server runs perfectly until I start logstash, and then CPU utilization goes to near 100% (according to top). But this is weird: top doesn't show logstash itself consuming a lot of resources. I see some strange "shim" processes and other random things, but not logstash. Yet the high utilization only occurs after I start logstash.
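Since Logstash runs on the JVM, I'd expect it to show up in top as a java process rather than as "logstash", which may be why I don't see it. To check whether it's actually the one eating CPU, something like this should work (a sketch; pgrep -f matches against the full command line, so it should catch the java process launched for Logstash):

ubuntu@ip-172-30-0-162:~$ top -H -p "$(pgrep -d, -f logstash)"
ubuntu@ip-172-30-0-162:~$ ps -eo pid,comm,%cpu,args --sort=-%cpu | head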
Also, this might be relevant to the syslog messages:
ubuntu@ip-172-30-0-162:~$ curl http://127.0.0.1:9200
{
  "name" : "3sHDhp8",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "_RRhXzMkQRi2cl9ree9S5Q",
  "version" : {
    "number" : "6.6.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "1fd8f69",
    "build_date" : "2019-02-13T17:10:04.160291Z",
    "build_snapshot" : false,
    "lucene_version" : "7.6.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
This tells me Elasticsearch is actually listening on port 9200.
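To rule out the cluster itself being unhealthy (rather than unreachable), the standard _cluster/health endpoint can be checked too; on a single node I'd expect status to come back green or yellow:

ubuntu@ip-172-30-0-162:~$ curl 'http://127.0.0.1:9200/_cluster/health?pretty'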