# Unable to connect to Elasticsearch

Below are the kibana.yml and elasticsearch.yml configurations.

kibana.yml

server.host: "myip"
elasticsearch.url: "https://myip:9200"
elasticsearch.username: "kibana"
elasticsearch.password: "password"
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/elastic-ca.pem" ]

elasticsearch.yml

cluster.name: elastic-cluster
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: myip
action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: config/certs/elastic-certificates.p12

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: config/certs/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: config/certs/elastic-certificates.p12

Below is the error shown in the Elasticsearch log.

less /var/log/elasticsearch/elastic-cluster.log

[2018-04-24T18:26:30,326][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [node-1] caught exception while handling client http traffic, closing connection [id: 0x897005a0, L:0.0.0.0/0.0.0.0:9200 ! R:/ip:58318]
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f20485454502f312e310d0a486f73743a2031302e3136392e33332e3139383a393230300d0a557365722d4167656e743a20476f2d687474702d636c69656e742f312e310d0a4163636570743a206170706c69636174696f6e2f6a736f6e0d0a4163636570742d456e636f64696e673a20677a69700d0a0d0a
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.16.Final.jar:4.1.16.Final]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]
Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f20485454502f312e310d0a486f73743a2031302e3136392e33332e3139383a393230300d0a557365722d4167656e743a20476f2d687474702d636c69656e742f312e310d0a4163636570743a206170706c69636174696f6e2f6a736f6e0d0a4163636570742d456e636f64696e673a20677a69700d0a0d0a
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1106) ~[?:?]
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1162) ~[?:?]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[?:?]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[?:?]
... 15 more

Logs available in Kibana:

less /var/log/kibana/kibana.stdout

{"type":"log","@timestamp":"2018-04-24T13:06:14Z","tags":["warning","elasticsearch","admin"],"pid":9965,"message":"No living connections"}
{"type":"log","@timestamp":"2018-04-24T13:06:17Z","tags":["warning","elasticsearch","admin"],"pid":9965,"message":"Unable to revive connection: https://myip:9200/"}
{"type":"log","@timestamp":"2018-04-24T13:06:17Z","tags":["warning","elasticsearch","admin"],"pid":9965,"message":"No living connections"}
{"type":"log","@timestamp":"2018-04-24T13:06:19Z","tags":["warning","elasticsearch","admin"],"pid":9965,"message":"Unable to revive connection: https://myip:9200/"}
{"type":"log","@timestamp":"2018-04-24T13:06:19Z","tags":["warning","elasticsearch","admin"],"pid":9965,"message":"No living connections"}
{"type":"log","@timestamp":"2018-04-24T13:06:22Z","tags":["warning","elasticsearch","admin"],"pid":9965,"message":"Unable to revive connection: https://myip:9200/"}
{"type":"log","@timestamp":"2018-04-24T13:06:22Z","tags":["warning","elasticsearch","admin"],"pid":9965,"message":"No living connections"}
{"type":"log","@timestamp":"2018-04-24T13:06:24Z","tags":["warning","elasticsearch","admin"],"pid":9965,"message":"Unable to revive connection: https://myip:9200/"}
{"type":"log","@timestamp":"2018-04-24T13:06:24Z","tags":["warning","elasticsearch","admin"],"pid":9965,"message":"No living connections"}
{"type":"log","@timestamp":"2018-04-24T13:06:27Z","tags":["warning","elasticsearch","admin"],"pid":9965,"message":"Unable to revive connection: https://myip:9200/"}
{"type":"log","@timestamp":"2018-04-24T13:06:27Z","tags":["warning","elasticsearch","admin"],"pid":9965,"message":"No living connections"}

A couple of things:

  1. Your logs are not from the same time. It's really hard to correlate and see what's happening if you don't share the corresponding parts of the logs.
  2. The errors in elasticsearch.log are from a beat attempting to connect to Elasticsearch over plain HTTP. You haven't mentioned that you have a beat configured; it would have been nice if you had shared the whole setup.
    The hex string

474554202f20485454502f312e310d0a486f73743a2031302e3136392e33332e3139383a393230300d0a557365722d4167656e743a20476f2d687474702d636c69656e742f312e310d0a4163636570743a206170706c69636174696f6e2f6a736f6e0d0a4163636570742d456e636f64696e673a20677a69700d0a0d0a

that you see in the logs is the hex-encoded request from a beat to Elasticsearch, and it decodes to:

GET / HTTP/1.1
Host: 10.169.33.198:9200
User-Agent: Go-http-client/1.1
Accept: application/json
Accept-Encoding: gzip

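If you want to verify the decoding yourself, one quick way is to reverse the plain hex dump with xxd (a sketch; the xxd tool ships with vim on most Linux distributions):

    # turn the hex payload from the log back into readable text
    echo 474554202f20485454502f312e310d0a486f73743a2031302e3136392e33332e3139383a393230300d0a557365722d4167656e743a20476f2d687474702d636c69656e742f312e310d0a4163636570743a206170706c69636174696f6e2f6a736f6e0d0a4163636570742d456e636f64696e673a20677a69700d0a0d0a | xxd -r -p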
So, please stop anything else that attempts to communicate with Elasticsearch. Restart Kibana and Elasticsearch and share the logs from the same point in time when it fails, so that we can see what is going wrong.

Please find below the logs from restarting Kibana and Elasticsearch. There was a Filebeat instance running on another server, and I have stopped it.

Can you get any idea from the logs below?

Note: in the logs I have replaced my original IP with myIP.

Kibana Log

{"type":"log","@timestamp":"2018-04-25T06:17:36Z","tags":["plugin","warning"],"pid":22732,"path":"/usr/share/kibana/plugins/x-pack","message":"Skipping non-plugin directory at /usr/share/kibana/plugins/x-pack"}

{"type":"log","@timestamp":"2018-04-25T06:17:37Z","tags":["status","plugin:kibana@6.2.3","info"],"pid":22732,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-04-25T06:17:37Z","tags":["status","plugin:elasticsearch@6.2.3","info"],"pid":22732,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}

{"type":"log","@timestamp":"2018-04-25T06:17:37Z","tags":["status","plugin:timelion@6.2.3","info"],"pid":22732,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-04-25T06:17:37Z","tags":["status","plugin:console@6.2.3","info"],"pid":22732,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-04-25T06:17:37Z","tags":["status","plugin:metrics@6.2.3","info"],"pid":22732,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-04-25T06:17:37Z","tags":["listening","info"],"pid":22732,"message":"Server running at http://myIP:5601"}
{"type":"log","@timestamp":"2018-04-25T06:17:37Z","tags":["error","elasticsearch","admin"],"pid":22732,"message":"Request error, retrying\nHEAD https://myIP:9200/ => Hostname/IP doesn't match certificate's altnames: "IP: myIP is not in the cert's list: ""}
{"type":"log","@timestamp":"2018-04-25T06:17:37Z","tags":["warning","elasticsearch","admin"],"pid":22732,"message":"Unable to revive connection: https://myIP:9200/"}
{"type":"log","@timestamp":"2018-04-25T06:17:37Z","tags":["warning","elasticsearch","admin"],"pid":22732,"message":"No living connections"}
{"type":"log","@timestamp":"2018-04-25T06:17:37Z","tags":["status","plugin:elasticsearch@6.2.3","error"],"pid":22732,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at https://myIP:9200.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2018-04-25T06:17:40Z","tags":["warning","elasticsearch","admin"],"pid":22732,"message":"Unable to revive connection: https://myIP:9200/"}
{"type":"log","@timestamp":"2018-04-25T06:17:40Z","tags":["warning","elasticsearch","admin"],"pid":22732,"message":"No living connections"}
{"type":"log","@timestamp":"2018-04-25T06:17:42Z","tags":["warning","elasticsearch","admin"],"pid":22732,"message":"Unable to revive connection: https://myIP:9200/"}
{"type":"log","@timestamp":"2018-04-25T06:17:42Z","tags":["warning","elasticsearch","admin"],"pid":22732,"message":"No living connections"}
{"type":"log","@timestamp":"2018-04-25T06:17:45Z","tags":["warning","elasticsearch","admin"],"pid":22732,"message":"Unable to revive connection: https://myIP:9200/"}

Elasticsearch Log

[2018-04-25T11:52:27,898][INFO ][o.e.n.Node ] [node-1] stopping ...
[2018-04-25T11:52:27,906][INFO ][o.e.x.w.WatcherService ] [node-1] stopping watch service, reason [shutdown initiated]
[2018-04-25T11:52:27,907][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/22252] [Main.cc@168] Ml controller exiting
[2018-04-25T11:52:27,907][INFO ][o.e.x.m.j.p.NativeController] Native controller process has stopped - no new native processes can be started
[2018-04-25T11:52:28,056][INFO ][o.e.n.Node ] [node-1] stopped
[2018-04-25T11:52:28,056][INFO ][o.e.n.Node ] [node-1] closing ...
[2018-04-25T11:52:28,069][INFO ][o.e.n.Node ] [node-1] closed
[2018-04-25T11:52:55,063][INFO ][o.e.n.Node ] [node-1] initializing ...
[2018-04-25T11:52:55,114][INFO ][o.e.e.NodeEnvironment ] [node-1] using [1] data paths, mounts [[/var (/dev/mapper/vg00-var)]], net usable_space [7.3gb], net total_space [10.4gb], types [ext4]
[2018-04-25T11:52:55,115][INFO ][o.e.e.NodeEnvironment ] [node-1] heap size [990.7mb], compressed ordinary object pointers [true]
[2018-04-25T11:52:55,165][INFO ][o.e.n.Node ] [node-1] node name [node-1], node ID [oD2QqZ2kSHaGCZRs_LvWmA]
[2018-04-25T11:52:55,165][INFO ][o.e.n.Node ] [node-1] version[6.2.3], pid[23044], build[c59ff00/2018-03-13T10:06:29.741383Z], OS[Linux/3.10.0-327.18.2.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_91/25.91-b14]
[2018-04-25T11:52:55,166][INFO ][o.e.n.Node ] [node-1] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.4R5B0Tir, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:/var/log/elasticsearch/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch]
[2018-04-25T11:52:56,084][ERROR][o.e.x.c.s.SSLService ] [node-1] unsupported ciphers [[TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]] were requested but cannot be used in this JVM, however there are supported ciphers that will be used [[TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA]]. If you are trying to use ciphers with a key length greater than 128 bits on an Oracle JVM, you will need to install the unlimited strength JCE policy files.
[2018-04-25T11:52:56,104][ERROR][o.e.x.c.s.SSLService ] [node-1] unsupported ciphers [[TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]] were requested but cannot be used in this JVM, however there are supported ciphers that will be used [[TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA]]. If you are trying to use ciphers with a key length greater than 128 bits on an Oracle JVM, you will need to install the unlimited strength JCE policy files.
[2018-04-25T11:52:56,113][ERROR][o.e.x.c.s.SSLService ] [node-1] unsupported ciphers [[TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA]] were requested but cannot be used in this JVM, however there are supported ciphers that will be used [[TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA]]. If you are trying to use ciphers with a key length greater than 128 bits on an Oracle JVM, you will need to install the unlimited strength JCE policy files.
[2018-04-25T11:52:56,437][WARN ][o.e.x.w.Watcher ] the [action.auto_create_index] setting is configured to be restrictive [.security,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*]. for the next 6 months daily history indices are allowed to be created, but please make sure that any future history indices after 6 months with the pattern [.watcher-history-YYYY.MM.dd] are allowed to be created

Elasticsearch Log (continued)

[2018-04-25T11:52:56,885][INFO ][o.e.p.PluginsService ] [node-1] loaded module [aggs-matrix-stats]
[2018-04-25T11:52:56,885][INFO ][o.e.p.PluginsService ] [node-1] loaded module [analysis-common]
[2018-04-25T11:52:56,885][INFO ][o.e.p.PluginsService ] [node-1] loaded module [ingest-common]
[2018-04-25T11:52:56,885][INFO ][o.e.p.PluginsService ] [node-1] loaded module [lang-expression]
[2018-04-25T11:52:56,885][INFO ][o.e.p.PluginsService ] [node-1] loaded module [lang-mustache]
[2018-04-25T11:52:56,885][INFO ][o.e.p.PluginsService ] [node-1] loaded module [lang-painless]
[2018-04-25T11:52:56,885][INFO ][o.e.p.PluginsService ] [node-1] loaded module [mapper-extras]
[2018-04-25T11:52:56,885][INFO ][o.e.p.PluginsService ] [node-1] loaded module [parent-join]
[2018-04-25T11:52:56,885][INFO ][o.e.p.PluginsService ] [node-1] loaded module [percolator]
[2018-04-25T11:52:56,885][INFO ][o.e.p.PluginsService ] [node-1] loaded module [rank-eval]
[2018-04-25T11:52:56,885][INFO ][o.e.p.PluginsService ] [node-1] loaded module [reindex]
[2018-04-25T11:52:56,885][INFO ][o.e.p.PluginsService ] [node-1] loaded module [repository-url]
[2018-04-25T11:52:56,885][INFO ][o.e.p.PluginsService ] [node-1] loaded module [transport-netty4]
[2018-04-25T11:52:56,885][INFO ][o.e.p.PluginsService ] [node-1] loaded module [tribe]
[2018-04-25T11:52:56,886][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [x-pack-core]
[2018-04-25T11:52:56,886][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [x-pack-deprecation]
[2018-04-25T11:52:56,886][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [x-pack-graph]
[2018-04-25T11:52:56,886][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [x-pack-logstash]
[2018-04-25T11:52:56,886][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [x-pack-ml]
[2018-04-25T11:52:56,886][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [x-pack-monitoring]
[2018-04-25T11:52:56,886][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [x-pack-security]
[2018-04-25T11:52:56,886][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [x-pack-upgrade]
[2018-04-25T11:52:56,886][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [x-pack-watcher]
[2018-04-25T11:52:59,012][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/23118] [Main.cc@128] controller (64 bit): Version 6.2.3 (Build e43a9a2b267ef4) Copyright (c) 2018 Elasticsearch BV
[2018-04-25T11:53:00,040][DEBUG][o.e.a.ActionModule ] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2018-04-25T11:53:00,392][INFO ][o.e.d.DiscoveryModule ] [node-1] using discovery type [zen]
[2018-04-25T11:53:01,065][INFO ][o.e.n.Node ] [node-1] initialized
[2018-04-25T11:53:01,065][INFO ][o.e.n.Node ] [node-1] starting ...
[2018-04-25T11:53:01,198][INFO ][o.e.t.TransportService ] [node-1] publish_address {myIP:9300}, bound_addresses {myIP:9300}
[2018-04-25T11:53:01,239][INFO ][o.e.b.BootstrapChecks ] [node-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-04-25T11:53:04,285][INFO ][o.e.c.s.MasterService ] [node-1] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {node-1}{oD2QqZ2kSHaGCZRs_LvWmA}{Q9fzAeIJREGM2QQerJWgFg}{myIP}{myIP:9300}{ml.machine_memory=8203489280, ml.max_open_jobs=20, ml.enabled=true}
[2018-04-25T11:53:04,289][INFO ][o.e.c.s.ClusterApplierService] [node-1] new_master {node-1}{oD2QqZ2kSHaGCZRs_LvWmA}{Q9fzAeIJREGM2QQerJWgFg}{myIP}{myIP:9300}{ml.machine_memory=8203489280, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {node-1}{oD2QqZ2kSHaGCZRs_LvWmA}{Q9fzAeIJREGM2QQerJWgFg}{myIP}{myIP:9300}{ml.machine_memory=8203489280, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-04-25T11:53:04,302][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [node-1] publish_address {myIP:9200}, bound_addresses {myIP:9200}
[2018-04-25T11:53:04,303][INFO ][o.e.n.Node ] [node-1] started
[2018-04-25T11:53:05,134][INFO ][o.e.l.LicenseService ] [node-1] license [40cb1277-ba47-49c6-94b8-bcad5d4b2410] mode [trial] - valid
[2018-04-25T11:53:05,136][WARN ][o.e.l.LicenseService ] [node-1]

License [will expire] on [Sunday, May 06, 2018]. If you have a new license, please update it.
Otherwise, please reach out to your support contact.

Commercial plugins operate with reduced functionality on license expiration:
- security
  - Cluster health, cluster stats and indices stats operations are blocked
  - All data operations (read and write) continue to work
- watcher
  - PUT / GET watch APIs are disabled, DELETE watch API continues to work
  - Watches execute and write to the history
  - The actions of the watches don't execute
- monitoring
  - The agent will stop collecting cluster and indices metrics
  - The agent will stop automatically cleaning indices older than [xpack.monitoring.history.duration]
- graph
  - Graph explore APIs are disabled
- ml
  - Machine learning APIs are disabled
- logstash
  - Logstash will continue to poll centrally-managed pipelines
- deprecation
  - Deprecation APIs are disabled
- upgrade
  - Upgrade API is disabled

[2018-04-25T11:53:05,145][INFO ][o.e.g.GatewayService ] [node-1] recovered [32] indices into cluster_state
[2018-04-25T11:53:06,905][INFO ][o.e.c.r.a.AllocationService] [node-1] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.watcher-history-7-2018.04.06][0]] ...])

  1. Please use the </> button to format your posts; it is really, really difficult to read through them as they are now.
  2. Your logs are again from different times. You restarted Elasticsearch at 2018-04-25T11:52:27,898, yet you share logs from Kibana at 2018-04-25T06:17:36Z.

As you can see in your kibana.log there is the following line

["error","elasticsearch","admin"],"pid":22732,"message":"Request error, retrying\nHEAD https://myIP:9200/ => Hostname/IP doesn't match certificate's altnames: "IP: myIP is not in the cert's list: ""}

This means that you didn't set the IP in the SubjectAlternativeName of the certificates when creating them with the certutil utility, so Kibana can't verify the certificate that Elasticsearch presents to it over HTTPS. You can solve this in two ways:

a) Recreate the certificate with certutil using the instructions in the docs and passing the correct parameters ( see --ip ). You would then need to re-export the elastic-ca.pem file from the p12 keystore.

or

b) set

elasticsearch.ssl.verificationMode: certificate

in kibana.yml so that Kibana will not attempt to perform hostname validation for the certificate that Elasticsearch is presenting to it.
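For reference, a minimal sketch of the relevant kibana.yml section with that option added (host, credentials and CA path taken from your posted config):

    elasticsearch.url: "https://myip:9200"
    elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/elastic-ca.pem" ]
    # verify the certificate chain against the CA, but skip hostname/IP matching
    elasticsearch.ssl.verificationMode: certificate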

I have set

elasticsearch.ssl.verificationMode: certificate

and the issue is resolved. Thank you very much!

But I am not able to see the X-Pack features in Kibana.

  • Which user did you log in to Kibana as?
  • What license do you have? (Go to Dev Tools and run GET _xpack/license from the console, or use curl as sketched below.)
  • Can you verify that you haven't disabled features in your kibana.yml?
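If you prefer the command line over Dev Tools, the same check can be done with curl (a sketch; the user, the password prompt and the CA path assume the setup from earlier in this thread):

    # query the current license over HTTPS, validating against the exported CA
    curl --cacert /etc/kibana/elastic-ca.pem -u elastic 'https://myip:9200/_xpack/license'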

  1. I have logged in as elastic.

  2. Trial license; see the output below.

  3. What extra features do I need to enable in Kibana?

I didn't say you have to enable something; they should be enabled by default. What I asked is whether you have disabled them explicitly in your config files.

For example, if you have something like xpack.ml.enabled: false in your kibana.yml, you'd need to remove it.

Do you have any settings of this type in your config files?
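One quick way to check (a sketch; the paths assume the default package layout) is to grep both config files for explicit feature toggles:

    # list any x-pack feature switches set in either config file
    grep -E '^xpack\..*\.enabled' /etc/kibana/kibana.yml /etc/elasticsearch/elasticsearch.yml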

I don't have any such lines in my kibana.yml.

Below is my kibana.yml:

kibana.yml
server.host: "myip"
elasticsearch.url: "https://myip:9200"
elasticsearch.username: "kibana"
elasticsearch.password: "password"
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/elastic-ca.pem" ]

This looks like there is some issue with your Kibana X-Pack installation.

You have either changed the permissions of the /usr/share/kibana/plugins/x-pack directory, or you're running Kibana as a user who doesn't have access to that directory.

What is the output of

ps au | grep kibana

and

ls -la /usr/share/kibana/plugins/x-pack

?

You can always remove x-pack from kibana

/usr/share/kibana/bin/kibana-plugin remove x-pack

and reinstall it as described here
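For completeness, the full cycle would look roughly like this (a sketch; the install command is the standard kibana-plugin invocation for 6.2, and both commands need write access to the plugins directory):

    # remove the broken x-pack installation, then reinstall it
    /usr/share/kibana/bin/kibana-plugin remove x-pack
    /usr/share/kibana/bin/kibana-plugin install x-pack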

Please find the output below.

elasticsearch]# ps au | grep kibana
kibana 5227 0.8 1.3 1146168 108096 pts/2 Sl 18:01 0:13 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
root 7559 0.0 0.0 112648 972 pts/2 S+ 18:25 0:00 grep --color=auto kibana

elasticsearch]# ls -la /usr/share/kibana/plugins/x-pack
total 104
drwx------ 7 root root 4096 Apr 10 18:16 .
drwxrwxr-x 4 kibana kibana 4096 Apr 11 12:44 ..
drwx------ 2 root root 4096 Apr 10 18:15 common
-rw------- 1 root root 1122 Apr 10 18:15 index.js
-rw------- 1 root root 49465 Apr 10 18:15 LICENSE.txt
drwx------ 325 root root 12288 Apr 10 18:16 node_modules
-rw------- 1 root root 7 Apr 10 18:15 .node-version
-rw------- 1 root root 247 Apr 10 18:15 NOTICE.txt
-rw------- 1 root root 2448 Apr 10 18:16 package.json
drwx------ 17 root root 4096 Apr 10 18:16 plugins
drwx------ 3 root root 4096 Apr 10 18:16 server
drwx------ 2 root root 4096 Apr 10 18:16 webpackShims

So, as you can see here, the files under this directory are owned by user root and group root, and only the root user has permissions over them (see for example -rw------- 1 root root 1122 Apr 10 18:15 index.js, which means that only the owner, root, can read and write this file).

I am not sure how you could have ended up in this situation, so please remove x-pack from Kibana and reinstall it using the instructions I have provided above.

Wow, that works! Thank you very much. Now I have to work on the index.

Hi,
Could you please help me with one more issue which I am facing with Filebeat?

I am getting the below issue in the Filebeat log:

Failed to connect: Get https://myIP:9200: x509: cannot validate certificate for myIP because it doesn't contain any IP SANs

We already solved this issue for Kibana a few posts up.

You just have to do the same in your Filebeat configuration. You can read the documentation here.

As you can see there, Filebeat doesn't support certificate as a verification mode, so you would either need to do a) from my quoted answer, or (not advised for anything other than testing) set the verification mode to none. Note that this will make the communication between Filebeat and Elasticsearch susceptible to MITM attacks.
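For testing only, that would look like the following in filebeat.yml (a sketch; the host is taken from your log, and remember this disables certificate validation entirely):

    output.elasticsearch:
      hosts: ["https://myIP:9200"]
      # testing only: disables certificate validation (exposes the connection to MITM)
      ssl.verification_mode: none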

I have tried option b) but it does not work.

I am going to try option a).

Correct me if I am wrong:

  1. I will create the certificate in Elasticsearch using certutil (with --ip).
  2. Export it into PEM.
  3. And copy the same certificate to the Filebeat server.

Once more, if you don't

  • describe explicitly what you've done and share your config,
  • tell us exactly what went wrong, and show your logs and error messages,

it gets really, really hard to try and help you with your issues.

It's not that simple. If you regenerate the certificates that Elasticsearch uses for TLS with certutil, this will regenerate your CA certificate as well, and as such you would need to reconfigure Kibana to trust the new certificate too. You might still have the CA key/certificate from the first time, or you might not, but you haven't shared what you did to generate the certificates the first time, so again, it's really hard to answer this question explicitly.

I would go about it like this :

  1. Regenerate the certificate bundle and the new CA with certutil

     bin/x-pack/certutil cert --ip <your_ip_here>
    
  2. Assuming you used the default name elastic-certificates.p12, go ahead and copy this over the old elastic-certificates.p12 you had in config/certs/

  3. Export the CA file in pem format to be used in Kibana and Filebeat:

    openssl pkcs12 -in elastic-certificates.p12 -cacerts -nokeys -out elastic-ca.pem
    
  4. Copy elastic-ca.pem to /etc/kibana and to where your Filebeat runs

  5. In your filebeat config set

    output.elasticsearch.ssl.certificate_authorities: ["/your/path/to/elastic-ca.pem"]
    
  6. In kibana.yml remove the following:

    elasticsearch.ssl.verificationMode: certificate
    

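As a final check before touching Kibana or Filebeat (a sketch, assuming the default elastic superuser and the paths above), confirm that the new CA validates the HTTPS endpoint:

    # should return the cluster banner JSON if the new certificates are in place
    curl --cacert /etc/kibana/elastic-ca.pem -u elastic 'https://<your_ip_here>:9200'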
If you still get any errors, please share with us all your configuration files, the error messages and the logs from the components that give these errors.

