Elasticsearch and Kibana (6.3.2) are installed, and X-Pack is enabled (ELK and X-Pack on 10.100.234.241).
Everything was working well. (The username and password are entered in kibana.yml, and authentication succeeds: elastic:changeme.)
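For reference, a minimal kibana.yml for this setup might look roughly like the following (a sketch; the host and the changeme password are taken from above, everything else is an assumption):

# kibana.yml (sketch)
server.host: "0.0.0.0"                            # assumption: Kibana listens on all interfaces
elasticsearch.url: "http://10.100.234.241:9200"
elasticsearch.username: "elastic"
elasticsearch.password: "changeme"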
To create a cluster, I added two lines to elasticsearch.yml. This ES is the master node:
node.master: true
node.data: false
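For context, a master-node elasticsearch.yml for a 6.3 two-node setup might look roughly like this (a sketch; the cluster and node names are placeholders, and the discovery settings are an assumption, not copied from the actual file):

# elasticsearch.yml on the master node (10.100.234.241) - sketch
cluster.name: my-cluster                 # assumption: must be identical on both nodes
node.name: node-master                   # assumption
node.master: true
node.data: false
network.host: 10.100.234.241
discovery.zen.ping.unicast.hosts: ["10.100.234.241", "10.100.234.240"]
discovery.zen.minimum_master_nodes: 1    # only one master-eligible node in this setup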
Then Kibana shows this error message:
[warning][license][xpack] License information from the X-Pack plugin could not be obtained from Elasticsearch for the [data] cluster. [security_exception] failed to authenticate user [elastic], with { header={ WWW-Authenticate="Basic realm="security" charset="UTF-8"" } } :: {"path":"/_xpack","statusCode":401,"response":"{"error":{"root_cause":[{"type":"security_exception","reason":"failed to authenticate user [elastic]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"failed to authenticate user [elastic]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}","wwwAuthenticateDirective":"Basic realm="security" charset="UTF-8""}
If I disable the new lines with # in elasticsearch.yml (#node.master: true, #node.data: false), then the error goes away and ES and Kibana work well again.
Why is this happening?
I am running another machine with ES as a data node (10.100.234.240).
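A matching data-node elasticsearch.yml might look like this (again a sketch; the names are placeholders):

# elasticsearch.yml on the data node (10.100.234.240) - sketch
cluster.name: my-cluster                 # assumption: same value as on the master
node.name: node-data                     # assumption
node.master: false
node.data: true
network.host: 10.100.234.240
discovery.zen.ping.unicast.hosts: ["10.100.234.241", "10.100.234.240"]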
As best I can tell, your data node is not actually connected to your cluster, so when you disabled the "data" role on your master node, you no longer had anywhere to store your data, and your cluster turned red.
A cluster with 1 master-only node and 1 data-only node is very strange. What are you trying to achieve?
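You can check whether the data node has actually joined the cluster, for example with something like this (a sketch, assuming the elastic user and the master's address):

curl -u elastic 'http://10.100.234.241:9200/_cat/nodes?v'              # should list both nodes
curl -u elastic 'http://10.100.234.241:9200/_cluster/health?pretty'    # status should not be red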
Here is the relevant part of the Elasticsearch log:
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.16.Final.jar:4.1.16.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
[2018-08-28T13:19:55,481][INFO ][o.e.x.s.a.AuthenticationService] [Redhat] Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]
As per your suggestion, I deleted all the indexes and indexed the data one more time. I think the security indexes were cleared as well. Now the cluster nodes are working together.
Shards and replicas are created.
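The allocation can be checked, for example, with (assuming the same elastic user and master host as above):

curl -u elastic 'http://10.100.234.241:9200/_cat/indices?v'    # health column should be green
curl -u elastic 'http://10.100.234.241:9200/_cat/shards?v'     # primaries and replicas spread over both nodes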
Now I realize my mistake.
I indexed the data into the Elasticsearch nodes while they were not yet in a cluster; they were just two separate machines, so no replicas were created.
When I then connected them to each other, they could not join into one cluster with the old security index and without replicas.
So I deleted the whole index, indexed the documents one more time into the fresh cluster, and then it worked.
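The clean-up amounted to roughly the following (a sketch; my_index and the sample document are only placeholders, not the real index):

# delete the old single-node index and index into the fresh cluster (sketch)
curl -u elastic -X DELETE 'http://10.100.234.241:9200/my_index'
curl -u elastic -X PUT 'http://10.100.234.241:9200/my_index' -H 'Content-Type: application/json' -d '
{
  "settings": { "number_of_shards": 5, "number_of_replicas": 1 }
}'
curl -u elastic -X PUT 'http://10.100.234.241:9200/my_index/_doc/1' -H 'Content-Type: application/json' -d '
{
  "message": "hello cluster"
}'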