.kibana index is not created by default


#1

Hi all,

After successful installation of ELK 6.2.4 and Kibana 6.2.4, I am starting Kibana and Elasticsearch, but the default index (.kibana) is not being created by Kibana.

Can anyone help me out?


(Christian Dahlqvist) #2

Please look at this post and provide more details about your setup and configuration. Without this, I don't think anyone is going to be able to help...


#3

Hi Christian_Dahlqvist,

Yes, I saw Ajit's post; we are facing the same problem.

My configuration details are:

OS: Linux
We deployed using the PuTTY command-line terminal.
In elasticsearch.yml I have the following configuration:

bootstrap.system_call_filter: false
cluster.name: my-application
network.host: localhost
action.auto_create_index: true

In kibana.yml I have the following configuration:
elasticsearch.url: "http://AbcElk:9200"
elasticsearch.ssl.verificationMode: "none"

And I am checking whether the .kibana index is available using the Dev Tools console and a curl query:
GET /_cat/indices?v
curl localhost:9200/_cat/indices
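For reference, a minimal sketch of checking for the .kibana index directly (assuming Elasticsearch listens on localhost:9200; this should return an index_not_found_exception if the index does not exist):

curl 'localhost:9200/_cat/indices/.kibana?v'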


(Christian Dahlqvist) #4

Your Elasticsearch is binding to localhost, which means it can't be accessed from other hosts. You will need to bind to a non-loopback interface or place Kibana on the same host as Elasticsearch.
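For example, a minimal sketch of the two options (the IP address below is only a placeholder, not a value from your setup):

Either bind Elasticsearch to a non-loopback interface in elasticsearch.yml:
network.host: 192.168.1.10

or, if Kibana runs on the same host as Elasticsearch, point it at the loopback address in kibana.yml:
elasticsearch.url: "http://localhost:9200"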


(David Pilato) #5

Your code is surprisingly exactly the same.
Are you colleagues?


#6

Hi Dadoonet,

Yes, we are colleagues.
Please help us fix this issue.


#7
[2018-06-14T13:21:56,771][INFO ][o.e.n.Node ] [] initializing ...
[2018-06-14T13:21:56,861][INFO ][o.e.e.NodeEnvironment ] [T0fwHgx] using [1] data paths, mounts [[/opt (/dev/mapper/VG00-LV_OPT)]], net usable_space [56.3gb], net total_space [191.8gb], types [ext4]
[2018-06-14T13:21:56,862][INFO ][o.e.e.NodeEnvironment ] [T0fwHgx] heap size [989.8mb], compressed ordinary object pointers [true]
[2018-06-14T13:21:56,865][INFO ][o.e.n.Node ] node name [T0fwHgx] derived from node ID [T0fwHgxCRmWqvQY3dRwZ2g]; set [node.name] to override
[2018-06-14T13:21:56,865][INFO ][o.e.n.Node ] version[6.2.4], pid[1619], build[ccec39f/2018-04-12T20:37:28.497551Z], OS[Linux/2.6.32-696.23.1.el6.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_111/25.111-b14]
[2018-06-14T13:21:56,865][INFO ][o.e.n.Node ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.0Zc1B78m, -XX:+HeapDumpOnOutOfMemoryError, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/opt/ElasticSearchKibana/elasticsearch-6.2.4, -Des.path.conf=/opt/ElasticSearchKibana/elasticsearch-6.2.4/config]
[2018-06-14T13:21:57,414][INFO ][o.e.p.PluginsService ] [T0fwHgx] loaded module [aggs-matrix-stats]
[2018-06-14T13:21:57,415][INFO ][o.e.p.PluginsService ] [T0fwHgx] loaded module [analysis-common]
[2018-06-14T13:21:57,415][INFO ][o.e.p.PluginsService ] [T0fwHgx] loaded module [ingest-common]
[2018-06-14T13:21:57,415][INFO ][o.e.p.PluginsService ] [T0fwHgx] loaded module [lang-expression]
[2018-06-14T13:21:57,415][INFO ][o.e.p.PluginsService ] [T0fwHgx] loaded module [lang-mustache]
[2018-06-14T13:21:57,415][INFO ][o.e.p.PluginsService ] [T0fwHgx] loaded module [lang-painless]
[2018-06-14T13:21:57,415][INFO ][o.e.p.PluginsService ] [T0fwHgx] loaded module [mapper-extras]
[2018-06-14T13:21:57,415][INFO ][o.e.p.PluginsService ] [T0fwHgx] loaded module [parent-join]
[2018-06-14T13:21:57,415][INFO ][o.e.p.PluginsService ] [T0fwHgx] loaded module [percolator]
[2018-06-14T13:21:57,416][INFO ][o.e.p.PluginsService ] [T0fwHgx] loaded module [rank-eval]
[2018-06-14T13:21:57,416][INFO ][o.e.p.PluginsService ] [T0fwHgx] loaded module [reindex]
[2018-06-14T13:21:57,416][INFO ][o.e.p.PluginsService ] [T0fwHgx] loaded module [repository-url]
[2018-06-14T13:21:57,416][INFO ][o.e.p.PluginsService ] [T0fwHgx] loaded module [transport-netty4]
[2018-06-14T13:21:57,416][INFO ][o.e.p.PluginsService ] [T0fwHgx] loaded module [tribe]
[2018-06-14T13:21:57,416][INFO ][o.e.p.PluginsService ] [T0fwHgx] no plugins loaded
[2018-06-14T13:21:59,483][INFO ][o.e.d.DiscoveryModule ] [T0fwHgx] using discovery type [zen]
[2018-06-14T13:21:59,997][INFO ][o.e.n.Node ] initialized
[2018-06-14T13:21:59,997][INFO ][o.e.n.Node ] [T0fwHgx] starting ...
[2018-06-14T13:22:00,132][INFO ][o.e.t.TransportService ] [T0fwHgx] publish_address {172.21.153.176:9300}, bound_addresses {172.21.153.176:9300}
[2018-06-14T13:22:00,144][INFO ][o.e.b.BootstrapChecks ] [T0fwHgx] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-06-14T13:22:03,198][INFO ][o.e.c.s.MasterService ] [T0fwHgx] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {T0fwHgx}{T0fwHgxCRmWqvQY3dRwZ2g}{nRq0k1tJQW6ym8vHwlqJsA}{MUMCHELK01}{172.21.153.176:9300}
[2018-06-14T13:22:03,203][INFO ][o.e.c.s.ClusterApplierService] [T0fwHgx] new_master {T0fwHgx}{T0fwHgxCRmWqvQY3dRwZ2g}{nRq0k1tJQW6ym8vHwlqJsA}{MUMCHELK01}{172.21.153.176:9300}, reason: apply cluster state (from master [master {T0fwHgx}{T0fwHgxCRmWqvQY3dRwZ2g}{nRq0k1tJQW6ym8vHwlqJsA}{MUMCHELK01}{172.21.153.176:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-06-14T13:22:03,217][INFO ][o.e.h.n.Netty4HttpServerTransport] [T0fwHgx] publish_address {172.21.153.176:9200}, bound_addresses {172.21.153.176:9200}
[2018-06-14T13:22:03,217][INFO ][o.e.n.Node ] [T0fwHgx] started
[2018-06-14T13:22:03,292][INFO ][o.e.g.GatewayService ] [T0fwHgx] recovered [0] indices into cluster_state

#8

Hi @Christian_Dahlqvist,

We have ensured that we have only one Elasticsearch and Kibana instance on our server.


(David Pilato) #9

There is no need to open multiple threads like this.
That's a waste of our resources as it is consuming time that we can dedicate to other users.

I'm closing this thread.

