Hello,
I am using the https://github.com/elastic/stack-docker repository. Following the instructions in README.md, I cloned the repository, ran docker-compose up, and the services look good.
Then I checked the logs:
root@ubuntu:~# docker logs f9d8
..
[2018-02-12T17:37:46,137][INFO ][o.e.n.Node ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config]
[2018-02-12T17:37:48,004][INFO ][o.e.p.PluginsService ] [jrXdhxD] loaded module [aggs-matrix-stats]
[2018-02-12T17:37:50,906][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/121] [Main.cc@128] controller (64 bit): Version 6.1.3 (Build 49803b19919585) Copyright (c) 2018 Elasticsearch BV
[2018-02-12T17:37:51,461][INFO ][o.e.d.DiscoveryModule ] [jrXdhxD] using discovery type [zen]
[2018-02-12T17:37:53,499][INFO ][o.e.n.Node ] initialized
[2018-02-12T17:37:53,501][INFO ][o.e.n.Node ] [jrXdhxD] starting ...
[2018-02-12T17:37:54,071][INFO ][o.e.t.TransportService ] [jrXdhxD] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2018-02-12T17:37:54,132][WARN ][o.e.b.BootstrapChecks ] [jrXdhxD] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2018-02-12T17:37:57,203][INFO ][o.e.c.s.MasterService ] [jrXdhxD] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {jrXdhxD}{jrXdhxDiT8GKtKltsG6Fng}{CrSPn47TR-uBXotjRqhofQ}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=12598865920, ml.max_open_jobs=20, ml.enabled=true}
[2018-02-12T17:37:57,215][INFO ][o.e.c.s.ClusterApplierService] [jrXdhxD] new_master {jrXdhxD}{jrXdhxDiT8GKtKltsG6Fng}{CrSPn47TR-uBXotjRqhofQ}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=12598865920, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {jrXdhxD}{jrXdhxDiT8GKtKltsG6Fng}{CrSPn47TR-uBXotjRqhofQ}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=12598865920, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-02-12T17:37:57,268][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [jrXdhxD] publish_address {100.98.26.129:9200}, bound_addresses {[::]:9200}
[2018-02-12T17:37:57,269][INFO ][o.e.n.Node ] [jrXdhxD] started
[2018-02-12T17:37:57,491][INFO ][o.e.g.GatewayService ] [jrXdhxD] recovered [0] indices into cluster_state
[2018-02-12T17:37:58,990][INFO ][o.e.l.LicenseService ] [jrXdhxD] license [5c7e79b3-d1e9-4f07-9793-ea3f20ae698e] mode [trial] - valid
[2018-02-12T17:38:04,015][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.monitoring-es-6-2018.02.12] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[1], mappings [doc]
[2018-02-12T17:38:04,482][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.watches] creating index, cause [auto(bulk api)], templates [.watches], shards [1]/[1], mappings [doc]
[2018-02-12T17:38:05,030][INFO ][o.e.c.m.MetaDataMappingService] [jrXdhxD] [.watches/3a2eYZLURqWSE8If4ojDQg] update_mapping [doc]
[2018-02-12T17:39:05,388][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.triggered_watches] creating index, cause [auto(bulk api)], templates [.triggered_watches], shards [1]/[1], mappings [doc]
[2018-02-12T17:39:05,815][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.monitoring-alerts-6] creating index, cause [auto(bulk api)], templates [.monitoring-alerts], shards [1]/[1], mappings [doc]
[2018-02-12T17:39:05,897][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.watcher-history-7-2018.02.12] creating index, cause [auto(bulk api)], templates [.watch-history-7], shards [1]/[1], mappings [doc]
[2018-02-12T17:39:06,195][INFO ][o.e.c.m.MetaDataMappingService] [jrXdhxD] [.watcher-history-7-2018.02.12/mxj9Ak5aQmqvoGPks67j-A] update_mapping [doc]
[2018-02-12T17:39:06,283][INFO ][o.e.c.m.MetaDataMappingService] [jrXdhxD] [.watcher-history-7-2018.02.12/mxj9Ak5aQmqvoGPks67j-A] update_mapping [doc]
[2018-02-13T00:00:04,457][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.monitoring-es-6-2018.02.13] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[1], mappings [doc]
[2018-02-13T00:00:19,346][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.watcher-history-7-2018.02.13] creating index, cause [auto(bulk api)], templates [.watch-history-7], shards [1]/[1], mappings [doc]
[2018-02-13T00:00:19,451][INFO ][o.e.c.m.MetaDataMappingService] [jrXdhxD] [.watcher-history-7-2018.02.13/y1cnVrseR3atoRioU811gw] update_mapping [doc]
[2018-02-13T00:00:30,855][INFO ][o.e.c.m.MetaDataMappingService] [jrXdhxD] [.watcher-history-7-2018.02.13/y1cnVrseR3atoRioU811gw] update_mapping [doc]
[2018-02-13T01:38:00,001][INFO ][o.e.x.m.MlDailyMaintenanceService] triggering scheduled [ML] maintenance tasks
[2018-02-13T01:38:00,004][INFO ][o.e.x.m.a.DeleteExpiredDataAction$TransportAction] [jrXdhxD] Deleting expired data
[2018-02-13T01:38:00,081][INFO ][o.e.x.m.a.DeleteExpiredDataAction$TransportAction] [jrXdhxD] Completed deletion of expired data
[2018-02-13T01:38:00,083][INFO ][o.e.x.m.MlDailyMaintenanceService] Successfully completed [ML] maintenance tasks
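One thing I noticed in the logs is the bootstrap check warning about vm.max_map_count being too low (65530 vs. the required 262144). If that is relevant, I believe the usual fix from the Elasticsearch Docker docs is to raise it on the Docker host (not inside the container):

```shell
# Raise the limit for the current boot (run on the Docker host as root)
sysctl -w vm.max_map_count=262144

# Persist it across reboots
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
```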
curl -XPUT 'localhost:9200/idx'
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication token for REST request [/idx]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication token for REST request [/idx]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}root@ubuntu:~/openusm/logging#
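Since the response advertises Basic auth (WWW-Authenticate: Basic realm="security"), I assume the request needs credentials for the built-in elastic user with the password from the .env file. A sketch of what I expect should work (assuming ELASTIC_PASSWORD is still "changeme"):

```shell
# Same request, but with Basic auth for the built-in elastic user;
# replace changeme with the value of ELASTIC_PASSWORD from .env
curl -u elastic:changeme -XPUT 'localhost:9200/idx'
```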
To increase the JVM heap, I added the entry below to docker-compose.yml:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-platinum:${TAG}
    container_name: elasticsearch
    network_mode: host
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ES_JAVA_OPTS=-Xmx4g -Xms4g
    ports: ['127.0.0.1:9200:9200']
This looks to me like something to do with X-Pack. I want to use Machine Learning, so the platinum image should be good enough.
I checked the .env file in the same directory, and it does show "changeme" as ELASTIC_PASSWORD. I changed it to a new password, but the issue still persists.