Problem with "root ERROR Cannot run subcommand" in Rally

Hi:
I ran Rally against ES for the first time, and I got an error message like this:
[elsearch@elksearch01 ~]$ esrally

[Rally ASCII-art banner]

[INFO] Writing logs to /home/elsearch/.rally/benchmarks/races/2016-12-16-09-13-51/local/logs/rally_out.log
[INFO] Racing on track [geonames], challenge [append-no-conflicts] and car [defaults]
[INFO] Preparing for race ... [OK]
[INFO] Rally will delete the benchmark candidate after the benchmark
[ERROR] Cannot race. Could not start node 'rally-node0' within timeout period of 30.0 seconds. Please check the logs in '/home/elsearch/.rally/benchmarks/races/2016-12-16-09-13-51/local/logs' for more details..

The complete log is:
[elsearch@elksearch01 ~]$ more /home/elsearch/.rally/benchmarks/races/2016-12-16-09-13-51/local/logs/rally_out.log
....................
2016-12-16 09:17:41,8 rally.launcher INFO ES launch: ['bin/elasticsearch', '-Enode.name=rally-node0', '-Epath.logs=/home/elsearch/.rally/benchmarks/races/2016-12-16-09-13-51/local/logs/geonames/append-no-conflicts/server']
2016-12-16 09:18:14,210 rally.launcher ERROR Could not start node 'rally-node0' within timeout period of 30.0 seconds.
2016-12-16 09:18:14,916 root ERROR Cannot run subcommand [race].
Traceback (most recent call last):
File "/usr/local/python3.5.1/lib/python3.5/site-packages/esrally/rally.py", line 459, in dispatch_sub_command
racecontrol.run(cfg)
File "/usr/local/python3.5.1/lib/python3.5/site-packages/esrally/racecontrol.py", line 238, in run
raise e
File "/usr/local/python3.5.1/lib/python3.5/site-packages/esrally/racecontrol.py", line 235, in run
pipeline(cfg)
File "/usr/local/python3.5.1/lib/python3.5/site-packages/esrally/racecontrol.py", line 48, in call
self.target(cfg)
File "/usr/local/python3.5.1/lib/python3.5/site-packages/esrally/racecontrol.py", line 166, in from_sources_complete
return race(Benchmark(cfg, mechanic.create(cfg, metrics_store, sources=True, build=True), metrics_store), cfg)
File "/usr/local/python3.5.1/lib/python3.5/site-packages/esrally/racecontrol.py", line 152, in race
benchmark.setup()
File "/usr/local/python3.5.1/lib/python3.5/site-packages/esrally/racecontrol.py", line 61, in setup
self.mechanic.start_engine()
File "/usr/local/python3.5.1/lib/python3.5/site-packages/esrally/mechanic/mechanic.py", line 60, in start_engine
self.cluster = self.launcher.start(selected_car)
File "/usr/local/python3.5.1/lib/python3.5/site-packages/esrally/mechanic/launcher.py", line 276, in start
c = cluster.Cluster([self._start_node(node, car, es) for node in range(car.nodes)], t)
File "/usr/local/python3.5.1/lib/python3.5/site-packages/esrally/mechanic/launcher.py", line 276, in
c = cluster.Cluster([self._start_node(node, car, es) for node in range(car.nodes)], t)
File "/usr/local/python3.5.1/lib/python3.5/site-packages/esrally/mechanic/launcher.py", line 298, in _start_node
process = self._start_process(cmd, env, node_name)
File "/usr/local/python3.5.1/lib/python3.5/site-packages/esrally/mechanic/launcher.py", line 385, in _start_process
raise exceptions.LaunchError("%s Please check the logs in '%s' for more details." % (msg, log_dir))
esrally.exceptions.LaunchError: Could not start node 'rally-node0' within timeout period of 30.0 seconds. Please check the logs in '/home/elsearch/.rally/benchmarks/races/2016-12-16-09-13-51/local/logs' for more details.
2016-12-16 09:18:15,559 rally.main INFO Attempting to shutdown internal actor system.
2016-12-16 09:18:17,450 rally.main INFO Shutdown completed.

PS:
elsearch is an ordinary user, not root.
The OS is CentOS 6.5.
The ES version is 5.0.0.
The Rally version is 0.4.6.

What can I do to resolve this error?

Hi @Walter,

really weird. Can you please also check the logs in /home/elsearch/.rally/benchmarks/races/2016-12-16-09-13-51/local/logs/geonames/append-no-conflicts/server?

How much memory does the machine have? Did you really just run esrally or rather esrally --distribution-version=5.0.0?

Daniel

Hi:
I found some warnings in /home/elsearch/.rally/benchmarks/races/2016-12-16-09-13-51/local/logs/geonames/append-no-conflicts/server/benchmark.local.log,

like:

[2016-12-16T17:18:22,281][WARN ][o.e.b.JNANatives ] unable to install syscall filter:
java.lang.UnsupportedOperationException: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
at org.elasticsearch.bootstrap.Seccomp.linuxImpl(Seccomp.java:349) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]
at org.elasticsearch.bootstrap.Seccomp.init(Seccomp.java:630) ~[elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]
at org.elasticsearch.bootstrap.JNANatives.trySeccomp(JNANatives.java:215) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]
at org.elasticsearch.bootstrap.Natives.trySeccomp(Natives.java:99) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:106) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:177) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:307) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]
at org.elasticsearch.cli.Command.main(Command.java:88) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) [elasticsearch-6.0.0-alpha1-SNAPSHOT.jar:6.0.0-alpha1-SNAPSHOT]
[2016-12-16T17:18:24,973][INFO ][o.e.n.Node ] [rally-node0] initializing ...
[2016-12-16T17:18:25,447][INFO ][o.e.e.NodeEnvironment ] [rally-node0] using [1] data paths, mounts [[/ (/dev/sda2)]], net usable_space [66.1gb], net total_space [74.8gb], spins? [possibly], types [ext4]
[2016-12-16T17:18:25,448][INFO ][o.e.e.NodeEnvironment ] [rally-node0] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-12-16T17:18:25,450][INFO ][o.e.n.Node ] [rally-node0] node name [rally-node0], node ID [YLCi7rwxS-aiHkEyG9Gw_g]
[2016-12-16T17:18:25,474][INFO ][o.e.n.Node ] [rally-node0] version[6.0.0-alpha1-SNAPSHOT], pid[11807], build[a0185c8/2016-12-16T09:16:56.444Z], OS[Linux/2.6.32-431.el6.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_111/25.111-b14]
[2016-12-16T17:18:25,475][WARN ][o.e.n.Node ] [rally-node0] version [6.0.0-alpha1-SNAPSHOT] is a pre-release version of Elasticsearch and is not suitable for production
[2016-12-16T17:18:30,920][INFO ][o.e.p.PluginsService ] [rally-node0] loaded module [aggs-matrix-stats]
.............
[2016-12-16T17:18:30,926][INFO ][o.e.p.PluginsService ] [rally-node0] no plugins loaded
[2016-12-16T17:18:38,219][INFO ][o.e.n.Node ] [rally-node0] initialized
[2016-12-16T17:18:38,220][INFO ][o.e.n.Node ] [rally-node0] starting ...
[2016-12-16T17:18:38,636][INFO ][o.e.t.TransportService ] [rally-node0] publish_address {127.0.0.1:39300}, bound_addresses {[::1]:39300}, {127.0.0.1:39300}
[2016-12-16T17:18:38,646][WARN ][o.e.b.BootstrapChecks ] [rally-node0] max number of threads [2048] for user [elsearch] is too low, increase to at least [4096]
[2016-12-16T17:18:38,647][WARN ][o.e.b.BootstrapChecks ] [rally-node0] system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
[2016-12-16T17:18:45,580][WARN ][o.e.m.j.JvmGcMonitorService] [rally-node0] [gc][young][3][4] duration [4s], collections [1]/[5.3s], total [4s]/[6.7s], memory [132.2mb]->[103.6mb]/[1.9gb], all_pools {[young] [100.8mb]->[18.1mb]/[133.1mb]}{[survivor] [16.6mb]->[12.4mb]/[16.6mb]}{[old] [14.8mb]->[73mb]/[1.8gb]}
[2016-12-16T17:18:45,584][WARN ][o.e.m.j.JvmGcMonitorService] [rally-node0] [gc][3] overhead, spent [4s] collecting in the last [5.3s]
[2016-12-16T17:18:45,628][INFO ][o.e.c.s.ClusterService ] [rally-node0] new_master {rally-node0}{YLCi7rwxS-aiHkEyG9Gw_g}{dqfNbKU6SBGdgrS319WQxQ}{127.0.0.1}{127.0.0.1:39300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2016-12-16T17:18:45,752][INFO ][o.e.h.HttpServer ] [rally-node0] publish_address {127.0.0.1:39200}, bound_addresses {[::1]:39200}, {127.0.0.1:39200}
[2016-12-16T17:18:45,753][INFO ][o.e.n.Node ] [rally-node0] started
[2016-12-16T17:18:45,910][INFO ][o.e.g.GatewayService ] [rally-node0] recovered [0] indices into cluster_state
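The `max number of threads` warning in the log above comes from an ES 5.x bootstrap check; because the node bound only to 127.0.0.1 it ran in development mode, so the check warned rather than failed, but it is worth fixing anyway. A sketch of the usual fix on CentOS (assuming the benchmark user is `elsearch`, as in this thread):

```shell
# Show the current per-user process/thread limit; the ES 5.x bootstrap
# check wants at least 4096 for the user running Elasticsearch.
ulimit -u

# Raise the soft limit for the current shell (works if the hard limit allows it):
ulimit -u 4096 2>/dev/null || echo "hard limit too low; edit limits.conf instead"

# To make the change permanent, add a line like this to /etc/security/limits.conf:
#   elsearch  -  nproc  4096
```

After editing limits.conf, log out and back in so the new limit takes effect.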

But Elasticsearch and the head plugin look normal.
Here is a screenshot of ES:

The machine has 1877M of memory.
I just ran esrally, not esrally --distribution-version=5.0.0.
The command was:
[elsearch@elksearch01 ~]$ esrally

Hi @Walter,

I don't see any problem in the logs; it seems the cluster started just fine, and the seccomp warning is harmless here. However, the node may have died because of too little memory (maybe it was killed by the OS's OOM killer?). Elasticsearch tries to reserve 2GB of memory, but your machine has less than 2GB in total. Can you try on a larger machine?
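Whether the OOM killer struck can be checked directly on that machine with standard Linux tooling; a minimal sketch (the exact dmesg wording varies by kernel version):

```shell
# Look for OOM-killer activity in the kernel log; on a 2.6.32 kernel the
# message usually contains "Out of memory" or "Killed process".
dmesg | grep -iE 'out of memory|killed process' || echo "no OOM kill recorded"

# Compare available memory against the ~2GB heap Elasticsearch tries to reserve.
free -m
```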

And by the way, the screenshot you have posted is from the instance that stores the metrics, not the instance that is benchmarked.

Daniel

Hi @danielmitterdorfer,
Thank you for your help!
I will raise the machine's memory to 4GB or more next Monday and post the result here.
Thanks again!
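Adding memory is the cleanest fix. If a larger machine were not available, another option would be to shrink the Elasticsearch heap so it fits in the ~1.8GB machine: ES 5.x reads extra JVM flags from the ES_JAVA_OPTS environment variable. (Whether Rally 0.4.6 passes this through to the node it launches is an assumption; it definitely works when starting Elasticsearch by hand.)

```shell
# Start Elasticsearch 5.x with a 512m heap instead of the ~2g default.
# ES_JAVA_OPTS is the standard ES 5.x hook for extra JVM flags.
export ES_JAVA_OPTS="-Xms512m -Xmx512m"
# then, from the Elasticsearch home directory:
# bin/elasticsearch
```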

Hi @danielmitterdorfer,
Sorry for the late reply.
It worked after I increased the memory.
By the way, how do you handle relationships in ES?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.