X-Pack Watches not initializing properly

I have ES 5.4.1 with the X-Pack plugin installed. I am trying to use Watcher, but it is not setting itself up properly.

When I call 'POST _xpack/watcher/_start' I get this error in the log files:

[2017-06-20T07:15:04,815][WARN ][r.suppressed ] path: /_xpack/watcher/_start, params: {}
org.elasticsearch.transport.RemoteTransportException: [Node1][10.110.0.10:9300][cluster:admin/xpack/watcher/service]
Caused by: org.elasticsearch.transport.ActionNotFoundTransportException: No handler for action [cluster:admin/xpack/watcher/service]
at org.elasticsearch.transport.TcpTransport.handleRequest(TcpTransport.java:1471) [elasticsearch-5.4.1.jar:5.4.1]
at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1369) [elasticsearch-5.4.1.jar:5.4.1]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) [transport-netty4-5.4.1.jar:5.4.1]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.11.Final.jar:4.1.11.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297) [netty-codec-4.1.11.Final.jar:4.1.11.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413) [netty-codec-4.1.11.Final.jar:4.1.11.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) [netty-codec-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) ~[netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.11.Final.jar:4.1.11.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.11.Final.jar:4.1.11.Final]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]

And when I send a 'GET .watches' request, I get back:

{
  "error": {
    "root_cause": [
      {
        "type": "index_not_found_exception",
        "reason": "no such index",
        "index_uuid": "na",
        "resource.type": "index_or_alias",
        "resource.id": ".watches",
        "index": ".watches"
      }
    ],
    "type": "index_not_found_exception",
    "reason": "no such index",
    "index_uuid": "na",
    "resource.type": "index_or_alias",
    "resource.id": ".watches",
    "index": ".watches"
  },
  "status": 404
}

So it looks like the .watches index is not being set up. Is there something explicit that I have to do in order to get the .watches index set up properly?

Hey,

How many nodes do you have? Have you installed x-pack on all nodes? Did you restart after installation?

Can you show the output of

GET _cat/plugins?v
GET _cat/nodes?v

--Alex

We have 3 master/data nodes and 2 client nodes.
X-Pack is installed on all nodes.
The cluster was restarted after the upgrade from 5.3, and again more recently after setting 'xpack.watcher.enabled: true' in the elasticsearch.yml file.

Here is the output you requested ...

GET _cat/plugins?v
name   component version
FE2    x-pack    5.4.1
Node3  x-pack    5.4.1
FE1    x-pack    5.4.1
Node1  x-pack    5.4.1
Node2  x-pack    5.4.1

GET _cat/nodes?v
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.110.0.26             8          55   1                          -         -      FE2
10.110.0.7             25          54   5                          mdi       -      Node3
10.110.0.25            11          67   4                          -         -      FE1
10.110.0.10            86          77  11                          mdi       *      Node1
10.110.0.11            35          78   9                          mdi       -      Node2

Hey,

Can you share the settings or the cluster state? It looks as if Node1 still has Watcher disabled, given that the endpoint cannot be found.
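For example, something like this should show each node's settings, so you can check which nodes actually have xpack.watcher.enabled set (just a sketch; the filter_path part is optional and only trims the response, and the setting will only appear if it is set explicitly in elasticsearch.yml):

GET _nodes/settings?filter_path=nodes.*.name,nodes.*.settings.xpack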

--Alex

I ran 'GET _nodes' and found that Node1 didn't have Watcher enabled, just as you suspected. It turned out the elasticsearch.yml configuration file hadn't been updated properly. After I updated the configuration and restarted, it looks like the .watches index was created.
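For reference, I believe the Watcher stats call on 5.x is along these lines:

GET _xpack/watcher/stats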

Watcher stats now returns this:

{
  "watcher_state": "started",
  "watch_count": 4,
  "execution_thread_pool": {
    "queue_size": 0,
    "max_size": 56
  },
  "manually_stopped": false
}

Thanks for the help!
