401 Error when using webhook in Watcher

Hi all,
What I want to test is firing a message on Slack via a StackStorm action every time Logstash receives a BGP syslog message from a network device. Right now my implementation successfully collects the logs and I can see them in Kibana, but I can't get the Watcher to work:

PUT _xpack/watcher/watch/my_demo_watch
{
  "trigger" : { 
    "schedule" : { "interval" : "5s" }
  },
  "input" : { 
    "search": {
      "request": {
        "indices": "logstash-*",
        "body": {
          "query": {  
            "bool": {
              "must": {
                "match_phrase": { 
                  "message": "BGP" 
                }
              },
              "filter" : {
                "range": {
                  "@timestamp": {    
                      "from": "now-60s",
                      "to": "now"
                  }
                }
              }
            }
          } 
        }
      }
    }
  },
  "actions" : {
    "my_webhook" : {
      "webhook" : {
        "method" : "POST",
        "url": "https://localhost/api/v1/webhooks/elk_link_flap?st2-api-key=xxx",
        "headers": {
          "Content-Type": "application/json"
        },
        "body" : "{\"syslog\": \"{{ message }}\"}"
      }
    }
  }
}

The error I get in the Elasticsearch logs is this:
[2017-02-14T20:25:31,805][WARN ][o.e.x.w.a.w.ExecutableWebhookAction] [-7FDEie] received http status [401] when connecting to watch action [my_demo_watch/webhook/my_webhook]

I've disabled SSL verification by adding the following lines to elasticsearch.yml:

xpack.ssl.verification_mode: none
xpack.http.ssl.verification_mode: none

I've tested the st2 webhook with cURL and it works (disabling verification with -k). Does anyone see any glaring issue here?

Hey

This issue has nothing to do with SSL: you are getting back a valid HTTP error response, which implies that HTTP over SSL is working as expected.

An HTTP 401 indicates that there is an issue with the authentication of your request (in your example you did not specify any kind of authentication in the webhook action itself). So maybe that is the culprit.
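For reference, Watcher's webhook action supports HTTP basic auth via an `auth` block, and StackStorm also accepts its API key in a `St2-Api-Key` header instead of a query parameter. A sketch of the action with both options (the username, password, and mustache path below are illustrative, not taken from this thread):

```
"my_webhook" : {
  "webhook" : {
    "method" : "POST",
    "url" : "https://localhost/api/v1/webhooks/elk_link_flap",
    "auth" : {
      "basic" : {
        "username" : "st2admin",
        "password" : "changeme"
      }
    },
    "headers" : {
      "St2-Api-Key" : "xxx",
      "Content-Type" : "application/json"
    },
    "body" : "{\"syslog\": \"{{ctx.payload.hits.hits.0._source.message}}\"}"
  }
}
```

Note that `{{ message }}` on its own will not resolve in a watch; the search results live under `ctx.payload`, so the body template has to reference the hit explicitly as sketched above.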

Hope this helps.

-Alex

Thanks for your reply! Now I have another issue :frowning:

I have to keep SSL active for st2, so I tried this guide to set it up. I created SSL entries for the node where st2 is deployed and for localhost as well.

Anyway, after I restart Elasticsearch it logs this error:

[2017-02-15T20:11:53,805][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [DTUIof2] caught exception while handling client http traffic, closing connection [id: 0x4edd5db6, L:0.0.0.0/0.0.0.0:9200 ! R:/127.0.0.1:60818]
io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 48454144202f20485454502f312e310d0a417574686f72697a6174696f6e3a2042617369632061326c69595735684f6d4e6f5957356e5a57316c0d0a486f73743a206c6f63616c686f73743a393230300d0a436f6e74656e742d4c656e6774683a20300d0a436f6e6e656374696f6e3a206b6565702d616c6976650d0a0d0a
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:968) [netty-handler-4.1.7.Final.jar:4.1.7.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411) [netty-codec-4.1.7.Final.jar:4.1.7.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248) [netty-codec-4.1.7.Final.jar:4.1.7.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:527) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:481) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441) [netty-transport-4.1.7.Final.jar:4.1.7.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.7.Final.jar:4.1.7.Final]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
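As an aside, the long hex string in the NotSslRecordException is the raw bytes Netty received on the port. Decoding it (e.g. with xxd) shows it is a plaintext HTTP request, meaning some client is still speaking plain HTTP to the now HTTPS-only port 9200:

```shell
# Decode the hex payload quoted in the exception above
echo '48454144202f20485454502f312e310d0a417574686f72697a6174696f6e3a2042617369632061326c69595735684f6d4e6f5957356e5a57316c0d0a486f73743a206c6f63616c686f73743a393230300d0a436f6e74656e742d4c656e6774683a20300d0a436f6e6e656374696f6e3a206b6565702d616c6976650d0a0d0a' | xxd -r -p
# HEAD / HTTP/1.1
# Authorization: Basic a2liYW5hOmNoYW5nZW1l
# Host: localhost:9200
# Content-Length: 0
# Connection: keep-alive
```

The basic-auth token `a2liYW5hOmNoYW5nZW1l` base64-decodes to `kibana:changeme`, which suggests Kibana is still configured with a plain `http://` elasticsearch.url and default credentials, so this particular warning is harmless noise until Kibana is switched to `https://`.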

My elasticsearch.yml file looks as follows:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

xpack.ssl.key: /etc/elasticsearch/x-pack/ntc/ntc.key
xpack.ssl.certificate: /etc/elasticsearch/x-pack/ntc/ntc.crt
xpack.ssl.certificate_authorities: [ "/etc/elasticsearch/x-pack/ca/ca.crt" ]

xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.enabled: true

This is driving me crazy... do you have any suggestions to overcome it?

Can you explain what you mean by st2 and provide some more context, please?

Sorry, st2 means StackStorm. The context is the same as in my first post: Logstash is collecting logs from network devices and correctly showing them in Kibana. What I want is to use a webhook to trigger a StackStorm action.

Let me know if you need more info

Maybe the issue is related to how the SSL certs are generated? I did some tests with the generated certs and cURL, and they all failed:

`curl -X POST --cacert ntc.crt https://ntc/api/v1/webhooks/elk_link_flap?st2-api-key=xxx -H "Content-Type: application/json" --data '{"trigger": "mypack.mytrigger", "payload": {"attribute1": "value1"}}'`

I tried generating the cert with certgen as suggested at https://www.elastic.co/guide/en/x-pack/current/ssl-tls.html, and also by manually exporting it from Chrome, but it still fails. I also tested with Postman after registering the cert in Chrome, and that worked instead.
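One thing worth checking when curl fails against a cert that a browser or Postman accepts is whether the certificate's Subject Alternative Names actually cover the hostname used in the URL: curl validates SANs strictly, so a cert for `ntc` will be rejected when you connect to `localhost` and vice versa. A sketch (paths and names illustrative; `-addext` needs OpenSSL 1.1.1+):

```shell
# Generate a throwaway self-signed cert covering both hostnames (illustrative)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/ntc.key -out /tmp/ntc.crt \
  -subj "/CN=ntc" \
  -addext "subjectAltName=DNS:ntc,DNS:localhost"

# Inspect which hostnames a cert is actually valid for
openssl x509 -in /tmp/ntc.crt -noout -text | grep -A1 "Subject Alternative Name"
```

If the real cert's SAN list does not include the name in the curl URL, regenerating it with the right SANs (or using the matching hostname) should make the `--cacert` test pass.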

I just can't figure it out :disappointed:

I might be repeating myself, but this is not an issue where you need to enable or disable TLS on the Elasticsearch side; that will not solve your problem. When you get back an HTTP 401 error, the issue is the HTTP client's authentication configuration. You may need to add more authentication information to the webhook you are using to send data to StackStorm.

I don't know enough about StackStorm to tell whether the API key you provided is sufficient. You could check the watch history to see if there is more information, or paste the output of the Execute Watch API here.
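For reference, the Execute Watch API on 5.x can be called like this; the `actions` section of the response includes the webhook's HTTP status code and response body, which should show exactly what st2 is rejecting:

```
POST _xpack/watcher/watch/my_demo_watch/_execute
{
  "record_execution" : true
}
```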

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.