Failed to send logs to elasticsearch output because failed to create ILM alias

Hello,

I am using Metricbeat 7.0.0 and sending data to the Elasticsearch output. My Elasticsearch version is also 7.0.0.

I have a very basic configuration:

metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  #reload.period: 10s

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://xxx:xxx"]

  # Enabled ilm (beta) to use index lifecycle management instead daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  username: "xxx"
  password: "${ES_PWD}"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

When starting metricbeat, I get the following logs:

    2019-04-30T15:19:49.163Z	INFO	elasticsearch/client.go:734	Attempting to connect to Elasticsearch version 7.0.0
    2019-04-30T15:19:49.200Z	INFO	[index-management.ilm]	ilm/std.go:134	do not generate ilm policy: exists=true, overwrite=false
    2019-04-30T15:19:49.200Z	INFO	[index-management]	idxmgmt/std.go:238	ILM policy successfully loaded.
    2019-04-30T15:19:49.201Z	INFO	[index-management]	idxmgmt/std.go:361	Set setup.template.name to '{metricbeat-7.0.0 {now/d}-000001}' as ILM is enabled.
    2019-04-30T15:19:49.201Z	INFO	[index-management]	idxmgmt/std.go:366	Set setup.template.pattern to 'metricbeat-7.0.0-*' as ILM is enabled.
    2019-04-30T15:19:49.201Z	INFO	[index-management]	idxmgmt/std.go:400	Set settings.index.lifecycle.rollover_alias in template to {metricbeat-7.0.0 {now/d}-000001} as ILM is enabled.
    2019-04-30T15:19:49.201Z	INFO	[index-management]	idxmgmt/std.go:404	Set settings.index.lifecycle.name in template to {metricbeat-7.0.0 map[policy:{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}]} as ILM is enabled.
    2019-04-30T15:19:49.218Z	INFO	template/load.go:129	Template already exists and will not be overwritten.
    2019-04-30T15:19:49.218Z	INFO	[index-management]	idxmgmt/std.go:272	Loaded index template.
    2019-04-30T15:19:57.610Z	INFO	[monitoring]	log/log.go:144	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":98921,"time":{"ms":31}},"total":{"ticks":216561,"time":{"ms":203},"value":216561},"user":{"ticks":117640,"time":{"ms":172}}},"handles":{"open":417},"info":{"ephemeral_id":"6ce27a28-e6ef-412a-9dc1-b5675a608bf9","uptime":{"ms":5701111}},"memstats":{"gc_next":59560080,"memory_alloc":30130112,"memory_total":256062656,"rss":32768}},"libbeat":{"config":{"module":{"running":0},"reloads":3},"output":{"read":{"bytes":2459},"write":{"bytes":1514}},"pipeline":{"clients":3,"events":{"active":4119,"retry":8}}}}}}
    2019-04-30T15:20:23.175Z	ERROR	pipeline/output.go:100	Failed to connect to backoff(elasticsearch(https://xxx:xxx)): Connection marked as failed because the onConnect callback failed: failed to create alias: {"error":"Incorrect HTTP method for uri [/<metricbeat-7.0.0-{now/d}-000001>] and method [PUT], allowed: [POST]","status":405}: 405 Method Not Allowed: {"error":"Incorrect HTTP method for uri [/<metricbeat-7.0.0-{now/d}-000001>] and method [PUT], allowed: [POST]","status":405}
    2019-04-30T15:20:23.175Z	INFO	pipeline/output.go:93	Attempting to reconnect to backoff(elasticsearch(https://xxx:xxx)) with 131 reconnect attempt(s)

I run Elasticsearch behind an nginx proxy. Could that be the problem?

I can't see why the returned error is 405. The URI used for alias creation looks very strange (/<metricbeat-7.0.0-{now/d}-000001>), and I can see the requests arriving at nginx with the special characters percent-encoded.
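
For reference, the failing request appears to be the ILM bootstrap/alias setup. A hand-written curl equivalent would look roughly like this (the host and credentials are placeholders, and the request body is my assumption based on the "failed to create alias" message): a PUT of the date-math index name, which has to stay percent-encoded all the way to Elasticsearch:

    # Date-math name <metricbeat-7.0.0-{now/d}-000001>, percent-encoded.
    # The body creates the bootstrap index with the write alias that ILM rolls over.
    curl -u user:pass -X PUT \
      "https://xxx:xxx/%3Cmetricbeat-7.0.0-%7Bnow%2Fd%7D-000001%3E" \
      -H 'Content-Type: application/json' \
      -d '{"aliases":{"metricbeat-7.0.0":{"is_write_index":true}}}'

If the proxy decodes those %3C/%7B/%2F sequences before forwarding, Elasticsearch receives the literal <, {, / and > characters in the path and no longer recognizes the request as an index creation, which would explain the 405.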

Hi @razafinr :slight_smile:

I'm not fully sure about the root cause of the issue. I'd try removing nginx, just to see if it works; if not, we'll dig deeper into the issue :wink:

In short, the pattern you see there, <metricbeat-7.0.0-{now/d}-000001> (I'm not sure whether those <> should be there, but I'd say they shouldn't), is the one that Metricbeat uses when ILM is enabled, so it might be an ILM issue too.

But let's start with nginx; please post the results here. As far as we are concerned, we don't have any ILM-related issues open right now. Also, take a look at this issue on GitHub in case you see something familiar (maybe our docs just need an update): https://github.com/elastic/beats/issues/11347#issuecomment-476319686

Oh! Please attach your Metricbeat and module configs, just in case we can spot something.

Hi @Mario_Castro, I experienced the same problem as the one @razafinr reported earlier, and I came across your response, which was helpful in identifying one part of the issue I was facing. Like razafinr, our Elasticsearch is behind an nginx proxy. I don't think this is relevant to the issue as described, but I thought I would note it just in case.

Having read your point about the issue potentially being ILM-related, I changed the ILM configuration from its default to setup.ilm.enabled: false. After I had done that, I was able to run setup successfully:

/etc/metricbeat$ sudo /usr/share/metricbeat/bin/metricbeat setup
Index setup complete.
Loading dashboards (Kibana must be running and reachable)
Skipping loading dashboards, No directory /usr/share/metricbeat/bin/kibana/7

That no longer reports the above error. It does, however, report the "No directory" issue shown above.
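
For reference, the only ILM-related change in my metricbeat.yml was the line below (everything else left at its defaults); with it, Metricbeat falls back to plain daily indices instead of the rollover alias:

    # Disable index lifecycle management so no rollover alias / bootstrap
    # index is created at connect time.
    setup.ilm.enabled: false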

I ought to add to my response that even though this issue is resolved, no index has been created on the Elasticsearch server I am targeting :frowning: The Kibana dashboards have been added, however. Frustratingly, even though I added configuration to get log entries written to a dedicated Metricbeat log file, the entries are still being written to syslog.
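
For completeness, the file-logging settings I was experimenting with look roughly like this (the path is just an example, and I may still be missing something, hence the syslog entries):

    # Write Metricbeat's own logs to files under the given path.
    logging.level: info
    logging.to_files: true
    logging.files:
      path: /var/log/metricbeat
      name: metricbeat
      keepfiles: 7
      permissions: 0644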

Hi guys, I figured out the problem. It was indeed due to an nginx misconfiguration: my proxy_pass included an extra trailing /, so the requests coming into the proxy were forwarded with the URL decoded, causing the special characters to be sent to Elasticsearch unencoded. Simply removing the extra / solved the problem (a before/after sketch of the location block follows the quote below). As indicated in the nginx documentation:

A request URI is passed to the server as follows:

  • If the proxy_pass directive is specified with a URI, then when a request is passed to the server, the part of a normalized request URI matching the location is replaced by a URI specified in the directive:

location /name/ { proxy_pass http://127.0.0.1/remote/; }

  • If proxy_pass is specified without a URI, the request URI is passed to the server in the same form as sent by a client when the original request is processed, or the full normalized request URI is passed when processing the changed URI:

location /some/path/ { proxy_pass http://127.0.0.1; }
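
For anyone else hitting this, here is a minimal before/after sketch of what changed in my nginx config (host, port and location are placeholders, not my real setup):

    # Broken: the trailing "/" means proxy_pass carries a URI, so nginx
    # forwards the normalized (decoded) request URI and Elasticsearch
    # receives literal <, {, / and > characters.
    location / {
        proxy_pass https://127.0.0.1:9200/;
    }

    # Fixed: no URI on proxy_pass, so the request URI is passed on exactly
    # as Metricbeat sent it, with the percent-encoding intact.
    location / {
        proxy_pass https://127.0.0.1:9200;
    }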

I hope this helps if someone else runs into the same issue.

This needs more upvotes. Such a silly issue.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.