Sending filebeat data to AWS Elasticsearch service endpoint

Hi,
I have installed the Filebeat client on an AWS EC2 instance, configured to push messages to the AWS Elasticsearch Service endpoint. Below are the relevant parts of the filebeat.yml file.

### Filebeat ###
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      paths:
        - /var/log/*/*.log
        #- c:\programdata\elasticsearch\logs\*

output:

  ### Elasticsearch as output
  elasticsearch:
    # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
    hosts: ["search-domain-XXXXXXX-eu-west-1.es.amazonaws.com:80"]
logging:
  # Send all logging output to syslog. On Windows default is false, otherwise
  # default is true.
  to_syslog: true
  # Write all logging output to files. Beats automatically rotate files if rotateeverybytes
  # limit is reached.
  to_files: true
  # To enable logging to files, to_files option has to be set to true
  files:
    # The directory where the log files will be written to.
    path: /var/log/mybeat

    # The name of the files where the logs are written to.
    #name: mybeat

    # Configure log file size limit. If limit is reached, log file will be
    # automatically rotated
    rotateeverybytes: 10485760 # = 10MB

    # Number of rotated log files to keep. Oldest files will be deleted first.
    #keepfiles: 7

  # Sets log level. The default log level is error.
  # Available log levels are: critical, error, warning, info, debug
  level: debug
================================================================================

I am unable to perform the bulk upload to Elasticsearch. Below are the debug logs.

2016-02-16T18:30:56Z DBG ES Ping(url=http://search-domain-XXXXXXXX.eu-west-1.es.amazonaws.com:80, timeout=1m30s)
2016-02-16T18:30:56Z DBG Ping status code: 200
2016-02-16T18:30:56Z DBG Sending bulk request to http://search--domain-XXXXXXXX.eu-west-1.es.amazonaws.com:80/_bulk
2016-02-16T18:30:56Z ERR Failed to perform any bulk index operations: 400 Bad Request
2016-02-16T18:30:56Z INFO Error publishing events (retrying): 400 Bad Request
2016-02-16T18:30:56Z INFO send fail
2016-02-16T18:30:56Z INFO backoff retry: 2s
2016-02-16T18:30:58Z DBG End of file reached: /var/log/elasticsearch/elasticsearch_index_indexing_slowlog.log; Backoff now.
2016-02-16T18:30:58Z DBG End of file reached: /var/log/elasticsearch/elasticsearch.log; Backoff now.
2016-02-16T18:30:58Z DBG End of file reached: /var/log/elasticsearch/elasticsearch_index_search_slowlog.log; Backoff now.
2016-02-16T18:30:58Z DBG ES Ping(url=http://search--domain-XXXXXXXX.eu-west-1.es.amazonaws.com:80, timeout=1m30s)
2016-02-16T18:30:58Z DBG Ping status code: 200
2016-02-16T18:30:58Z DBG Sending bulk request to http://search--domain-XXXXXXXX.eu-west-1.es.amazonaws.com:80/_bulk
2016-02-16T18:30:58Z ERR Failed to perform any bulk index operations: 400 Bad Request
2016-02-16T18:30:58Z INFO Error publishing events (retrying): 400 Bad Request
2016-02-16T18:30:58Z INFO send fail
2016-02-16T18:30:58Z INFO backoff retry: 4s
2016-02-16T18:31:02Z DBG End of file reached: /var/log/elasticsearch/elasticsearch_index_indexing_slowlog.log; Backoff now.
2016-02-16T18:31:02Z DBG End of file reached: /var/log/elasticsearch/elasticsearch.log; Backoff now.
2016-02-16T18:31:02Z DBG End of file reached: /var/log/elasticsearch/elasticsearch_index_search_slowlog.log; Backoff now.
2016-02-16T18:31:02Z DBG ES Ping(url=http://search--domain-XXXXXXXX.eu-west-1.es.amazonaws.com:80, timeout=1m30s)
2016-02-16T18:31:02Z DBG Ping status code: 200
2016-02-16T18:31:02Z DBG Sending bulk request to http://search--domain-XXXXXXXX.eu-west-1.es.amazonaws.com:80/_bulk
2016-02-16T18:31:02Z ERR Failed to perform any bulk index operations: 400 Bad Request
2016-02-16T18:31:02Z INFO Error publishing events (retrying): 400 Bad Request
2016-02-16T18:31:02Z INFO send fail
2016-02-16T18:31:02Z INFO backoff retry: 8s

2016-02-16T18:31:10Z DBG ES Ping(url=http://search--domain-XXXXXXXX.eu-west-1.es.amazonaws.com:80, timeout=1m30s)
2016-02-16T18:31:10Z DBG Ping status code: 200
2016-02-16T18:31:10Z DBG Sending bulk request to http://search--domain-XXXXXXXX.eu-west-1.es.amazonaws.com:80/_bulk
2016-02-16T18:31:10Z ERR Failed to perform any bulk index operations: 400 Bad Request
2016-02-16T18:31:10Z INFO Error publishing events (retrying): 400 Bad Request
2016-02-16T18:31:10Z INFO send fail

Kindly help me to overcome this issue.
Thanks
Pratik


Hello,

What is the reason for sending logs to port 80 and not directly to Elasticsearch/Logstash? From what I see in your logs, the web server on this AWS instance doesn't understand the requests you're sending to it.
You can create a firewall rule to allow only your server to access ports 9200-9300 for Elasticsearch (or 5000 for Logstash), and then set the correct port in your Filebeat config.

@ngv I am also facing the same issue; however, per AWS, "The service supports HTTP on port 80, but does not support TCP transport."

Does that mean Filebeat cannot be used with the AWS Elasticsearch Service?


HTTP sits on top of TCP. You have to configure port 80 (the default TCP port used for HTTP) instead of Elasticsearch's default port 9200.

The AWS Elasticsearch HTTPS port is 443; use:
hosts: ["https://search-domain-XXXXXXX-eu-west-1.es.amazonaws.com:443"]

The default HTTP port is 80 and the default HTTPS port is 443.
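
For reference, in the same structure as the config above, a minimal output section would look roughly like this (the domain name is a placeholder for your own endpoint):

output:
  elasticsearch:
    # AWS Elasticsearch Service domain endpoint over HTTPS (placeholder name)
    hosts: ["https://search-domain-XXXXXXX-eu-west-1.es.amazonaws.com:443"]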

I'm also trying to get filebeat to talk to AWS Elasticsearch and running into the same errors. I'm running filebeat 5.0.0 on Ubuntu 14.04 and my config is basically the same as OP's. With additional debugging enabled, I see entries like this:

2016/10/27 13:13:11.847333 output.go:109: DBG  output worker: publish 50 events
2016/10/27 13:13:11.847391 client.go:615: DBG  ES Ping(url=http://search-domain.us-east-1.es.amazonaws.com:80, timeout=1m30s)
2016/10/27 13:13:11.848870 spooler.go:118: DBG  Flushing spooler because spooler full. Events flushed: 2048
2016/10/27 13:13:11.856899 client.go:639: DBG  Ping status code: 200
2016/10/27 13:13:11.856929 client.go:640: INFO Connected to Elasticsearch version 2.3.2
2016/10/27 13:13:11.856950 output.go:214: INFO Trying to load template for client: http://search-domain.us-east-1.es.amazonaws.com:80
2016/10/27 13:13:11.856971 client.go:655: DBG  HEAD http://search-domain.us-east-1.es.amazonaws.com:80/_template/filebeat  <nil>
2016/10/27 13:13:11.873497 output.go:235: INFO Template already exists and will not be overwritten.
2016/10/27 13:13:11.904051 client.go:232: ERR Failed to perform any bulk index operations: 400 Bad Request
2016/10/27 13:13:11.904085 single.go:91: INFO Error publishing events (retrying): 400 Bad Request
2016/10/27 13:13:11.904109 single.go:156: DBG  send fail

I've tried about every permutation of host configuration I can think of: "http://search-domain...es.amazonaws.com:80", "search-domain...es.amazonaws.com:80", "https://search-domain...es.amazonaws.com:443", etc.

From the log entries, I can see that it's connecting, getting a successful ping, detecting the version of Elasticsearch, but then something is going wrong and it's generating bad requests.

If I curl http://search-domain...es.amazonaws.com/_template/filebeat, I can see that there is a filebeat template there that looks correct.
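
(For anyone following along, that check is roughly the following; the domain endpoint is shortened here:)

# List the installed filebeat index template; ?pretty makes the JSON readable
curl -s "http://search-domain.us-east-1.es.amazonaws.com/_template/filebeat?pretty"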

Has anyone succeeded in getting filebeat to talk to AWS Elasticsearch?

Is there anything else I can try?

No idea about AWS Elasticsearch. Which Elasticsearch version are you using? Can you try a manual bulk index request using curl?

I get:

$ curl -XPOST https://search-...amazonaws.com/_bulk --data-binary "@requests"
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"explicit index in bulk is not allowed"}],"type":"illegal_argument_exception","reason":"explicit index in bulk is not allowed"},"status":400}

OK, so I created the cluster with terraform and set "rest.action.multi.allow_explicit_index" = true, which should be right. But now that I dig through the AWS console, it looks like it's set to false. So that's probably the issue.

I've updated it and will try again in a few (AWS ES takes like 20 minutes to process a config change).
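
For reference, that setting lives under advanced_options on the aws_elasticsearch_domain resource; a rough Terraform sketch (resource name, domain name, and version are placeholders):

resource "aws_elasticsearch_domain" "logs" {
  domain_name           = "domain-XXXXXXX"
  elasticsearch_version = "2.3"

  advanced_options = {
    # Beats bulk requests name the target index in each action line,
    # so this must be "true" for Filebeat to index successfully.
    "rest.action.multi.allow_explicit_index" = "true"
  }
}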

Yeah, that was definitely my issue. Thanks for the help.
