I have installed Filebeat to send logs to a production ECE (Elastic Cloud Enterprise) deployment.
Filebeat was installed successfully and I can see that it is harvesting files; however, it is unable to ship them to ECE.
These are the logs:
Feb 19 14:52:49 hrc-ece-centos74 filebeat[47017]: 2020-02-19T14:52:49.148+0400 ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(https://4f02406a0dashdiasd33a23725be1f.dns:9243)): Get https://4f02406a0dashdiasd33a23725be1f.dns:9243: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 19 14:52:49 ece-centos74 filebeat[47017]: 2020-02-19T14:52:49.149+0400 INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(https://4f02406a0dashdiasd33a23725be1f.dns:9243)) with 75 reconnect attempt(s)
Feb 19 14:52:49 ece-centos74 filebeat[47017]: 2020-02-19T14:52:49.149+0400 INFO [publisher] pipeline/retry.go:196 retryer: send unwait-signal to consumer
Feb 19 14:52:49 ece-centos74 filebeat[47017]: 2020-02-19T14:52:49.149+0400 INFO [publisher] pipeline/retry.go:198 done
Feb 19 14:52:49 ece-centos74 filebeat[47017]: 2020-02-19T14:52:49.149+0400 INFO [publisher] pipeline/retry.go:173 retryer: send wait signal to consumer
Feb 19 14:52:49 ece-centos74 filebeat[47017]: 2020-02-19T14:52:49.149+0400 INFO [publisher] pipeline/retry.go:175 done
This is my filebeat.yml file:
###################### Filebeat Configuration Example #########################
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/messages
#==================== Elasticsearch template setting ==========================
setup.template.settings:
index.number_of_shards: 1
#index.codec: best_compression
#_source.enabled: false
output.elasticsearch.index: "test-%{[agent.version]}-%{+yyyy.MM.dd}"
setup.template.name: "test-case"
setup.template.pattern: "test-*"
#================================ Outputs =====================================
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["4f02406a0d9a3yfhdsufdsf23725be1f.dns:9243"]
#index: "test-%{[agent.version]}-%{+yyyy.MM.dd}"
# Protocol - either `http` (default) or `https`.
protocol: "https"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
username: "elastic"
password: "changeme"
#----------------------------- Logstash output --------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
Does curl -k -u elastic:changeme 'https://$CLUSTER_ID.dns:9243' from the same host work? If not, what does it return (with -vvv)?
My guess would be that there's a firewall or something in the way, based on "Client.Timeout exceeded while awaiting headers": a timeout would indicate that something is blocking the connection.
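If curl also times out, one quick way to confirm it is a network/firewall problem rather than a TLS or auth problem is to test the raw TCP port, for example (a sketch; nc flag support varies by distribution, so adjust as needed):
# Check whether the ECE proxy port is reachable at all (no TLS involved)
nc -vz $CLUSTER_ID.dns 9243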
[root@ece-centos74 bin]# curl -k -u elastic:changeme 'https://$CLUSTER_ID.dns:9243'
curl: (7) Failed connect to $CLUSTER_ID.dns:9243; Connection timed out
[root@hrc-ece-centos74 bin]# curl -k -u elastic:changeme 'https://$CLUSTER_ID.dns:9243' -vvv
* About to connect() to $CLUSTER_ID.dns:9243 (#0)
* Trying $IP_ADDRESS...
I have resolved the issue now. The host was trying to connect to an IP it doesn't have access to. I've put a manual entry in my /etc/hosts file which maps my DNS entry to the correct IP (a sketch of the entry is shown after the curl output below). After doing so:
[root@hrc-ece-centos74 ~]# curl -k -u elastic:changeme 'https://$CLOUD_ID.dns:9243' -vvv
* About to connect() to $CLOUD_ID.dns port 9243 (#0)
* Trying $IP_ADDRESS...
* Connected to $CLOUD_ID.dns ($IP_ADDRESS) port 9243 (#0)
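For reference, the /etc/hosts entry is of this form (the IP and hostname here are placeholders, not the real values):
# /etc/hosts on the Filebeat host: map the ECE proxy DNS name to a reachable IP
# (placeholder values; substitute the real proxy IP and cluster endpoint)
10.0.0.10   $CLOUD_ID.dns
With that entry in place curl connects, but Filebeat then fails with a different error: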
Feb 23 16:55:42 ece-centos74 filebeat: 2020-02-23T16:55:42.524+0400#011ERROR#011pipeline/output.go:100#011Failed to connect to backoff(elasticsearch(https://$CLOUD_ID.dns:9243)): Get https://$CLOUD_ID.dns:9243: x509: certificate signed by unknown authority
Feb 23 16:55:42 ece-centos74 filebeat: 2020-02-23T16:55:42.524+0400#011INFO#011pipeline/output.go:93#011Attempting to reconnect to backoff(elasticsearch(https://$CLOUD_ID.dns:9243)) with 8 reconnect attempt(s)
Feb 23 16:55:42 centos74 filebeat: 2020-02-23T16:55:42.524+0400#011INFO#011[publisher]#011pipeline/retry.go:196#011retryer: send unwait-signal to consumer
Feb 23 16:55:42 hrc-ece-centos74 filebeat: 2020-02-23T16:55:42.524+0400#011INFO#011[publisher]#011pipeline/retry.go:198#011 done
Feb 23 16:55:42 ece-centos74 filebeat: 2020-02-23T16:55:42.524+0400#011INFO#011[publisher]#011pipeline/retry.go:173#011retryer: send wait signal to consumer
Feb 23 16:55:42 ece-centos74 filebeat: 2020-02-23T16:55:42.524+0400#011INFO#011[publisher]#011pipeline/retry.go:175#011 done
Feb 23 16:55:42 ece-centos74 filebeat: 2020-02-23T16:55:42.537+0400#011DEBUG#011[elasticsearch]#011elasticsearch/client.go:737#011Ping request failed with: Get https://$CLOUD_ID.dns:9243: x509: certificate signed by unknown authority
I am still able to curl successfully, and in ECE my certificates are fine. The certificates are self-signed; I created them with elasticsearch-certutil on a different machine, NOT on one of my ECE machines.
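One way to see which CA the endpoint actually presents, and whether the CA file on this host matches it, is something along these lines (a sketch; the CA path is the one referenced in the filebeat.yml further down and is an assumption about this host):
# Print the certificate chain the ECE proxy presents on port 9243
openssl s_client -connect $CLOUD_ID.dns:9243 -showcerts </dev/null
# Ask curl to verify the endpoint against the same CA file that
# filebeat.yml points at (path is an assumption taken from the config below)
curl -u elastic:changeme --cacert /usr/share/elasticsearch/ca/ca/ca.crt 'https://$CLOUD_ID.dns:9243'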
The configuration below produces the logs above, in which I am getting the certificate error:
###################### Filebeat Configuration Example #########################
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
# Change to true to enable this input configuration.
enabled: true
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /var/log/*.log
#- c:\programdata\elasticsearch\logs\*
- /var/log/elasticsearch/elasticsearch-*.json
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["localhost:9200"]
index: "filebeat-test"
setup.template:
enabled: false
# Protocol - either `http` (default) or `https`.
protocol: "https"
ssl.verification_mode: "none"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
username: "elastic"
password: "changeme"
#----------------------------- Logstash output --------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["https://$CLOUD_ID.dns:9243"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
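As an aside on the ssl.verification_mode: "none" line above: if the CA that signed the cluster certificate is available as a file, the usual alternative is to point the output at it rather than disabling verification, along these lines (a sketch; the path is an assumption and matches the one used further down):
output.elasticsearch:
  protocol: "https"
  # Trust the CA that signed the ECE proxy/cluster certificate
  ssl.certificate_authorities: ["/usr/share/elasticsearch/ca/ca/ca.crt"]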
This configuration below DOES NOT produce the errors above, and I can see the index in Elasticsearch:
###################### Filebeat Configuration Example #########################
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
# Change to true to enable this input configuration.
enabled: true
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /var/log/*.log
#- c:\programdata\elasticsearch\logs\*
- /var/log/elasticsearch/elasticsearch-*.json
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["localhost:9200"]
#index: "filebeat-test"
#setup.template:
#enabled: false
# Protocol - either `http` (default) or `https`.
protocol: "https"
ssl.verification_mode: "none"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
username: "elastic"
password: "changeme"
#----------------------------- Logstash output --------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["https://$CLOUD_ID.dns:9243"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
This filebeat.yml file (the only difference is that I am trying to change the index name) gives me the error messages above ("x509: certificate signed by unknown authority"):
###################### Filebeat Configuration Example #########################
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
# Change to true to enable this input configuration.
enabled: true
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /var/log/*.log
#- c:\programdata\elasticsearch\logs\*
- /var/log/elasticsearch/elasticsearch-*.json
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["https://$CLOUD_ID.dns:9243"]
index: "filebeat-test"
setup.template:
name: "filebeat"
pattern: "filebeat*"
enabled: "false"
# Protocol - either `http` (default) or `https`.
protocol: "https"
ssl.verification_mode: "none"
ssl.enabled: "true"
ssl.certificate_authorities: ["/usr/share/elasticsearch/ca/ca/ca.crt"]
ssl.certificate: "/usr/share/elasticsearch/ca/instance/instance.crt"
ssl.key: "/usr/share/elasticsearch/ca/instance/instance.key"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
username: "elastic"
password: "changeme"
#----------------------------- Logstash output --------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
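On the index change itself: the Filebeat documentation notes that when output.elasticsearch.index is customised, setup.template.name and setup.template.pattern must be set as well (the config above does this, as does the final config further down); the minimal pairing looks roughly like this:
output.elasticsearch:
  index: "filebeat-test"

# Required whenever output.elasticsearch.index is customised
setup.template:
  name: "filebeat"
  pattern: "filebeat*"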
This filebeat.yml file (the only difference is that I am using the default index name) gives me NO error messages, and I can see the index in Elasticsearch plus the logs:
###################### Filebeat Configuration Example #########################
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
# Change to true to enable this input configuration.
enabled: true
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /var/log/*.log
#- c:\programdata\elasticsearch\logs\*
- /var/log/elasticsearch/elasticsearch-*.json
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["https://$CLOUD_ID.dns:9243"]
#index: "filebeat-test"
#setup.template:
#name: "filebeat"
#pattern: "filebeat*"
#enabled: "false"
# Protocol - either `http` (default) or `https`.
protocol: "https"
ssl.verification_mode: "none"
ssl.enabled: "true"
ssl.certificate_authorities: ["/usr/share/elasticsearch/ca/ca/ca.crt"]
ssl.certificate: "/usr/share/elasticsearch/ca/instance/instance.crt"
ssl.key: "/usr/share/elasticsearch/ca/instance/instance.key"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
username: "elastic"
password: "changeme"
#----------------------------- Logstash output --------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
I concatenated them and uploaded the result successfully into the Cloud UI.
I then copied the certificates onto the server I am trying to send logs from; the paths to the certificates are the ones referenced in filebeat.yml (roughly as sketched below).
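Roughly, the commands involved looked like this (a sketch only; the file and directory names are assumptions based on the elasticsearch-certutil defaults and the paths referenced in filebeat.yml):
# Concatenate the instance certificate and the CA certificate into one file
# to upload in the Cloud UI (file names are assumptions)
cat instance/instance.crt ca/ca.crt > chain.crt
# Copy the CA and instance files onto the host running Filebeat, into the
# paths that filebeat.yml references
scp -r ca instance root@ece-centos74:/usr/share/elasticsearch/ca/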
For those who had the same issue as me, this was my final filebeat.yml file:
###################### Filebeat Configuration Example #########################
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
# Change to true to enable this input configuration.
enabled: true
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /var/log/*.log
#- c:\programdata\elasticsearch\logs\*
- /var/log/elasticsearch/elasticsearch-*.json
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
# Array of hosts to connect to.
hosts: ["https://$CLOUD_ID.dns:9243"]
index: "filebeat-test"
# Protocol - either `http` (default) or `https`.
protocol: "https"
ssl.verification_mode: "none"
ssl.enabled: "true"
ssl.certificate_authorities: ["/usr/share/elasticsearch/ca/ca/ca.crt"]
ssl.certificate: "/usr/share/elasticsearch/ca/instance/instance.crt"
ssl.key: "/usr/share/elasticsearch/ca/instance/instance.key"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
username: "elastic"
password: "changeme"
setup.template:
name: "filebeat"
pattern: "filebeat*"
enabled: "false"
#----------------------------- Logstash output --------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
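As a quick sanity check after editing the file, Filebeat's built-in test commands will confirm that the configuration parses and that the output (including TLS and credentials) is reachable before restarting the service:
# Validate the configuration file
filebeat test config -c /etc/filebeat/filebeat.yml
# Check connectivity, the TLS handshake and authentication for the configured output
filebeat test output -c /etc/filebeat/filebeat.yml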