I'm discovering Elastic and I'm trying to set up a Filebeat client to read log files and push those logs to an Elasticsearch instance.
I'm following the Kibana tutorial for adding a Filebeat data source.
As instructed, I installed the client and updated filebeat.yml.
But when I run the filebeat.exe setup command, I get the following error:
[Error connection to Elasticsearch https://myinstance:443: Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license: could not extract license information from the server response: could not parse value for expiry time: strconv.Atoi: parsing "1588291199999": value out of range]
#=========================== Filebeat inputs =============================

filebeat.inputs:

# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines
  # that match any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines
  # that match any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the
  # files that match any regular expression from the list. By default, no files
  # are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering.
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is
  # common for Java stack traces or C line continuations.

  # The regexp pattern that has to be matched. The example pattern matches all
  # lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: "after" is the equivalent to "previous" and "before" is the equivalent to "next" in Logstash.
  #multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
#setup.dashboards.enabled: false
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
cloud.id: "IsetHereMyCloudId"

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
cloud.auth: "Myuser:ISetHereMypasswd"

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Enable ILM (beta) to use index lifecycle management instead of daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default it is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration.
#xpack.monitoring.elasticsearch:
I'm more and more convinced this is related to the cloud instance and the license that comes with a cloud instance. strconv.Atoi("1588291199999") tries to convert the string into an int, and it fails with an out-of-range error because the value does not fit in a 32-bit int.
Can I deactivate that check? Is there a workaround? I can't use Filebeat at all right now, so this is very blocking.
Can I disable the license check while waiting for a fix?
I tried to manually enroll Filebeat and upgraded to Filebeat 6.7.1, but I still get the error when the Filebeat agent tries to connect to the output (Elasticsearch).
Is there any solution for this issue? I am struggling to resolve it.
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://elk1.admin1:9200: Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license: could not extract license information from the server response: unknown state, received: 'expired']
I'm also getting this message. We have a trial license of 6.7.0 running on Docker and cannot get Filebeat or Metricbeat to work. We would like to evaluate Elasticsearch for our enterprise but cannot get the basic setup working.
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://elasticURL:9200: Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license: could not extract license information from the server response: could not parse value for expiry time: strconv.Atoi: parsing "1559153437490": value out of range]
I face the same issue with winlogbeat.
Below is the error.
Attempting to connect to Elasticsearch version 6.7.1
2019-05-06T19:15:28.568-0700 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":32062,"time":{"ms":16}},"total":{"ticks":56546,"time":{"ms":16},"value":56546},"user":{"ticks":24484}},"handles":{"open":201},"info":{"ephemeral_id":"7c609f34-05e5-48d1-8a90-d52e34545e01","uptime":{"ms":38640098}},"memstats":{"gc_next":33403904,"memory_alloc":16833768,"memory_total":259203648}},"libbeat":{"config":{"module":{"running":0}},"output":{"read":{"bytes":791},"write":{"bytes":392}},"pipeline":{"clients":3,"events":{"active":4119,"retry":50}}}}}}
2019-05-06T19:15:58.568-0700 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":32062},"total":{"ticks":56546,"value":56546},"user":{"ticks":24484}},"handles":{"open":201},"info":{"ephemeral_id":"7c609f34-05e5-48d1-8a90-d52e34545e01","uptime":{"ms":38670098}},"memstats":{"gc_next":33403904,"memory_alloc":16857632,"memory_total":259227512}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":3,"events":{"active":4119}}},"msg_file_cache":{"ApplicationSize":-1}}}}
2019-05-06T19:16:14.764-0700 ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://10.xx.xx.xx:9200)): Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license: could not extract license information from the server response: could not parse value for expiry time: strconv.Atoi: parsing "1559606399999": value out of range
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://10.106.233.118:9200/: Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license: could not extract license information from the server response: unknown state, received: 'expired']
Getting the same error when trying to run the Docker image of Filebeat 7.0.1.
The issue with the out-of-range value is reported as fixed in the latest Beats releases.
But some posts in this thread report a different error that simply says the license is expired. That looks unrelated to the out-of-range bug and could just be due to an actually expired license.