Failed to parse field "tls.certificate_not_valid_before" from heartbeat


I have a clean installation of Elasticsearch 6.5.4 and Heartbeat 6.5.4 (Elasticsearch uses HTTPS).
My heartbeat config is as follows:
- type: http

  # List of URLs to query
  urls: ["https://myurl:9200"]
  username: "beats_system"
  password: "${output.elasticsearch.password}"
  ssl.certificate_authorities: ["/logserver/applications/pki/myCA.crt"]
  check.request.method: GET
  check.response.status: 200

  # Configure task schedule
  schedule: '@every 60s'
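For reference, the same check the monitor performs can be reproduced by hand with curl, using the URL, user, and CA path from the config above (curl will prompt for the password):

```shell
# Reproduce the monitor's check manually; prints only the HTTP status code.
# The monitor expects 200, so a 401 here means the credentials are rejected.
curl -u beats_system --cacert /logserver/applications/pki/myCA.crt \
  -o /dev/null -s -w '%{http_code}\n' "https://myurl:9200"
```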

After starting heartbeat I get the following error in the heartbeat log:
2019-01-25T07:59:53.007+0100 WARN elasticsearch/client.go:521 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0x3295cd29, ext:63683996391, loc:(*time.Location)(0x1fc6920)}, Meta:common.MapStr(nil), Fields:common.MapStr{"monitor":map[string]interface {}{"host":"myurl", "ip":"XXX.XXX.XXX.XXX", "duration":map[string]interface {}{"us":0x26da6}, "status":"down", "scheme":"https", "id":"http@https://myurl:9200", "name":"http", "type":"http"}, "resolve":map[string]interface {}{"host":"myurl", "ip":"XXX.XXX.XXX.XXX", "rtt":map[string]interface {}{"us":0x1b8}}, "beat":map[string]interface {}{"hostname":"myurl", "version":"6.5.4", "name":"myurl"}, "host":map[string]interface {}{"containerized":true, "name":"myurl", "architecture":"x86_64", "os":map[string]interface {}{"codename":"Maipo", "platform":"rhel", "version":"7.6 (Maipo)", "family":""}, "id":"b7ab972bc56541c3aabfab009d16ad0d"}, "error":map[string]interface {}{"type":"validate", "message":"received status code 401 expecting 200"}, "http":map[string]interface {}{"url":"https://myurl:9200", "rtt":map[string]interface {}{"content":map[string]interface {}{"us":0x20}, "total":map[string]interface {}{"us":0x26a4b}, "write_request":map[string]interface {}{"us":0x5a}, "response_header":map[string]interface {}{"us":0x1ca21}, "validate":map[string]interface {}{"us":0x1ca42}}, "response":map[string]interface {}{"status_code":0x191}}, "tcp":map[string]interface {}{"port":0x23f0, "rtt":map[string]interface {}{"connect":map[string]interface {}{"us":0xcb}}}, "tls":map[string]interface {}{"certificate_not_valid_before":map[string]interface {}(nil), "certificate_not_valid_after":map[string]interface {}(nil), "rtt":map[string]interface {}{"handshake":map[string]interface {}{"us":0x9ea2}}}}, Private:interface {}(nil)}, Flags:0x0} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [tls.certificate_not_valid_before] of type 
[date]","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:129"}}

Can anyone advise what I am doing wrong?

Can you try the just released 6.6 version? This should now be fixed.

@Andrew_Cholakian1 can you explain why this is fixed in 6.6? Can you point to a card or issue regarding this problem?


Which product needs to be upgraded? Is upgrading heartbeat enough to test whether this is fixed, or do I also have to upgrade Elasticsearch?

Only heartbeat needs to be upgraded.

Do I have to do anything else? I upgraded to heartbeat 6.6, added my configuration and I still get:

Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0x1894b8a7, ext:63684886235, loc:(*time.Location)(0x1fe0a20)}, Meta:common.MapStr(nil), Fields:common.MapStr{"host":map[string]interface {}{"name":"host", "id":"b7ab972bc56541c3aabfab009d16ad0d", "containerized":true, "architecture":"x86_64", "os":map[string]interface {}{"family":"", "name":"Red Hat Enterprise Linux Server", "codename":"Maipo", "platform":"rhel", "version":"7.6 (Maipo)"}}, "monitor":map[string]interface {}{"scheme":"https", "id":"http@https://host:9200", "name":"http", "type":"http", "host":"host", "ip":"XXX.XXX.XXX.XXX", "duration":map[string]interface {}{"us":0x20e2b}, "status":"down"}, "resolve":map[string]interface {}{"host":"", "ip":"", "rtt":map[string]interface {}{"us":0x288}}, "error":map[string]interface {}{"message":"received status code 401 expecting 200", "type":"validate"}, "http":map[string]interface {}{"rtt":map[string]interface {}{"content":map[string]interface {}{"us":0x19}, "total":map[string]interface {}{"us":0x20ae3}, "write_request":map[string]interface {}{"us":0x31}, "response_header":map[string]interface {}{"us":0x1d2bd}, "validate":map[string]interface {}{"us":0x1d2d6}}, "response":map[string]interface {}{"status_code":0x191}, "url":"https://host:9200"}, "tcp":map[string]interface {}{"rtt":map[string]interface {}{"connect":map[string]interface {}{"us":0x433}}, "port":0x23f0}, "tls":map[string]interface {}{"certificate_not_valid_before":map[string]interface {}(nil), "certificate_not_valid_after":map[string]interface {}(nil), "rtt":map[string]interface {}{"handshake":map[string]interface {}{"us":0x335b}}}, "beat":map[string]interface {}{"hostname":"host", "version":"6.6.0", "name":"host"}}, Private:interface {}(nil)}, Flags:0x0} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse field [tls.certificate_not_valid_before] of type [date]","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 
1:1112"}}

I should add that the server's certificate has valid-from and valid-until dates set, so I am not sure why heartbeat might not read them...

Hmmm, this is a different error. I'm unable to reproduce it myself. It seems that bad data is being sent to ES.

Can you run heartbeat with the -d "publish" flag enabled and check your logs? This will cause heartbeat to log each event it sends to Elasticsearch with its full JSON. The lines should start with: 2019-02-04T11:00:00.174-0600 DEBUG [publish] pipeline/processor.go:308 Publish event:.
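Something like this should do it (the config path is just an example, adjust it for your installation):

```shell
# Run heartbeat in the foreground with the "publish" debug selector enabled;
# -e sends the log output to stderr, -d "publish" logs the full JSON of every
# event before it is sent to Elasticsearch.
heartbeat -e -c /etc/heartbeat/heartbeat.yml -d "publish"
```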

Also, can you share your system info? Operating system and version would be great to know, as well as if any proxies are enabled.

Also, do you get this issue against any public website URLs? I'd love to be able to repro this.

Unfortunately, I cannot access the internet, so I cannot test against public URLs. The server is Red Hat Enterprise Linux 7 and the event is:

{
  "@timestamp": "2019-02-05T06:05:25.377Z",
  "@metadata": {
    "beat": "heartbeat",
    "type": "doc",
    "version": "6.6.0"
  },
  "error": {
    "type": "validate",
    "message": "received status code 401 expecting 200"
  },
  "tcp": {
    "rtt": {
      "connect": {
        "us": 378
      }
    },
    "port": 9200
  },
  "tls": {
    "certificate_not_valid_before": "2019-01-22T09:10:13.000Z",
    "certificate_not_valid_after": "2020-01-22T09:10:13.000Z",
    "rtt": {
      "handshake": {
        "us": 15199
      }
    }
  },
  "http": {
    "response": {
      "status_code": 401
    },
    "rtt": {
      "total": {
        "us": 125907
      },
      "write_request": {
        "us": 78
      },
      "response_header": {
        "us": 110184
      },
      "validate": {
        "us": 110209
      },
      "content": {
        "us": 25
      }
    },
    "url": "https://host:9200"
  },
  "monitor": {
    "type": "http",
    "host": "host",
    "ip": "",
    "duration": {
      "us": 126934
    },
    "status": "down",
    "scheme": "https",
    "id": "http@https://host:9200",
    "name": "http"
  },
  "beat": {
    "version": "6.6.0",
    "name": "host",
    "hostname": "host"
  },
  "host": {
    "name": "host",
    "architecture": "x86_64",
    "os": {
      "codename": "Maipo",
      "platform": "rhel",
      "version": "7.6 (Maipo)",
      "family": "",
      "name": "Red Hat Enterprise Linux Server"
    },
    "id": "b7ab972bc56541c3aabfab009d16ad0d",
    "containerized": true
  },
  "resolve": {
    "host": "host",
    "ip": "",
    "rtt": {
      "us": 764
    }
  }
}

That looks like valid JSON for Elasticsearch.

So, it would help to clarify a few things:

  1. Are you still receiving those errors?
  2. Are some events being indexed or none?
  3. What is your current mapping if you GET /heartbeat-*/_mappings (you only need to share the most recent index's mapping).
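The mapping can be fetched the same way as the monitor's own request, e.g. with curl (credentials and CA path as in the monitor config; curl prompts for the password):

```shell
# Dump the mapping of all heartbeat indices; check the most recent index for
# the type of tls.certificate_not_valid_before (it should be "date").
curl -u beats_system --cacert /logserver/applications/pki/myCA.crt \
  "https://myurl:9200/heartbeat-*/_mappings?pretty"
```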
  1. Yes, we still receive those errors, but I have disabled all HTTPS endpoints when not testing...
  2. Currently, only non-encrypted endpoints can be indexed.
  3. The mapping is:

Do you have disk spooling turned on? This sounds a lot like

Hello Andrew,

Yes that was the root cause. Thank you!
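For anyone hitting the same thing: the disk spool is enabled via the queue settings in heartbeat.yml, so events written to the spool file by an older heartbeat can be replayed after an upgrade with the old, broken schema. A sketch of the relevant setting (the path shown is the default; values are examples):

```yaml
# heartbeat.yml -- the spool queue persists events to disk between restarts.
# Events spooled by 6.5.4 still carry the old tls fields when flushed by
# 6.6.0. Removing this section (falling back to the default in-memory queue)
# or deleting the spool file avoids replaying the stale events.
queue.spool:
  file:
    path: "${path.data}/spool.dat"   # delete this file to drop spooled events
    size: 512MiB
```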


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.