@timestamp picked as string instead of date


(Giulio) #1

Hi all,

I'm using Elasticsearch 6.0.0 and Filebeat 6.0.0 to forward and analyze some JMeter CSV test files.

I configured Filebeat to send some CSV data to an Elasticsearch instance through a pipeline, defined this way

PUT _ingest/pipeline/parse_test_csv
{
	"processors": [{
			"grok": {
				"field": "message",
				"patterns": ["%{INT:time},%{INT:elapsed},%{GREEDYDATA:label},%{INT:responseCode},%{DATA:responseMessage},%{GREEDYDATA:threadName},%{DATA:dataType},%{DATA:success},%{INT:bytes},%{INT:grpThreads},%{INT:allThreads},%{INT:latency}"]
			}
		}, {
			"date": {
				"field": "time",
				"formats": ["UNIX_MS"]
			}
		}, {
			"remove": {
				"field": ["threadName", "dataType", "bytes", "grpThreads", "allThreads"]
			}
		}, {
			"convert": {
				"field": "time",
				"type": "auto"
			}
		}, {
			"convert": {
				"field": "elapsed",
				"type": "auto"
			}
		}, {
			"convert": {
				"field": "label",
				"type": "auto"
			}
		}, {
			"convert": {
				"field": "responseCode",
				"type": "auto"
			}
		}, {
			"convert": {
				"field": "responseMessage",
				"type": "auto"
			}
		}, {
			"convert": {
				"field": "success",
				"type": "auto"
			}
		}, {
			"convert": {
				"field": "latency",
				"type": "auto"
			}
		}
	],
	"on_failure": [{
			"set": {
				"field": "error",
				"value": " - Error processing message - "
			}
		}
	]
}

As you can see, the pipeline works with data formatted like this:

1511260490262,7241,A title,1000,Test successful,Tests 1-1,text,true,0,1,1,0
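For anyone debugging a similar setup: the Simulate Pipeline API lets you run a sample line through the processors before wiring up Filebeat. A minimal sketch using the sample record above (the `_index` and `_type` metadata values are illustrative):

```json
POST _ingest/pipeline/parse_test_csv/_simulate
{
  "docs": [
    {
      "_index": "filebeat-test",
      "_type": "doc",
      "_source": {
        "message": "1511260490262,7241,A title,1000,Test successful,Tests 1-1,text,true,0,1,1,0"
      }
    }
  ]
}
```

The response shows the parsed fields, including the @timestamp produced by the date processor, so the pipeline side can be verified independently of the index mapping.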

The Filebeat configuration for the Elasticsearch output is simply this:

output.elasticsearch:
  hosts: ["<host>:<port>"]
  pipeline: "parse_test_csv"

Now I would like to use @timestamp (which holds the date-time value converted from the time field by the pipeline above) as the time field for the index pattern in Kibana. So I created an index pattern for filebeat*, but I am not able to select the @timestamp field as the time field for the pattern: it is recognized as a string.

Can you help me find what is wrong with my configuration? Where/how should I define the @timestamp field as a date so that I can select it as the time field of the index pattern?

Thank you
Giulio


(Giulio) #2

I'm adding the index settings here, with the dynamic mapping generated when the index was automatically created by ES.

{
	"filebeat-6.0.0-2017.11.23": {
		"aliases": {},
		"mappings": {
			"doc": {
				"_meta": {
					"version": "6.0.0"
				},
				"dynamic_templates": [{
						"strings_as_keyword": {
							"match_mapping_type": "string",
							"mapping": {
								"ignore_above": 1024,
								"type": "keyword"
							}
						}
					}
				],
				"date_detection": false,
				"properties": {
					"@timestamp": {
						"type": "keyword",
						"ignore_above": 1024
					},
					"beat": {
						"properties": {
							"hostname": {
								"type": "keyword",
								"ignore_above": 1024
							},
							"name": {
								"type": "keyword",
								"ignore_above": 1024
							},
							"version": {
								"type": "keyword",
								"ignore_above": 1024
							}
						}
					},
					"elapsed": {
						"type": "long"
					},
					"fields": {
						"properties": {
							"app_name": {
								"type": "keyword",
								"ignore_above": 1024
							}
						}
					},
					"label": {
						"type": "keyword",
						"ignore_above": 1024
					},
					"latency": {
						"type": "long"
					},
					"message": {
						"type": "keyword",
						"ignore_above": 1024
					},
					"offset": {
						"type": "long"
					},
					"prospector": {
						"properties": {
							"type": {
								"type": "keyword",
								"ignore_above": 1024
							}
						}
					},
					"responseCode": {
						"type": "long"
					},
					"responseMessage": {
						"type": "keyword",
						"ignore_above": 1024
					},
					"source": {
						"type": "keyword",
						"ignore_above": 1024
					},
					"success": {
						"type": "boolean"
					},
					"time": {
						"type": "float"
					}
				}
			}
		},
		"settings": {
			"index": {
				"mapping": {
					"total_fields": {
						"limit": "10000"
					}
				},
				"refresh_interval": "5s",
				"number_of_shards": "1",
				"provided_name": "filebeat-6.0.0-2017.11.23",
				"creation_date": "1511439697431",
				"number_of_replicas": "1",
				"uuid": "JLvQ6CuoQyivdiuq53XcUA",
				"version": {
					"created": "6000099"
				}
			}
		}
	}
}

As you can see, ES assigned the keyword data type to the @timestamp field, not the date type. So something went wrong when ES created the dynamic mapping for the index (or did it?)

"@timestamp": {
	"type": "keyword",
	"ignore_above": 1024
}
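
One way to guarantee the date type is an index template that pins the @timestamp mapping before the index is created (the template name below is illustrative; note that this would be overridden or made redundant by Filebeat's own template if that one is loaded correctly):

```json
PUT _template/filebeat-timestamp-fix
{
  "index_patterns": ["filebeat-*"],
  "mappings": {
    "doc": {
      "properties": {
        "@timestamp": {
          "type": "date"
        }
      }
    }
  }
}
```

Templates only apply at index creation time, so the existing filebeat-6.0.0-2017.11.23 index would have to be deleted (or a new daily index allowed to roll over) for the mapping to take effect.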

(Giulio) #3

Please, I'm stuck with this problem. Does anybody know what the problem could be?


(David Pilato) #4

Please read

Specifically the "be patient" part.


(Giulio) #5

Sorry, it was not my intention to rush you.


(Giulio) #6

An update.

I tried disabling the pipeline entirely, so the CSV document is sent to ES directly.

Again, the index gets created with the @timestamp field as keyword. So it seems that Filebeat creates the index in ES with the @timestamp field typed as keyword, even though it holds a regular date-time value like 2017-11-24T15:51:19.913Z.

It's strange, because the internal @timestamp field is created by Filebeat/ES and should be handled automatically as a date type (as stated here).

What do you think?


(David Pilato) #7

When filebeat starts, I believe that it creates an index template that you might need to adapt.

What are the current templates you have? GET _template


(Giulio) #8

I found the issue and fixed it.

"Simply" put, I had disabled Filebeat's default template (the mappings defined in fields.yml), so Filebeat was creating the fields without a type mapping.
In fact, in my trials I had set the setup.template.fields property to an empty file. Now I have set it to a custom file containing only the beat mapping section, and everything works as expected.
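
For reference, the relevant Filebeat settings look roughly like this (the custom file path is illustrative):

```yaml
setup.template.enabled: true
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
# Pointing this at an empty file strips the default field mappings,
# which is what made @timestamp fall back to the keyword type.
setup.template.fields: "custom_fields.yml"
```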

Sorry for creating a thread for an issue that normally should not exist, but I hope it will be useful to others in the future.


(system) closed #9

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.