Hi all,
I'm using Elasticsearch 6.0.0 and Filebeat 6.0.0 to forward and analyze some JMeter CSV test files.
I configured Filebeat to send some CSV data to an Elasticsearch instance through an ingest pipeline, defined as follows:
PUT _ingest/pipeline/parse_test_csv
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{INT:time},%{INT:elapsed},%{GREEDYDATA:label},%{INT:responseCode},%{DATA:responseMessage},%{GREEDYDATA:threadName},%{DATA:dataType},%{DATA:success},%{INT:bytes},%{INT:grpThreads},%{INT:allThreads},%{INT:latency}"]
      }
    },
    {
      "date": {
        "field": "time",
        "formats": ["UNIX_MS"]
      }
    },
    {
      "remove": {
        "field": ["threadName", "dataType", "bytes", "grpThreads", "allThreads"]
      }
    },
    { "convert": { "field": "time", "type": "auto" } },
    { "convert": { "field": "elapsed", "type": "auto" } },
    { "convert": { "field": "label", "type": "auto" } },
    { "convert": { "field": "responseCode", "type": "auto" } },
    { "convert": { "field": "responseMessage", "type": "auto" } },
    { "convert": { "field": "success", "type": "auto" } },
    { "convert": { "field": "latency", "type": "auto" } }
  ],
  "on_failure": [
    {
      "set": {
        "field": "error",
        "value": " - Error processing message - "
      }
    }
  ]
}
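For what it's worth, I can check what the pipeline produces for a given input line with the simulate API, e.g. (using a sample line from my data):

POST _ingest/pipeline/parse_test_csv/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "1511260490262,7241,A title,1000,Test successful,Tests 1-1,text,true,0,1,1,0"
      }
    }
  ]
}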
As you can see, the pipeline expects data formatted like this:
1511260490262,7241,A title,1000,Test successful,Tests 1-1,text,true,0,1,1,0
The Filebeat configuration for the Elasticsearch output is simply this:
output.elasticsearch:
  hosts: ["<host>:<port>"]
  pipeline: "parse_test_csv"
Now I would like to use @timestamp (which holds the date-time value converted from the time field, as per the pipeline defined above) as the time field for the index pattern in Kibana. So I created an index pattern for filebeat*, but I am not able to select the @timestamp field as the time field for the pattern: it is recognized as a string.
Can you help me find what is wrong with my configuration? Where/how should I define the @timestamp field as a date, so that I can select it as the time field of the index pattern?
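(For reference, I believe the actual mapping of the field can be inspected with the field mapping API, e.g.

GET filebeat-*/_mapping/field/@timestamp

where filebeat-* is just my guess at the index name pattern created by Filebeat.)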
Thank you
Giulio