I'm working on a project with a few different components that all write documents to the same indices. One of these components is Arkime, which writes its documents' `@timestamp` date as a UNIX milliseconds value. This results in documents with a `@timestamp` field that looks like this:
{
"filter": {
"_id": "231024-SQXv7KdQjhJLAIjUhQxu7cPB"
},
"range": [
0,
1698170382
],
"results": [
{
"_id": "231024-SQXv7KdQjhJLAIjUhQxu7cPB",
"_index": "arkime_sessions3-231024",
"_score": 0,
"_source": {
"@timestamp": 1698170342616,
"client": {
"bytes": 0
},
...
And I can verify that the mapping in the index is of type `date`:
{
"arkime_sessions3-231024" : {
"mappings" : {
"@timestamp" : {
"full_name" : "@timestamp",
"mapping" : {
"@timestamp" : {
"type" : "date"
}
}
}
}
}
}
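(For reference, that output is from the field mapping API, i.e. something like `GET arkime_sessions3-231024/_mapping/field/@timestamp`.)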
Another component of my project generates logs that go through LogStash for some enrichment. I'm using the `date` filter to populate the `@timestamp` field from another field's value. My project intends to support both Elasticsearch and OpenSearch backends. When using OpenSearch as the backend, I have no problem: the document is indexed, and the resulting document looks like this:
{
"filter": {
"_id": "231024-aHeDKjnEq5y9nn9Hj-e-EQ"
},
"range": [
0,
1698170294
],
"results": [
{
"_id": "231024-aHeDKjnEq5y9nn9Hj-e-EQ",
"_index": "arkime_sessions3-231024",
"_score": 0,
"_source": {
"@timestamp": "2023-10-24T17:54:26.168Z",
From what I understand, the formatting of the `@timestamp` field is because LogStash requires that field to be of type `LogStash::Timestamp`, which is serialized in ISO8601 format.
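For context, the relevant part of my pipeline looks roughly like this (the source field name here is just a placeholder for the real one):

```
filter {
  date {
    # "event_time" stands in for the actual field carrying the original time
    match  => ["event_time", "ISO8601"]
    target => "@timestamp"
  }
}
```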
However, when I attempt to use Elasticsearch as the backend, indexing any document with LogStash results in an error that looks like this:
...
"status": 400,
"error": {
"type": "document_parsing_exception",
"reason": "[1:390] failed to parse field [@timestamp] of type [long] in document with id '231024--qTiw48vzayoA1Jvertxdg'. Preview of field's value: '2023-10-24T19:54:50.495Z'",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "For input string: \"2023-10-24T19:54:50.495Z\""
}
}
...
I've tried converting the `@timestamp` field to a `long` value, setting it with a `mutate` filter, and doing it directly in embedded ruby, but all of those just result in errors in LogStash saying that it expects `@timestamp` to be of type `LogStash::Timestamp`.
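To illustrate, these are sketches of the kinds of attempts I mean (both get rejected with the `LogStash::Timestamp` error described above):

```
filter {
  # attempt 1: mutate convert ("integer" is the closest type mutate offers to long)
  mutate {
    convert => { "@timestamp" => "integer" }
  }

  # attempt 2: embedded ruby, rewriting @timestamp as epoch milliseconds
  ruby {
    code => "event.set('@timestamp', (event.get('@timestamp').to_f * 1000).to_i)"
  }
}
```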
So my questions boil down to these:

- Is there some way for me to force LogStash to allow my documents' `@timestamp` field to be a UNIX milliseconds time? Other `date` fields besides `@timestamp` work fine.
- If not, is there a way for me to make Elasticsearch accept both the UNIX milliseconds and ISO8601 formats (see the mapping sketch after this list)? Honestly, I don't see where the `long` definition for `@timestamp` is coming from in the first place.
- I know this is likely out of scope for this forum, but why does OpenSearch seem fine with the situation while Elasticsearch rejects it?
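If the answer to the second question is an explicit mapping, I imagine it would look something like this untested sketch, applied (e.g. via an index template) before the index is created, since `epoch_millis||strict_date_optional_time` should accept both representations:

```
PUT _index_template/arkime_sessions
{
  "index_patterns": ["arkime_sessions3-*"],
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": {
          "type": "date",
          "format": "epoch_millis||strict_date_optional_time"
        }
      }
    }
  }
}
```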
Warmly,
-SG