Add the current date in the Elasticsearch mapping

Hi,
I want to add the current date (now) as a new field in my index, so that I keep the Elasticsearch indexing time with each document.
How can I do that in the mapping settings?
I saw that '_timestamp' is deprecated...

Something like: 'indexing_ts' = now() ...

ty for your answer

Hi @ericpietro,

you cannot define this in the mapping directly. Rather, you should define indexing_ts as a normal date field in the mapping and set it explicitly to the current timestamp in your client application for each document.

Daniel

ty danielmitterdorfer :slight_smile:
but can you give me the syntax?
Do I have to declare a new field in the mapping (one that doesn't exist in my incoming JSON message), like this?
"properties": {
"my field x": {...},
"my field y" :{...},
...
"indexing_ts": { "type": "date", "format": "strict_date_optional_time||epoch_millis"}
}
??
Is that enough? And how does it get the current timestamp (now)?

Hi @ericpietro,

Here is a complete example. I cannot tell you which date format you have to use; that depends on your application and your requirements. I have chosen epoch_millis for this example.

PUT /logs
{
   "mappings": {
      "incoming": {
         "properties": {
            "timestamp": {
               "type": "date",
               "format": "epoch_millis"
            },
            "message": {
               "type": "string",
               "index": "not_analyzed"
            }
         }
      }
   }
}

You can now add a log record like this:

POST /logs/incoming/1
{
  "timestamp": 1469025859000,
  "message": "Hi there"
}

How can it get the current timestamp (now)?

As you can see in the example above, your application / the client has to provide it. Elasticsearch does not set this value anymore.
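A minimal sketch of that client-side step, assuming a Unix shell with GNU date (the index, type, and field names are just the ones from the example above):

```shell
# Compute "now" as epoch milliseconds, matching the "epoch_millis" mapping format,
# and build the document body the client would send to Elasticsearch.
now_ms=$(date +%s%3N)
echo "{\"timestamp\": ${now_ms}, \"message\": \"Hi there\"}"
```

The resulting JSON is what you would POST to /logs/incoming/1 as in the example above.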

Daniel

ty again Daniel... but it doesn't help very much.
My goal is to record the instant when Elasticsearch indexes my doc (log line)...
In my overall solution I have, for every event:
harvest-ts (when I catch the log line)
shipping-ts (when I ship it to Kafka via Logstash)
consuming-ts (when I consume the log line from Kafka and parse it)
indexing-ts (when I insert the log line into the Elasticsearch cluster; this is missing!!!)
That way I can calculate the lag between two consecutive steps or stages...
So sorry that _timestamp is deprecated... :unamused:

Hi @ericpietro,

so, which application puts your log lines into Elasticsearch? Do you use Logstash, or is this a homegrown application? In the former case: can't you add a mutate filter to add indexing_ts? In the latter case: can't the application add the field indexing_ts itself?
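As a sketch of the Logstash route (the field name indexing_ts is just an example, and depending on your Logstash version the event API is event['...'] = ... or event.set(...)), a ruby filter placed last in the filter block could stamp each event with the current time just before the output sends it:

```
filter {
  ruby {
    # Stamp the event with the current time as epoch milliseconds,
    # just before it leaves for the elasticsearch output.
    code => "event['indexing_ts'] = (Time.now.to_f * 1000).to_i"
  }
}
```

Note that this measures the moment the event passes through the filter stage, not the moment Elasticsearch actually indexes it.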

Daniel

hi Daniel,
thanks very much for answering :slight_smile:
I use Logstash, and I already 'time' the processing of my log line with consuming-ts; that is OK.
But I send each log line to Elasticsearch using the output plugin, and I would like to know the lag between when Logstash consumes the log line and when Elasticsearch indexes it in the database... (which is what _timestamp was used for before)...
have a nice day :slight_smile:

Hi Eric,

ah, now I get you. If you want this, then I guess you could use the Ingest Node API. It runs within Elasticsearch, so you get a chance to measure the time lag you're after.
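For reference, a sketch of what that could look like, assuming Elasticsearch 5.x ingest pipelines (the pipeline name is just an example): a set processor can copy the ingest timestamp into the document.

```
PUT _ingest/pipeline/stamp-indexing-ts
{
  "description": "Add the ingest (indexing) time to each document",
  "processors": [
    {
      "set": {
        "field": "indexing_ts",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}

POST /logs/incoming/1?pipeline=stamp-indexing-ts
{
  "timestamp": 1469025859000,
  "message": "Hi there"
}
```

You would still want a matching date field for indexing_ts in the mapping, with a format that accepts the timestamp the pipeline produces.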

Daniel