Change TTL value

Hi all,

I'm investigating how I can use TTL in my system to implement document
aging.
I've already configured the indices.ttl.interval and indices.ttl.bulk_size
properties (see the snippet after the mapping below), and created the
specific mapping for my index in:

config/mappings/my_index/my_type.json, containing:

{
  "my_type": {
    "_ttl": {
      "enabled": true,
      "default": "7d"
    }
  }
}
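
For reference, here is how I configured those purge properties in
config/elasticsearch.yml (a one-minute interval and a bulk size of 100,
matching what I describe in question 3 below):

indices.ttl.interval: 60s
indices.ttl.bulk_size: 100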

I now have some questions:
1 - Where can I see the possible time values? Can I use "s" for seconds
and "m" for minutes? I assume "d" stands for days.
2 - Is this TTL coupled to each document, or will already indexed
documents reflect the change if I update it? And is there any other way
to change it besides updating the mapping?
3 - Why is my bulk_size property not working? It is set to 100, but
instead of deleting documents in batches of 100 it clears all my
documents.
4 - To set the interval of the TTL process, can I use a cron-like
expression, or am I limited to time values?

Thanks in advance!

Hey,

  1. I am not sure if it is documented. You can use w for weeks, d for days, H
    for hours, m for minutes, s for seconds, ms for milliseconds, etc. Maybe
    have a look here for more information:
    https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/common/unit/TimeValue.java

  2. the TTL defined in the mapping will be applied to all indexed docs of type
    "my_type" if you don't explicitly specify a TTL for each doc you index
    (see the example after this list).

  3. it is normal that all your expired docs are purged no matter what your
    bulk_size value is. The bulk_size value just configures how many docs are
    expired at once by the purge process.

  4. No.
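
For example, to give a single document its own TTL at index time, you can
pass it on the index request. A quick sketch (the index, type, id, and body
here are just placeholders):

curl -XPUT 'http://localhost:9200/my_index/my_type/1?ttl=1d' -d '{"field": "value"}'

A TTL set this way applies only to that document and takes precedence over
the default from the mapping.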


Hi Benjamin, thanks for your reply!

I deduce from your answer to 2) that once a document is indexed with a
TTL value, no changes will be made to it even if the default TTL is
later changed (by updating the mapping), unless we reindex all
documents. Correct?

And, regarding 3): if I have 1000 documents and a bulk_size of 100
(and, for instance, the TTL process runs every minute and the TTL value
is set to 5m), shouldn't I see only 100 documents deleted at a time?
The problem is that, in the situation above, all documents are deleted!

Thanks!

  • changing the default TTL value in the mapping won't affect the TTL of
    already indexed docs. If you want to update the TTL of an already indexed
    document, you can use the update API that has recently been added and do
    something like this in your script: ctx._ttl = "2d" (this is only available
    in the master branch for now and will be in the next released version; see
    the sketch after this list)

  • Nope. As it is currently implemented, the purger is called, it collects
    the 1000 expired documents, then executes 10 bulk requests of 100 delete
    requests each. So all expired documents are deleted.
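
For instance, a sketch of what that update call could look like (the index,
type, and id are placeholders, and this assumes a build from master):

curl -XPOST 'http://localhost:9200/my_index/my_type/1/_update' -d '{"script": "ctx._ttl = \"2d\""}'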

Ok, got it!

Thanks for your help!

Sorry to jump in on this. Benjamin, you say "changing the default TTL
in the mapping", and I've been trying to do exactly that without much
success on 0.18.6. Whatever I first use as the mapping seems to stick.
There's been some discussion of what can be updated in mappings, but I
couldn't find any solid information. Is it possible to update the
default TTL?

For example:

$ curl -XPUT 'http://localhost:9200/myindex/mytype/_mapping' -d '{"mytype": {"_timestamp": {"enabled": false}, "_ttl": {"enabled": false, "default": "10d"}}}'
{"ok":true,"acknowledged":true}
$ curl http://localhost:9200/myindex/mytype/_mapping
{"mytype":{"_timestamp":{"enabled":true},"_ttl":{"enabled":true,"default":86400000},"properties":{}}}
$ curl -XPUT 'http://localhost:9200/myindex/mytype/_mapping' -d '{"mytype": {"_timestamp": {"enabled": false}, "_ttl": {"enabled": false}}}'
{"ok":true,"acknowledged":true}
$ curl http://localhost:9200/myindex/mytype/_mapping
{"mytype":{"_timestamp":{"enabled":true},"_ttl":{"enabled":true,"default":86400000},"properties":{}}}

The server debug log contains:
[2012-01-12 20:10:39,612][DEBUG][cluster.service ] [White Rabbit] processing [put-mapping [sweeper]]: execute
[2012-01-12 20:10:39,627][DEBUG][cluster.service ] [White Rabbit] processing [put-mapping [sweeper]]: no change in cluster_state

Thanks!

Hey Steve,

it seems to be related to the mapping merge process. Just open an issue
and it will be supported soon.

Thanks