Update timestamp in all documents in an index

No, I meant:

GET filebeat-6.0.0-2018.12.18/_search
{
  "query": {
    "match": {
      "field": "@timestamp"
    }
  }
}

Hey David,

I ran what you gave me:
GET filebeat-6.0.0-2018.12.18/_search
{
  "query": {
    "match": {
      "field": "@timestamp"
    }
  }
}

and I got this response back:

{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 0,
    "max_score": null,
    "hits": []
  }
}

Can you please indicate why we are not getting any hits back for the @timestamp field?

Because no documents match your request.

If you take a look at one of the docs in that index with a particular ID, you will see @timestamp. If you can point me in the correct direction, that would be great; I have done everything I can to research this and can't find much online about modifying the timestamp correctly.

{
  "_index": "filebeat-6.0.0-2018.12.18",
  "_type": "doc",
  "_id": "5uwmv2cBNopQFAimAb_2",
  "_version": 1,
  "found": true,
  "_source": {
    "input": {
      "type": "log"
    },
    "message": "Dec 18 02:29:56 takenout salt-minion: [INFO ] Running scheduled job: __mine_interval",
    "fileset": {
      "module": "system",
      "name": "syslog"
    },
    "host": {
      "name": "TAKENout"
    },
    "source": "/var/log/messages",
    "index_prefix": "filebeat-6.0.0",
    "@timestamp": "2018-12-18T02:29:56.000Z",
    "offset": 130785,
    "system": {
      "syslog": {
        "message": "[INFO ] Running scheduled job: __mine_interval",
        "hostname": "takenout",
        "program": "salt-minion",
        "timestamp": "Dec 18 02:29:56"
      }
    },
    "@version": "1",
    "prospector": {
      "type": "log"
    },
    "beat": {
      "hostname": "takenoutfor",
      "name": "takenout",
      "version": "6.3.0"
    }
  }
}

The match query here checks whether a field named `field` contains the value `@timestamp`. This is obviously not the case.

I don't know what your intention is with this query.
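
If the intention was to find the documents that actually contain a @timestamp field, an exists query would do that; for example:

# Find documents where the @timestamp field exists
GET filebeat-6.0.0-2018.12.18/_search
{
  "query": {
    "exists": {
      "field": "@timestamp"
    }
  }
}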

BTW, could you please format your code, logs, or configuration files using the </> icon as explained in this guide, and not the citation button? It will make your post more readable.

Or use markdown style like:

```
CODE
```

This is the icon to use if you are not using markdown format:

There's a live preview panel for exactly this reason.

Lots of people read these forums, and many of them will simply skip over a post that is difficult to read, because it's just too large an investment of their time to try and follow a wall of badly formatted text.
If your goal is to get an answer to your questions, it's in your interest to make it as easy to read and understand as possible.
Please update your post.

Is this a proper command to update all timestamps in all docs in an index, or will the below not work and I have to use update by query? The reason I ask is that I get the error shown below the command when I run the update with the following syntax.

POST filebeat-6.0.0-2018.12.17/_update
{
  "script": {
    "source": "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)",
    "lang": "painless"
  }
}

The error is:

{
  "error": {
    "root_cause": [
      {
        "type": "cluster_block_exception",
        "reason": "blocked by: [FORBIDDEN/8/index write (api)];"
      }
    ],
    "type": "cluster_block_exception",
    "reason": "blocked by: [FORBIDDEN/8/index write (api)];"
  },
  "status": 403
}

Could you tell me what the output of the following is:

GET /_cat/plugins?v
GET /
name    component    version
2sxScBp ingest-geoip 6.3.0
ShECWbX ingest-geoip 6.3.0
SvZv1kU ingest-geoip 6.3.0
{
  "name": "2sxScBp",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "9ua9HU9SRva7p6VqHiBfpw",
  "version": {
    "number": "6.3.0",
    "build_flavor": "default",
    "build_type": "rpm",
    "build_hash": "424e937",
    "build_date": "2018-06-11T23:38:03.357887Z",
    "build_snapshot": false,
    "lucene_version": "7.3.1",
    "minimum_wire_compatibility_version": "5.6.0",
    "minimum_index_compatibility_version": "5.0.0"
  },
  "tagline": "You Know, for Search"
}

And:

GET /filebeat-6.0.0-2018.12.17/_settings
{
  "filebeat-6.0.0-2018.12.17": {
    "settings": {
      "index": {
        "routing": {
          "allocation": {
            "require": {
              "_name": "lxc-elastic-01",
              "_ip": "172.16.99.212"
            }
          }
        },
        "mapping": {
          "total_fields": {
            "limit": "10000"
          }
        },
        "refresh_interval": "5s",
        "number_of_shards": "5",
        "blocks": {
          "write": "true"
        },
        "provided_name": "filebeat-6.0.0-2018.12.17",
        "creation_date": "1545076325812",
        "number_of_replicas": "1",
        "uuid": "ZB2RB6VqRqavxNYkp8uZjw",
        "version": {
          "created": "6030099"
        }
      }
    }
  }
}

Your index is blocked for writing. You need to change that setting.

I changed it to false by doing this:

PUT filebeat-6.0.0-2018.12.17/_settings
{
  "index": {
    "blocks.write": false
  }
}
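
A quick way to double-check that the block is really gone is to read the setting back with the settings filtering API, e.g.:

# Read back only the blocks settings to confirm write is no longer true
GET filebeat-6.0.0-2018.12.17/_settings/index.blocks.*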

Then I got this when I ran the command again:

{
  "error": {
    "root_cause": [
      {
        "type": "invalid_type_name_exception",
        "reason": "Document mapping type name can't start with '_', found: [_update]"
      }
    ],
    "type": "invalid_type_name_exception",
    "reason": "Document mapping type name can't start with '_', found: [_update]"
  },
  "status": 400
}

Looks like _update might be a single-document API, and for multi-document updates you have to use _update_by_query. Is this correct, according to this doc?
https://www.elastic.co/guide/en/elasticsearch/reference/6.3/docs.html
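
For reference, the single-document update API in 6.x addresses one document by type and ID in the URL path. A sketch, reusing the document shown earlier purely as an illustration:

# Single-document update: POST {index}/{type}/{id}/_update
POST filebeat-6.0.0-2018.12.18/doc/5uwmv2cBNopQFAimAb_2/_update
{
  "doc": {
    "@timestamp": "2019-12-18T02:29:56.000Z"
  }
}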

If so, I ran this now:

POST filebeat-6.0.0-2018.12.17/_update_by_query
{
  "query": {
    "match_all": {}
  },
  "script": {
    "source": "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)",
    "lang": "painless"
  }
}

and got this

{
  "error": {
    "root_cause": [
      {
        "type": "script_exception",
        "reason": "runtime error",
        "script_stack": [
          "java.util.Objects.requireNonNull(Objects.java:228)",
          "java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1848)",
          "java.time.OffsetDateTime.parse(OffsetDateTime.java:402)",
          "java.time.OffsetDateTime.parse(OffsetDateTime.java:387)",
          "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)",
          "                                                        ^---- HERE"
        ],
        "script": "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)",
        "lang": "painless"
      }
    ],
    "type": "script_exception",
    "reason": "runtime error",
    "script_stack": [
      "java.util.Objects.requireNonNull(Objects.java:228)",
      "java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1848)",
      "java.time.OffsetDateTime.parse(OffsetDateTime.java:402)",
      "java.time.OffsetDateTime.parse(OffsetDateTime.java:387)",
      "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)",
      "                                                        ^---- HERE"
    ],
    "script": "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)",
    "lang": "painless",
    "caused_by": {
      "type": "null_pointer_exception",
      "reason": "text"
    }
  },
  "status": 500
}

Looks like I'm back to square one, where the update by query is getting a null, but when I do a _search query on that index I get hits:

{
  "took": 3,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 82199,
    "max_score": 1,
    "hits": [
      {

The doc says that to access the _source you need to write:

ctx['_source']

More at https://www.elastic.co/guide/en/elasticsearch/painless/6.6/painless-update-by-query-context.html

Maybe that's your problem?

Hello David,

I'm using version 6.3, and according to this doc it seems like I have the correct syntax, but I will try what you indicated and post my findings:

https://www.elastic.co/guide/en/elasticsearch/painless/6.3/painless-examples.html

Thanks

Hello David,

I just tried this:

POST filebeat-6.0.0-2018.12.18/_update_by_query
{
  "query": {
    "match_all": {}
  },
  "script": {
    "source": "ctx._source['@timestamp'] = OffsetDateTime.parse(ctx._source['@timestamp']).plusYears(1)"
  }
}

and got this

{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "cannot write xcontent for unknown value of type class java.time.OffsetDateTime"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "cannot write xcontent for unknown value of type class java.time.OffsetDateTime"
  },
  "status": 400
}

I have no clue what the syntax is here, since the docs point me in all different directions. Please let me know what I'm missing.

Thanks

Hey David,

I got it to work by using the toString() method; the value written back into _source apparently has to be a plain string rather than an OffsetDateTime object:

POST filebeat-6.0.0-2018.12.18/_update_by_query
{
  "query": {
    "match_all": {}
  },
  "script": {
    "source": "ctx._source['@timestamp'] = OffsetDateTime.parse(ctx._source['@timestamp']).plusYears(1).toString()"
  }
}

How do I get my new data with the new timestamps to show up under _cat/indices? For example, I modified filebeat-6.0.0-2018.12.18, 2018.12.17, and 2018.11.17 and added one year to them. I assumed I would see three new indices with the new year, but I am only seeing one, 2019.12.18. Under the Kibana Discover tab the documents show up under the current month with the same document count as last year, but when I do _cat/indices the count is not correct.

green open filebeat-6.0.0-2018.12.17 ZB2RB6VqRqavxNYkp8uZjw 5 1  82199  3323  42.7mb  21.7mb
green open filebeat-6.0.0-2019.02.19 Etq3ubN_R5KIl53Wi_FP6g 5 1  62364     0  38.7mb  19.4mb
green open filebeat-6.0.0-2018.12.18 x7476yM1Tqqd7iITM0lH9g 5 1  35396     0  18.5mb   9.2mb
green open filebeat-6.0.0-2019.02.10 L4mQJzCeRjuTLZPWB0PKdg 5 1 144127     0  77.7mb  38.8mb
green open filebeat-6.0.0-2019.02.12 B712uGiIRjmrEeNyl5F7rA 5 1  69121     0  41.4mb  20.7mb
green open filebeat-6.0.0-2019.02.11 mICGDhjFRRqCdHIqt2PrLQ 5 1 190740     0 141.7mb  70.9mb
green open filebeat-6.0.0-2018.11.16 SpqmHavsS2mMeNR0loY4Kw 5 1 756767 78627 792.3mb 400.2mb
green open filebeat-6.0.0-2018.11.15 wf8vC6nOTn-QHi-eijOr8Q 5 1 878074     0 424.3mb 210.9mb
green open filebeat-6.0.0-2019.12.18 aX1w-bAPTJ6FvI4jYhu2GQ 5 1   5013     0   3.4mb   1.7mb
green open filebeat-6.0.0-2019.03.04 uG5mp1RKQIWhK548rodM2w 5 1  63362     0  59.3mb  31.5mb

I noticed it created the 2019.12.18 index automatically, but the other two are missing.

You need to reindex them into new indices, so basically you need to change the _index metadata. It's not an update by query that you need to run, but a reindex.
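
A minimal sketch of that, assuming the target index name simply carries the new year (the documents already have the updated @timestamp from the update by query, so no script is needed):

# Copy the documents into an index named for the new year
POST _reindex
{
  "source": {
    "index": "filebeat-6.0.0-2018.12.17"
  },
  "dest": {
    "index": "filebeat-6.0.0-2019.12.17"
  }
}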