Update timestamp in all documents in an index

We had a backlog of messages in Kafka for Filebeat, coming from syslog in the following format, all from 2018:

Dec 31 23:59:59 phc1803-hdfs-03 consul: 2018/12/31 23:59:59 [WARN] agent: Check 'service:timeseries-hdfs-zookeeper-leader' is now critical

We noticed the issue in 2019, and when these documents were processed the timestamp was set to "@timestamp": "2019-12-31T23:59:59.000Z", because the grok filter only takes in MMM dd and fills in the year based on when the line is parsed. Since we noticed the issue in 2019, the year was set incorrectly, and we need to revert it to 2018 for all the docs.
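To illustrate just the date math involved (not the Elasticsearch side), the fix amounts to parsing the ISO-8601 timestamp and rolling the year back by one; a minimal Python sketch:

```python
from datetime import datetime

# Example of a bad timestamp: grok assigned the year it ran (2019)
# to a December 2018 syslog line.
bad = "2019-12-31T23:59:59.000Z"

# Parse the ISO-8601 string and roll the year back by one.
parsed = datetime.strptime(bad, "%Y-%m-%dT%H:%M:%S.%fZ")
fixed = parsed.replace(year=parsed.year - 1)

print(fixed.strftime("%Y-%m-%dT%H:%M:%S.000Z"))  # 2018-12-31T23:59:59.000Z
```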

I noticed there is an update API and an update by query call I can use to modify the dates in all the docs across the index to change 2019 to 2018, but I'm not seeing clear examples of this. Is this doable, and if so, can you provide an example of how I can do this across all the docs in an index?
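For reference, the general shape of an `_update_by_query` call with a Painless script, roughly as shown in the 6.x reference docs (reproduced from memory, so treat it as a sketch; `twitter`, `likes`, and `kimchy` are the docs' placeholder names):

```
POST twitter/_update_by_query
{
  "script": {
    "source": "ctx._source.likes++",
    "lang": "painless"
  },
  "query": {
    "term": {
      "user": "kimchy"
    }
  }
}
```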

I am testing this:

POST filebeat-6.0.0-2018.12.17/_update
{
  "script": {
    "source": "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYear(1)",
    "lang": "painless"
  }
}

but am getting this:

{
  "error": {
    "root_cause": [
      {
        "type": "cluster_block_exception",
        "reason": "blocked by: [FORBIDDEN/8/index write (api)];"
      }
    ],
    "type": "cluster_block_exception",
    "reason": "blocked by: [FORBIDDEN/8/index write (api)];"
  },
  "status": 403
}
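As an aside, `FORBIDDEN/8/index write (api)` generally means a write block has been applied to the index (for example `index.blocks.write: true`, which housekeeping tools often set on old time-based indices). It can be checked with something like:

```
GET filebeat-6.0.0-2018.12.17/_settings
```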

I also gave this syntax a shot, but no go:

POST filebeat-6.0.0-2018.12.18/_update_by_query
{
  "query": {
    "match": {
      "field": "@timestamp"
    }
  },
  "script": {
    "source": "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYear(1)"
  }
}

I get this:

{
  "took": 2,
  "timed_out": false,
  "total": 0,
  "updated": 0,
  "deleted": 0,
  "batches": 0,
  "version_conflicts": 0,
  "noops": 0,
  "retries": {
    "bulk": 0,
    "search": 0
  },
  "throttled_millis": 0,
  "requests_per_second": -1,
  "throttled_until_millis": 0,
  "failures": []
}

I don't think your query is matching any document. Just run the query as a normal _search.

Hey David,

Thanks, I assume you mean this:

POST filebeat-6.0.0-2018.12.17/_update_by_query
"script": {
"source": "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)"
}
}

which resulted in this, with a few failures; not all of them are listed below, to keep things short:

{
  "took": 33,
  "timed_out": false,
  "total": 82199,
  "updated": 0,
  "deleted": 0,
  "batches": 1,
  "version_conflicts": 0,
  "noops": 0,
  "retries": {
    "bulk": 0,
    "search": 0
  },
  "throttled_millis": 0,
  "requests_per_second": -1,
  "throttled_until_millis": 0,
  "failures": [
    {
      "index": "filebeat-6.0.0-2018.12.17",
      "type": "doc",
      "id": "6BO6vWcB7w5aSTPpODTn",
      "cause": {
        "type": "cluster_block_exception",
        "reason": "blocked by: [FORBIDDEN/8/index write (api)];"
      },
      "status": 403
    },
    {
      "index": "filebeat-6.0.0-2018.12.17",
      "type": "doc",
      "id": "GXW5vWcBbObIoP1k_A1g",
      "cause": {
        "type": "cluster_block_exception",
        "reason": "blocked by: [FORBIDDEN/8/index write (api)];"
      },
      "status": 403
    },

Sorry, I had the POST command in the wrong format in my last paste.

command:

POST filebeat-6.0.0-2018.12.17/_update_by_query
{
  "script": {
    "source": "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)"
  }
}

result:
{
  "error": {
    "root_cause": [
      {
        "type": "script_exception",
        "reason": "runtime error",
        "script_stack": [
          "java.util.Objects.requireNonNull(Objects.java:228)",
          "java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1848)",
          "java.time.OffsetDateTime.parse(OffsetDateTime.java:402)",
          "java.time.OffsetDateTime.parse(OffsetDateTime.java:387)",
          "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)",
          "                                                        ^---- HERE"
        ],
        "script": "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)",
        "lang": "painless"
      }
    ],
    "type": "script_exception",
    "reason": "runtime error",
    "script_stack": [
      "java.util.Objects.requireNonNull(Objects.java:228)",
      "java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1848)",
      "java.time.OffsetDateTime.parse(OffsetDateTime.java:402)",
      "java.time.OffsetDateTime.parse(OffsetDateTime.java:387)",
      "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)",
      "                                                        ^---- HERE"
    ],
    "script": "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)",
    "lang": "painless",
    "caused_by": {
      "type": "null_pointer_exception",
      "reason": "text"
    }
  },
  "status": 500
}

No, I meant:

GET filebeat-6.0.0-2018.12.18/_search
{
  "query": {
    "match": {
      "field": "@timestamp"
    }
  }
}

Hey David,

I ran what you gave me:
GET filebeat-6.0.0-2018.12.18/_search
{
  "query": {
    "match": {
      "field": "@timestamp"
    }
  }
}

and I got this response back:

{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 0,
    "max_score": null,
    "hits": []
  }
}

Can you please indicate why we are not getting any hits back for the timestamp field?

Because no documents match your request.

If you take a look at one of the docs in that index with a particular ID, you will see @timestamp. If you can point me in the correct direction, that would be great; I have done everything possible to research this and can't find much online about modifying the timestamp correctly.

{
  "_index": "filebeat-6.0.0-2018.12.18",
  "_type": "doc",
  "_id": "5uwmv2cBNopQFAimAb_2",
  "_version": 1,
  "found": true,
  "_source": {
    "input": {
      "type": "log"
    },
    "message": "Dec 18 02:29:56 takenout salt-minion: [INFO ] Running scheduled job: __mine_interval",
    "fileset": {
      "module": "system",
      "name": "syslog"
    },
    "host": {
      "name": "TAKENout"
    },
    "source": "/var/log/messages",
    "index_prefix": "filebeat-6.0.0",
    "@timestamp": "2018-12-18T02:29:56.000Z",
    "offset": 130785,
    "system": {
      "syslog": {
        "message": "[INFO ] Running scheduled job: __mine_interval",
        "hostname": "takenout",
        "program": "salt-minion",
        "timestamp": "Dec 18 02:29:56"
      }
    },
    "@version": "1",
    "prospector": {
      "type": "log"
    },
    "beat": {
      "hostname": "takenoutfor",
      "name": "takenout",
      "version": "6.3.0"
    }
  }
}

The match query here checks whether a field named field contains the value @timestamp. This is obviously not the case.

I don't know what your intention with this query is.
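A query that would actually match documents that have the field is `exists`; a minimal sketch, using the index name from this thread:

```
GET filebeat-6.0.0-2018.12.18/_search
{
  "query": {
    "exists": {
      "field": "@timestamp"
    }
  }
}
```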

BTW, could you please format your code, logs, or configuration files using the </> icon as explained in this guide, and not the citation button? It will make your post more readable.

Or use markdown style like:

```
CODE
```

This is the icon to use if you are not using markdown format:

There's a live preview panel for exactly this reason.

Lots of people read these forums, and many of them will simply skip over a post that is difficult to read, because it's just too large an investment of their time to try and follow a wall of badly formatted text.
If your goal is to get an answer to your questions, it's in your interest to make it as easy to read and understand as possible.
Please update your post.

Is this a proper command to update all timestamps in all docs in an index, or will the below not work, meaning I have to use update by query? The reason I ask is because I get the error below when I run the update command with the following syntax.

POST filebeat-6.0.0-2018.12.17/_update
{
  "script": {
    "source": "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYear(1)",
    "lang": "painless"
  }
}

ERROR IS:

{
  "error": {
    "root_cause": [
      {
        "type": "cluster_block_exception",
        "reason": "blocked by: [FORBIDDEN/8/index write (api)];"
      }
    ],
    "type": "cluster_block_exception",
    "reason": "blocked by: [FORBIDDEN/8/index write (api)];"
  },
  "status": 403
}

Could you tell us what the output of the following is:

GET /_cat/plugins?v
GET /
name    component    version
2sxScBp ingest-geoip 6.3.0
ShECWbX ingest-geoip 6.3.0
SvZv1kU ingest-geoip 6.3.0
{
  "name": "2sxScBp",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "9ua9HU9SRva7p6VqHiBfpw",
  "version": {
    "number": "6.3.0",
    "build_flavor": "default",
    "build_type": "rpm",
    "build_hash": "424e937",
    "build_date": "2018-06-11T23:38:03.357887Z",
    "build_snapshot": false,
    "lucene_version": "7.3.1",
    "minimum_wire_compatibility_version": "5.6.0",
    "minimum_index_compatibility_version": "5.0.0"
  },
  "tagline": "You Know, for Search"
}

And:

GET /filebeat-6.0.0-2018.12.17/_settings
{
  "filebeat-6.0.0-2018.12.17": {
    "settings": {
      "index": {
        "routing": {
          "allocation": {
            "require": {
              "_name": "lxc-elastic-01",
              "_ip": "172.16.99.212"
            }
          }
        },
        "mapping": {
          "total_fields": {
            "limit": "10000"
          }
        },
        "refresh_interval": "5s",
        "number_of_shards": "5",
        "blocks": {
          "write": "true"
        },
        "provided_name": "filebeat-6.0.0-2018.12.17",
        "creation_date": "1545076325812",
        "number_of_replicas": "1",
        "uuid": "ZB2RB6VqRqavxNYkp8uZjw",
        "version": {
          "created": "6030099"
        }
      }
    }
  }
}

Your index is blocked for writing. You need to change that setting.

I changed it to false by doing this:

PUT filebeat-6.0.0-2018.12.17/_settings
{
  "index": {
    "blocks.write": false
  }
}

Then I got this when I ran the _update command again:

{
  "error": {
    "root_cause": [
      {
        "type": "invalid_type_name_exception",
        "reason": "Document mapping type name can't start with '_', found: [_update]"
      }
    ],
    "type": "invalid_type_name_exception",
    "reason": "Document mapping type name can't start with '_', found: [_update]"
  },
  "status": 400
}

Looks like _update might be a single-document API, and for multi-document updates you have to use _update_by_query. Is this correct, according to this doc?
https://www.elastic.co/guide/en/elasticsearch/reference/6.3/docs.html

If so, I ran this now:

POST filebeat-6.0.0-2018.12.17/_update_by_query
{
  "query": {
    "match_all": {}
  },
  "script": {
    "source": "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)",
    "lang": "painless"
  }
}

and got this:

{
  "error": {
    "root_cause": [
      {
        "type": "script_exception",
        "reason": "runtime error",
        "script_stack": [
          "java.util.Objects.requireNonNull(Objects.java:228)",
          "java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1848)",
          "java.time.OffsetDateTime.parse(OffsetDateTime.java:402)",
          "java.time.OffsetDateTime.parse(OffsetDateTime.java:387)",
          "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)",
          "                                                        ^---- HERE"
        ],
        "script": "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)",
        "lang": "painless"
      }
    ],
    "type": "script_exception",
    "reason": "runtime error",
    "script_stack": [
      "java.util.Objects.requireNonNull(Objects.java:228)",
      "java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1848)",
      "java.time.OffsetDateTime.parse(OffsetDateTime.java:402)",
      "java.time.OffsetDateTime.parse(OffsetDateTime.java:387)",
      "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)",
      "                                                        ^---- HERE"
    ],
    "script": "ctx._source.timestamp = OffsetDateTime.parse(ctx._source.timestamp).plusYears(1)",
    "lang": "painless",
    "caused_by": {
      "type": "null_pointer_exception",
      "reason": "text"
    }
  },
  "status": 500
}

Looks like I'm back to square one, where the update by query script is hitting a null, but when I do a _search query on that index I get hits:

{
  "took": 3,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 82199,
    "max_score": 1,
    "hits": [
      {
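For what it's worth, the null_pointer_exception above is consistent with `ctx._source.timestamp` being null: judging by the sample document earlier in the thread, the top-level field is `@timestamp` (the only field literally named `timestamp` is nested under `system.syslog`, and it isn't ISO-8601). Since `@timestamp` is not a valid Painless identifier, it needs bracket access. A sketch of what the command might look like (untested here; `minusYears(1)` matches the stated goal of turning 2019 into 2018, so flip it to `plusYears(1)` if you need the other direction, and note that `OffsetDateTime.toString()` may drop the `.000` milliseconds from the stored value):

```
POST filebeat-6.0.0-2018.12.17/_update_by_query?conflicts=proceed
{
  "query": {
    "match_all": {}
  },
  "script": {
    "lang": "painless",
    "source": "ctx._source['@timestamp'] = OffsetDateTime.parse(ctx._source['@timestamp']).minusYears(1).toString()"
  }
}
```

If the write block on the index was set deliberately (e.g. by a housekeeping job), remember to set `index.blocks.write` back to `true` once the update is done.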