Restore data from 7.12 to 7.10.2

Hi

I am using Graylog with Elasticsearch installed on the same server. Unfortunately I ran an apt upgrade that updated Elasticsearch to 7.12 before I realised that Graylog is not yet compatible with that version. The recommendation was to install Elasticsearch on a new server, which was fine as I needed to migrate from a VM to a physical server anyway.

I now have Elasticsearch 7.10.2 installed on the physical server, and I've used Elasticdump to export all the indices to disk and copied the files to the new server. This is where I've run into two issues.
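For reference, the export was done per index with commands roughly along these lines (the host and index names here are just placeholders, not the real ones):

elasticdump --input=http://old-vm:9200/graylog_0 --output=/backup/graylog_0.settings.json --type=settings
elasticdump --input=http://old-vm:9200/graylog_0 --output=/backup/graylog_0.mapping.json --type=mapping
elasticdump --input=http://old-vm:9200/graylog_0 --output=/backup/graylog_0.json --type=data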

The first issue is with date formats. Below is the error:

Wed, 05 May 2021 10:29:41 GMT | Error Emitted => {
  "root_cause": [
    {
      "type": "illegal_argument_exception",
      "reason": "Mapper for [gl2_receive_timestamp] conflicts with existing mapper:\n\tCannot update parameter [format] from [yyyy-MM-dd HH:mm:ss.SSS] to [uuuu-MM-dd HH:mm:ss.SSS]"
    }
  ],
  "type": "illegal_argument_exception",
  "reason": "Mapper for [gl2_receive_timestamp] conflicts with existing mapper:\n\tCannot update parameter [format] from [yyyy-MM-dd HH:mm:ss.SSS] to [uuuu-MM-dd HH:mm:ss.SSS]"
}

I worked around this by replacing yyyy with uuuu in all the files. (Please correct me if this was the wrong thing to do).
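Something like this should do that replacement in one go across the dumped mapping files (the path and filename glob are just placeholders for wherever your dump files sit):

sed -i 's/yyyy-MM-dd HH:mm:ss.SSS/uuuu-MM-dd HH:mm:ss.SSS/g' /backup/*.mapping.json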

This brings me on to the next issue, which I cannot work out. Importing the data produces the following error and nothing gets imported:

{ _index: 'integration__13',
  _type: '_doc',
  _id: '2f5229b1-4de7-11eb-a4b2-00155d1e2507',
  status: 403,
  error:
   { type: 'cluster_block_exception',
     reason:
      'index [integration__13] blocked by: [FORBIDDEN/8/index write (api)];' } }
{ _index: 'integration__13',
  _type: '_doc',
  _id: '173474f1-4de7-11eb-a4b2-00155d1e2507',
  status: 403,
  error:
   { type: 'cluster_block_exception',
     reason:
      'index [integration__13] blocked by: [FORBIDDEN/8/index write (api)];' } }

I've tried searching, and all I can find is that this error relates to disk space, but I can't see how that could be the issue here:

curl -XGET "http://localhost:9200/_cat/allocation?v&pretty"
shards disk.indices disk.used disk.avail disk.total disk.percent host          ip            node
     0           0b   286.5gb        5tb      5.3tb            5 <redacted> <redacted> <redacted>
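In case it's useful, the specific block named in the error can also be checked directly in the index settings (looking for index.blocks.* in the output):

curl -XGET "http://localhost:9200/integration__13/_settings?pretty"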

I have been clearing the indices with this command before retrying the imports:

curl -X DELETE 'http://localhost:9200/_all' && curl -X GET 'http://localhost:9200/_refresh' && curl -X GET 'http://localhost:9200/_cat/indices?v'

I have also set:

-Xms9g
-Xmx9g

As the server has 32GB of RAM.
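For reference, on a package install those heap options live in a file under /etc/elasticsearch/jvm.options.d/, e.g. a file like /etc/elasticsearch/jvm.options.d/heap.options containing just:

-Xms9g
-Xmx9g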

I have also tried:

  • Unfreezing the index:
curl -X POST "localhost:9200/integration__13/_unfreeze?pretty"
  • Changing the disk watermarks:
curl -X PUT "http://127.0.0.1:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d' { "transient": { "cluster.routing.allocation.disk.watermark.low": "50gb", "cluster.routing.allocation.disk.watermark.high": "20gb", "cluster.routing.allocation.disk.watermark.flood_stage": "10gb", "cluster.info.update.interval": "1m"}}'
  • Clearing the read-only-allow-delete block and force merging:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
curl -X POST "localhost:9200/integration__13/_forcemerge?pretty"

I've probably tried a few other things too, but nothing has worked. Could anyone please help so I can get the data restored? Thanks.

Well, sod's law. After posting all of that, I looked in the settings.json exported by elasticdump and found:

                "blocks": {
                    "write": "true",

Changed it to false and now all the data gets imported.
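For anyone hitting the same thing who would rather not edit the dump files: I believe the same write block can be cleared on the index after it has been created, via the settings API (index name here is just the example from above):

curl -XPUT -H 'Content-Type: application/json' "http://localhost:9200/integration__13/_settings" -d '{"index.blocks.write": false}'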
