Upgrade issue with path.data

I've been given the go-ahead to get some training, which is great, but first I need to get the cluster up to the version the training is all for.

I had some issues when I tried a test upgrade before, but since then I've refreshed the hardware in the cluster and our config is a little different now. Previously we had a RAID 0 array and everything lived in the default paths. For the rebuild we got SSDs, so the OS is now on a RAID 1 array with a partition for logging on there too, and the data lives on two SSDs mounted as /data1 and /data2.

When I run the pre-upgrade checks on the cluster now, I get:

Default path settings are removed
This issue must be resolved to upgrade. Read Documentation
Details: nodes with settings: [node3.mydomain.local]

It doesn't flag issues with node 1 or node 2, only node 3. They were all built from the same scripted install two weeks ago, so they all have the same configs, and if I open elasticsearch.yml on each I can see:

# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: [/data1/elasticsearch, /data2/elasticsearch]
#
# Path to log files:
#
path.logs: /logs/elasticsearch

From what I read here in the referenced documentation

https://www.elastic.co/guide/en/elasticsearch/reference/6.0/breaking_60_packaging_changes.html#_default_path_settings_are_removed

That would suggest all I need to do is specify path.data and path.logs, which I have, so everything should be set as it should be. Yet the health check says I need to fix something on just that one node before I can continue.
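To double-check what that node is actually running with (in case something other than elasticsearch.yml is feeding it a path), I can pull the live settings from the nodes info API; this is roughly the call I'd use, assuming the default localhost:9200 and no auth:

# list the path settings each node has actually picked up
curl -s 'http://localhost:9200/_nodes/settings?pretty&filter_path=nodes.*.name,nodes.*.settings.path*'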

Any insights appreciated!

I've tried changing the notation to:

path:
  data:
    - /data1/elasticsearch
    - /data2/elasticsearch

with no joy
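The only other notation I'm aware of is the comma-separated single-string form that the comment in the file itself hints at; I haven't retested with it yet, but for completeness it would look like:

# same two data paths as a single comma-separated string
path.data: /data1/elasticsearch,/data2/elasticsearch
path.logs: /logs/elasticsearch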

I tried turning off the node that reports the error and re-running the checks, but it then reported the issue on a different node, which on checking was the master.

I'm leaning towards just ignoring the warning and proceeding, but if anyone (from Elastic support or otherwise) can clear this up for me, I'd be grateful!

Should anyone else have the same issue and stumble on this: I've run the upgrade process in a test environment, and while I had to delete the Kibana index to get it to run, Elasticsearch did upgrade. If I didn't upgrade the security index before I upgraded Elasticsearch, I got locked out of the cluster, but I left all the other indices until after the Elasticsearch upgrade.
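For reference, these are roughly the two calls involved (the index name and the 5.6 migration upgrade endpoint are from memory, so verify against your own cluster first, and export any Kibana saved objects you care about before deleting anything):

# upgrade the security index before upgrading Elasticsearch (add -u user:password if security is enabled)
curl -s -XPOST 'http://localhost:9200/_xpack/migration/upgrade/.security?pretty'

# the Kibana index I ended up deleting so the upgrade would run
curl -s -XDELETE 'http://localhost:9200/.kibana?pretty'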

I don't feel that having to delete the Kibana index is related to multiple data paths, and I suspect the reason the check flags an issue is because it looks for a notation format that assumes a single data path rather than several. It may also be that the upgrade checker in later versions of 5.6.x (5.6.7 or higher) addresses this, although for the most part I suspect people who are going to upgrade already have by now.
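For what it's worth, if anyone wants to see exactly what the checker is keying on, the same deprecation checks can apparently be pulled straight from the API rather than from the Kibana upgrade assistant; in 5.6 I believe the endpoint is:

# deprecation info API, the same data the upgrade assistant reports on
curl -s 'http://localhost:9200/_xpack/migration/deprecations?pretty'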
