Hi Everyone,
I've also posted this same question on the Graylog forum; I'm just hedging my bets by posting here as well, since my issue seems to be more of an Elasticsearch problem than a Graylog one.
I have a new cluster and an old cluster, and I'm using the reindex API to bring old documents into a new index on our new cluster. This works, and I can see 20M+ documents under management in the index once the reindex command finishes:
{"took":6156976,"timed_out":false,"total":20000085,"updated":0,"created":20000085,"deleted":0,"batches":20001,"version_conflicts":0,"noops":0,"retries":{"bulk":0,"search":0},"throttled_millis":0,"requests_per_second":-1.0,"throttled_until_millis":0,"failures":[]}
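For reference, the reindex call I'm running looks roughly like this (the host and index names below are placeholders, not my real values):

```json
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://old-es-host:9200"
    },
    "index": "graylog_0"
  },
  "dest": {
    "index": "graylog_restored_0"
  }
}
```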
However, I can only see those messages if I look up the gl2_source_input ID and search for it explicitly:
gl2_source_input : 5a58389f21394d0e92c8f4fd
The messages also report that they were "Received by deleted input on stopped node", which I understand, because that is exactly what is happening. However, I thought a reindex updated this metadata as the documents were imported.
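If that metadata isn't rewritten automatically, would attaching a painless script to the reindex be the right way to fix it up in flight? A sketch of what I mean (the input and node IDs are placeholders for values from the new cluster, and the host and index names are examples):

```json
POST _reindex
{
  "source": {
    "remote": { "host": "http://old-es-host:9200" },
    "index": "graylog_0"
  },
  "dest": { "index": "graylog_restored_0" },
  "script": {
    "lang": "painless",
    "source": "ctx._source.gl2_source_input = params.input; ctx._source.gl2_source_node = params.node",
    "params": {
      "input": "NEW_INPUT_ID",
      "node": "NEW_NODE_ID"
    }
  }
}
```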
How do I get this data to be searchable via the usual front-end search, without resorting to a search on gl2_source_input?
Is the re-index API the best way to migrate this old data to a new cluster?
I've seen various posts about doing a snapshot and restore, but it's a bit beyond me at the moment and I'm struggling to get my head around it all. Any advice would be very much appreciated.
New setup:
Running 2 Graylog nodes, clustered MongoDB (3 hosts), and clustered Elasticsearch (3 hosts)
Old setup:
Running 2 Graylog nodes, MongoDB on the Graylog primary, and clustered Elasticsearch (3 hosts)
I can still reach the old setup, and I have enabled the remote reindex whitelist in order to use the API in the first place.
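For completeness, this is roughly what I added to elasticsearch.yml on each node of the new Elasticsearch cluster (the host is a placeholder for my old cluster's address):

```yaml
# elasticsearch.yml on every node of the NEW cluster
reindex.remote.whitelist: "old-es-host:9200"
```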
Thank you.
Archie.