Maintaining consistency across denormalized data

Hi everyone,

I'm introducing Elasticsearch to a platform that's already backed by a large database with many tables. I only need search for a small but important facet of the application, so I'll only be adding a couple of indices to speed up a few parts of it. One is built from a basic user table and drives geolocation sorting, which works great. The other is a very large index that is the bread and butter of the application. This index is built from many SQL tables (some very static, others not so much), and I'm starting to fear the worst when it comes to maintaining it.
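For concreteness, here's roughly the shape of what I'm doing. This is a minimal sketch using the Python client (elasticsearch-py 8.x); the index, table, and field names are all made-up stand-ins for my real schema:

```python
from elasticsearch import Elasticsearch  # pip install elasticsearch (8.x client)

es = Elasticsearch("http://localhost:9200")

# Hypothetical rows pulled from three SQL tables; the real schema is much bigger.
order_row = {"id": "order-1", "placed_at": "2024-03-01T12:00:00Z"}
merchant_row = {"id": 42, "name": "Acme Diner", "geo": {"lat": 40.7, "lon": -74.0}}
item_rows = [
    {"sku": "ABC-123", "name": "Burger", "price": 9.99},
    {"sku": "DEF-456", "name": "Fries", "price": 3.49},
]

def build_search_doc(order, merchant, items):
    """Flatten rows from several SQL tables into one denormalized search doc."""
    return {
        "order_id": order["id"],
        "placed_at": order["placed_at"],
        # Denormalized copies of merchant fields -- these go stale if the
        # merchant row changes in SQL, which is the consistency problem.
        "merchant_id": merchant["id"],
        "merchant_name": merchant["name"],
        "merchant_location": merchant["geo"],
        "items": [
            {"sku": i["sku"], "name": i["name"], "price": i["price"]}
            for i in items
        ],
    }

doc = build_search_doc(order_row, merchant_row, item_rows)
es.index(index="orders", id=doc["order_id"], document=doc)
```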

I know there are techniques such as parent-child mapping, which I can see working in some situations but not others. As a generic example, I wouldn't say a McDonald's is a child of my car just because my car is currently parked there. But I could say that a purchased item in an online order is a child of that order. Maybe this is all just semantics and I'm overthinking it. Is parent-child mapping the best course of action, or should I just make sure I update every place that references these "child" objects?
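To make the parent-child option concrete, here's what I understand the mapping to look like with the join field type (ES 6+; older versions used the _parent mapping instead). Again, all names here are made up:

```python
# Create an index with a join field declaring an order -> item relation.
es.indices.create(
    index="orders-pc",
    mappings={
        "properties": {
            "order_join": {
                "type": "join",
                "relations": {"order": "item"},  # "order" is the parent of "item"
            }
        }
    },
)

# Index the parent document.
es.index(
    index="orders-pc",
    id="order-1",
    document={"order_join": "order", "placed_at": "2024-03-01T12:00:00Z"},
)

# Index a child document. Routing must be the parent's id so both land on
# the same shard, which is what makes has_child/has_parent queries work.
es.index(
    index="orders-pc",
    id="item-7",
    routing="order-1",
    document={
        "order_join": {"name": "item", "parent": "order-1"},
        "sku": "ABC-123",
        "price": 9.99,
    },
)
```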

Anyone have a preferred method of maintaining consistent data across denormalized documents?
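The alternative I keep coming back to is staying fully denormalized and fanning out updates whenever a shared row changes, along the lines of update_by_query with a script (again just a sketch, with made-up field names):

```python
# When the merchant row changes in SQL, patch every denormalized copy of it.
es.update_by_query(
    index="orders",
    query={"term": {"merchant_id": 42}},
    script={
        "source": "ctx._source.merchant_name = params.name",
        "lang": "painless",
        "params": {"name": "Acme Diner & Grill"},
    },
    conflicts="proceed",  # keep going past concurrent version conflicts
)
```

That works, but it means every writer that touches the merchant table has to know about every index that copied its fields, which is exactly what I'm nervous about maintaining.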

TL;DR: I need to maintain consistency across large denormalized documents that power search over core data. Parent-child mapping doesn't seem like the right path because some of these nested objects aren't semantically parents and children.