10.5: Use multiple Time Machine disks for redundancy
Authored by: palahala on Jun 04, '09 01:22:55PM

Some more notes:

Apparently, Time Machine stores the last known event ID on the backup itself, in an extended attribute com.apple.backupd.SnapshotVolumeLastFSEventID.
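
You can even check this attribute yourself with the xattr tool. A minimal sketch, assuming the backup disk is mounted at /Volumes/Backup and the machine is called MyMac (placeholder names; the exact folder carrying the attribute may also differ on your system):

# list the com.apple.backupd.* attributes on the latest snapshot's volume folder
xattr "/Volumes/Backup/Backups.backupdb/MyMac/Latest/Macintosh HD"

# print the stored last-known FSEvents ID
xattr -p com.apple.backupd.SnapshotVolumeLastFSEventID "/Volumes/Backup/Backups.backupdb/MyMac/Latest/Macintosh HD"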

So: I doubt the suggested corruption could occur.

And a nice article for developers: Monitoring File Changes with the File System Events API, which explains how a lastEventId can be used to find all files that have changed since then.



10.5: Use multiple Time Machine disks for redundancy
Authored by: SOX on Jun 04, '09 04:43:42PM

You could answer this in part by doing this:

sudo bzgrep -i backupd /private/var/log/system.log.*.bz2 | grep travers

According to the article you linked to, if you don't get a "Node requires traversal" message every time you swap drives, then it may not be detecting the out-of-sync condition.

As a further check, you could also try:

sudo bzgrep -i backupd /private/var/log/system.log.*.bz2 | grep from

and see whether a huge number of megabytes is copied every time you swap drives.



10.5: Use multiple Time Machine disks for redundancy
Authored by: palahala on Jun 05, '09 01:57:28AM
if you don't get a "Node requires traversal" message every time you swap drives then it may not be detecting the out-of-sync condition

My point is: Time Machine gets the com.apple.backupd.SnapshotVolumeLastFSEventID attribute from the backup disk. After swapping disks, this event ID will be lower than the event ID used by the previous backup; that is exactly the same whether you use two disks as in your hint or cloned disks. Next, TM can simply ask fseventsd for the changes since that (lower) last known event ID.

There is no "out-of-sync condition" (other than the fseventsd database having been recreated for unrelated reasons, which requires a deep traversal for both backup disks once they are plugged in at some later time). One should NOT expect any "Node requires deep traversal" message when swapping disks, neither when using your hint nor when using cloned disks.
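
If you want to check this in your own logs, the same kind of grep as your commands works; a sketch (the exact message text may vary per OS release, so grep loosely):

sudo bzgrep -i backupd /private/var/log/system.log.*.bz2 | grep -i "deep traversal"
sudo grep -i backupd /private/var/log/system.log | grep -i "deep traversal"

If those come back empty for the days the disks were swapped, no deep traversal was triggered.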

(And yes, like I wrote: there is a huge number of megabytes copied every time I swap drives, easily noted in the logs with Time Machine Buddy, or by looking at the files that have been copied using TimeTracker.)
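
Without Time Machine Buddy, a plain grep of the current log shows the same backupd lines; a sketch, assuming the copy summaries still contain "from" as in your command above:

sudo grep -i backupd /private/var/log/system.log | grep from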


10.5: Use multiple Time Machine disks for redundancy
Authored by: SOX on Jun 05, '09 08:51:07AM

Perhaps I'm mistaken, but my understanding is that in order to recover from a condition where the fsevents log has lost track of what needs to be updated (which will be the case here), a deep traversal is required to compare the current state of the main disk against the last backup on the old disk. That's in fact what the page you linked to says.

Thus, when you swap in a new disk, two things have to happen: first, something has to trigger detection of the incomplete fsevents log, and second, a deep traversal has to occur.

When you manually repoint Time Machine to a new disk, it knows for sure that it has to recatalog the disk. But when you simply swap disks that masquerade as each other using the cookie trick, you must rely on some secondary check to trigger the detection that the fsevents log is out of sync with the disk. You are speculating that it can do this by looking at the last-update event's UUID and seeing whether that is still in the fsevents log somewhere. It's possible this is true; I don't think either of us knows.

So what I was asking you to test on your system was: if this is true, then you should be seeing a node-traversal-required message, or at least some other message about the detection of this condition.

I'd also be curious to know what you think the purpose of the cookie is and what the negative consequences of removing it are.



10.5: Use multiple Time Machine disks for redundancy
Authored by: palahala on Jun 05, '09 09:56:44AM
a condition where the fsevents log has lost track of what needs to be updated (which will be the case here)

No, that is not the case here... Time Machine knows very well what data is on each backup disk. It then asks fseventsd (or some related API) what has changed since then.

When you manually repoint Time Machine to a new disk, it knows for sure that it has to recatalog the disk.

If you're saying that you see the "Node requires deep traversal" message each time you manually assign another disk, then something is wrong on your Mac.

You are speculating that it can do this by looking at the last-update event's UUID and seeing whether that is still in the fsevents log somewhere. It's possible this is true; I don't think either of us knows.

That's not speculation; that's exactly what is described in each article I mentioned earlier (though it's not the log's UUID but the FSEventsID counter). Time Machine is not keeping track of any changes itself. It doesn't have to, as long as it knows the last ID it used when writing to a given backup.
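
A simple way to see this after a swap is to print the stored ID on both disks and compare. A sketch with placeholder names (Backup1 and Backup2 for the volumes, MyMac for the machine; the folder carrying the attribute may differ on your system):

xattr -p com.apple.backupd.SnapshotVolumeLastFSEventID "/Volumes/Backup1/Backups.backupdb/MyMac/Latest/Macintosh HD"
xattr -p com.apple.backupd.SnapshotVolumeLastFSEventID "/Volumes/Backup2/Backups.backupdb/MyMac/Latest/Macintosh HD"

The disk that was just written to should show the higher (newer) ID; the disk that sat on the shelf shows an older, lower one, and that lower ID is all Time Machine needs to ask for the changes since then.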

So, I don't see anything confirming your claim that this would "actually slowly corrupt your backups".


