Because Duplicati uses a consistent hashing and deduplication algorithm, I should be able to have a single backup set on my target machine (an SSH/SFTP target) accessed by two backup source machines that each back up the exact same files to the target. On the first backup, all the content goes into the target from machine A; the second machine then really only checks that all the required blocks are already at the target, sending only the hashes for each block and saving a lot of time and bandwidth. If that is how it works, it’s a great help, as I’m looking at an enormous amount of data and the true source PC (B in the above) is 500 miles away…

Note that while this is similar to the Topic below, that one doesn’t work due to cross-OS differences (which are not a factor here).

I can restore the files from CrashPlan to any machine and back them up using Duplicati. The only issues I can think of would be metadata related (different user accounts on the various machines), and I’m hoping that after the initial “restore based” backup the next “live computer” backup would reset the metadata to the correct content. If this metadata change were necessary, would it cause the equivalent of a full backup, or just the blocks associated with the metadata?

Assuming all of the above actually works, if I wanted to really beat myself up I could (in theory), going from oldest to newest, restore a CrashPlan version, do a Duplicati backup, restore the next CrashPlan version, do a Duplicati backup, and so on, until I had actually migrated my CrashPlan HISTORY to Duplicati in the historically correct order, though with artificial historical dates.
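To make sure I’m describing the deduplication behaviour I’m counting on correctly, here is a rough Python sketch of what I assume happens at the block level. The block size and hash algorithm are just illustrative assumptions, not Duplicati’s actual internals:

```python
import hashlib

# Illustrative values only; not necessarily Duplicati's real defaults.
BLOCK_SIZE = 100 * 1024


def block_hashes(path):
    """Yield (hash, block) for each fixed-size block of a file."""
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            yield hashlib.sha256(block).digest(), block


def backup_file(path, remote_hashes, upload_block):
    """Upload only the blocks whose hashes the target does not already have."""
    uploaded = skipped = 0
    for digest, block in block_hashes(path):
        if digest in remote_hashes:
            # Block data is already at the target; only the hash is needed.
            skipped += 1
        else:
            # New block: send the data and remember its hash.
            upload_block(digest, block)
            remote_hashes.add(digest)
            uploaded += 1
    return uploaded, skipped
```

If that picture is right, then when machine B backs up the same files machine A already uploaded, essentially every block falls into the “skipped” branch and only hashes cross the wire.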
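And in case it helps to see the “beat myself up” idea spelled out, this is roughly the loop I have in mind. The two helper functions are just placeholders for however I would actually drive the restores and backups (GUI, CLI, or scripts); they are not real commands in either product:

```python
def restore_crashplan_version(version_date, restore_dir):
    """Placeholder: restore the CrashPlan snapshot from version_date into restore_dir."""
    raise NotImplementedError("do this with the CrashPlan app")


def run_duplicati_backup(source_dir):
    """Placeholder: run the existing Duplicati backup job against source_dir."""
    raise NotImplementedError("do this with the Duplicati job")


def migrate_history(version_dates, restore_dir):
    # Oldest first, so the versions land in Duplicati in historical order.
    # Each Duplicati version will carry the date the backup was made,
    # not the original CrashPlan date.
    for version_date in sorted(version_dates):
        restore_crashplan_version(version_date, restore_dir)
        run_duplicati_backup(restore_dir)
```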