Remember RAM Doubler from Connectix back in the '90s? It combined compression with virtual memory. A friend of mine worked on it back then, and one of the principles it was built on was the discovery that it was always faster to compress/decompress data -- in this case memory -- than to write it out to disk, by an order of magnitude. Microsoft bought Connectix in 2003, primarily to get Virtual PC. Connectix's patents on memory compression expired, and more recently Apple used that work to implement memory compression in OS X (Mavericks, I think?), because it was faster to compress memory than to swap it to disk.

I've actually found that the fastest way to move a large (40+ GB) file from a USB-2-connected disk to my laptop is to restore it from a QRecall archive to the target disk rather than do a straight copy. I assume this is because the archive is compressed and therefore takes fewer I/O operations to "read" the file than a Finder copy from the external disk does? (I would never have guessed I'd use QRecall as a "faster than Finder" file copier.)
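To put some rough numbers on that guess: over USB-2 the bus is the bottleneck, so reading half as many bytes should take roughly half the time, with decompression essentially free by comparison. A quick back-of-the-envelope sketch (the ~35 MB/s USB-2 throughput and the 2:1 compression ratio are just my assumptions, not measurements):

    #include <stdio.h>

    int main(void) {
        /* Assumed numbers -- guesses, not measurements */
        double file_gb       = 40.0;   /* size of the file being moved      */
        double usb2_mb_per_s = 35.0;   /* assumed USB-2 disk throughput     */
        double compression   = 2.0;    /* assumed archive compression ratio */

        double copy_sec    = file_gb * 1024.0 / usb2_mb_per_s;
        double restore_sec = file_gb * 1024.0 / compression / usb2_mb_per_s;

        printf("Finder copy (reads %.0f GB over USB-2):  ~%.0f min\n",
               file_gb, copy_sec / 60.0);
        printf("Restore (reads ~%.0f GB compressed):     ~%.0f min\n",
               file_gb / compression, restore_sec / 60.0);
        return 0;
    }

With those assumptions the restore pulls only about 20 GB over the slow bus instead of 40 GB, which would account for a roughly 2x speedup.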
|
Ah, so that helps explain why the first backup of the new disk wasn't as fast as an initial capture -- the need to read the earlier backups for de-duplication. It never appeared to be stuck, just varying in speed. Thanks for the explanation! BTW, do the existing quanta need to be decompressed for the comparison, or does the comparison operate on the compressed data? I guess I'm asking whether the de-duplication process is slower if the archive is compressed.
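I have no idea how QRecall actually does this, but for what it's worth, a generic de-duplicator can sidestep the question by indexing a fingerprint of each block's uncompressed content and comparing fingerprints, so the stored copies can stay compressed on disk. A toy sketch of that idea (FNV-1a standing in for whatever real fingerprint would be used; none of this is QRecall's code):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy FNV-1a checksum standing in for a real fingerprint. */
    static uint64_t fingerprint(const void *buf, size_t len) {
        const unsigned char *p = buf;
        uint64_t h = 0xcbf29ce484222325ULL;    /* FNV-1a offset basis */
        for (size_t i = 0; i < len; i++) {
            h ^= p[i];
            h *= 0x100000001b3ULL;             /* FNV-1a prime */
        }
        return h;
    }

    int main(void) {
        const char *stored = "a block already sitting in the archive";
        const char *fresh  = "a block already sitting in the archive";

        /* Only fingerprints are compared, so whether the stored block
           happens to be compressed on disk never enters into it. */
        uint64_t stored_fp = fingerprint(stored, strlen(stored));
        uint64_t fresh_fp  = fingerprint(fresh, strlen(fresh));

        if (fresh_fp == stored_fp)
            printf("duplicate candidate -- nothing new to write\n");
        else
            printf("new data -- compress it and add it to the archive\n");
        return 0;
    }

In a scheme like that, compression would mostly cost time only when genuinely new data has to be written.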
|
I replaced a terribly slow internal HD in my laptop with a new SSD of approximately the same size, capturing the "old" volume(s) one last time before the swap. Then I cloned the old drive to the new one using Carbon Copy Cloner. The difference between SSD and HD performance makes for a tremendous improvement in the computer's snappiness. I also renamed the new internal SSD volumes to avoid confusion with the removed hard drive (now living in an external enclosure). Then I went to capture the new volume with QRecall.

Since the newly installed and partitioned SSD has both a different internal ID and a different volume name, I expected QRecall to capture it as a new volume, but since its contents are almost completely identical to the replaced volume, I expected the capture to find 99% of the data already in the archive (it turned out to be 98.69%) and complete very rapidly. So I was surprised when it took over 4 hours to capture 167.7 GB when it actually only needed to write 1.53 GB.

Most surprising was the variance in the speed it reported: sometimes "1.63 GB per second", but sometimes only "7.28 *MB* per second" -- more than two orders of magnitude apart(!). The average rate was 687 MB/min. I'm curious why it sometimes dipped into single-digit MB/sec.

Relevant facts: the backup archive is 897 GB, on an external drive connected by USB-2. Shifted Quanta Detection is off, Capture Compression is set to maximum, and Data Redundancy is None. 1,278,260 items were captured. Doing the math, it looks like about 670 MB was saved by compression.
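For the record, here is the arithmetic behind those last figures, using only the numbers QRecall reported (treating 1 GB as 1000 MB, which is my assumption about its units):

    #include <stdio.h>

    int main(void) {
        /* Figures as reported by the capture (see above) */
        double total_gb     = 167.7;    /* data examined                */
        double dup_fraction = 0.9869;   /* found already in the archive */
        double written_gb   = 1.53;     /* actually added to the archive */
        double rate_mb_min  = 687.0;    /* reported average rate         */

        double unique_gb = total_gb * (1.0 - dup_fraction);      /* ~2.20 GB          */
        double saved_mb  = (unique_gb - written_gb) * 1000.0;    /* ~the 670 MB above */
        double minutes   = total_gb * 1000.0 / rate_mb_min;      /* ~244 min          */

        printf("unique (non-duplicate) data: %.2f GB\n", unique_gb);
        printf("saved by compression:        ~%.0f MB\n", saved_mb);
        printf("time at the average rate:    ~%.0f min (~%.1f hours)\n",
               minutes, minutes / 60.0);
        return 0;
    }

The average rate does account for the 4+ hour total; what it doesn't explain is the swings between GB/sec and single-digit MB/sec along the way.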
|
I usually give my internal boot volume partitions the name of the OS X version they contain. For example, I currently have two bootable partitions on my internal hard drive: "Lion" and "Snow Leopard". But I want to change the name "Lion" to reflect that I'm really running El Capitan now. I know I can do this in the Finder and nothing will break, since an internal ID is used internally rather than the displayed name. But if I do this, what will be the effect on the captured QRecall archives, both with regard to previous layers and to future captures? I thought I'd ask before potentially getting into trouble...
|
B19, OS X 10.11. I don't know if I'm misunderstanding QRecall's capabilities or if this is a bug...

Background: I have both a laptop and a desktop Mac running the same software and much of the same data, so I thought I could use a single archive and take advantage of QRecall's de-duplication to drastically reduce the amount of data I back up, as well as the need to keep separate backups, one for each computer. My QRecall archive "Backups.quanta" resides on an external USB drive which I move between computers. Each computer has an internal drive partitioned into two uniquely named volumes: the laptop has volumes "A" & "B", the iMac has "C" & "D". Both are running the same QRecall beta level.

I plug the external drive into the iMac, create a new archive, and capture "C" & "D". The captures complete with no errors, and viewing the archive shows disk icons for "C" & "D" as expected. I quit QRecall, unmount the external drive, and move it to the laptop. I open QRecall and open the existing archive; viewing it again shows the disk icons for "C" & "D" as expected. Now I attempt to capture volume "A" or "B" -- the capture starts and captures the 200+ GB (identifying about 90% as duplicate). A new layer is created, but the newly captured volumes (A and/or B) *do not* appear in the "Backups.quanta" window! It is not possible to browse them or access their data even though QRecall captured them. The log shows "Capture to Backups.quanta" with no error messages, but no message that it finished. Huh? Where are they and their data?
|
I'm an old user of several backup programs (some enterprise, some personal) who has mainly been using Apple's Time Machine to back up multiple computers, including one with Windows "disks" (50+ GB files on HFS+) virtualized under Parallels. Needless to say, using Time Machine has certain shall-we-say "challenges", especially the inability to consolidate backups of multiple machines into a single archive and the need to back up an entire multi-GB file when a single byte changes. De-duplication (and therefore QRecall) should be my friend, so I've jumped onto the beta hoping to become a user.

But QRecall pales in comparison to Time Machine in one important respect: its impact on system usability. I rarely notice when Time Machine is doing its hourly thing, but QRecall severely impacts my computer. Suddenly my Mac's operations -- web browsing, file manipulation, anything that requires disk access -- slow down, I wonder "What the heck's going on?", and the answer is QRecall, especially during operations like compacting or verifying an archive.

(1) Time Machine sets the "use low-priority I/O" flag, which gives precedence to non-Time-Machine I/O and thereby lessens the impact of a backup. Could this be made a settable option in QRecall? (I've put a sketch of what I mean at the end of this post.)

(2) There are times I *really* need all the performance I can get from my Mac (like when running Windows in a virtualized environment), and many of QRecall's operations take *hours* to complete. I've learned the hard way that quitting QRecall in the midst of a long operation (like a compact) may have dire consequences! Could we *please* have a "Pause" button? Something that would pause QRecall until a "Resume" button is pressed? (OK, it could nag the user every X minutes while paused so its paused state wouldn't be overlooked.) Thank you.
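For what it's worth, my understanding is that request (1) comes down to a single call on OS X: a process can ask the kernel to throttle its own disk I/O so that everyone else's I/O takes precedence. A rough sketch of what I mean, using setiopolicy_np() from <sys/resource.h> (my own guess at it, not anything from QRecall's source, and I'm only assuming this is the same mechanism Time Machine relies on):

    #include <stdio.h>
    #include <sys/resource.h>   /* setiopolicy_np() and the IOPOL_* constants */

    int main(void) {
        /* Ask the kernel to throttle this process's disk I/O so that
           normal-priority I/O from other applications takes precedence. */
        if (setiopolicy_np(IOPOL_TYPE_DISK, IOPOL_SCOPE_PROCESS, IOPOL_THROTTLE) != 0) {
            perror("setiopolicy_np");
            return 1;
        }

        /* ... the long-running capture / compact / verify work would go here ... */
        return 0;
    }

Since it's set per process (or per thread), it seems like it could be exposed as a simple preference, or even applied only to the compact and verify operations that hurt the most.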