David,

It's likely that the cause is one of two things.

First, both Time Machine and QRecall exclude certain items. Both have an internal list of items that are never backed up, along with a mechanism to arbitrarily mark items that should be excluded. For Time Machine, this "do not back up" attribute is a service of the operating system. Applications can, for example, mark files or folders that should not be backed up, typically because they contain superfluous or transient data (like a video editor's render files). Time Machine will always exclude items marked "do not back up" by the operating system. QRecall will exclude them too, if you have the "Exclude items excluded by Time Machine" option set in the archive's settings. When this option is set, QRecall uses the same OS attribute that Time Machine does to exclude items (in addition to any other QRecall exclusion settings you might have). Since these two VM files don't appear anywhere in QRecall, I'm assuming this is your issue.

The solution/test (for QRecall) is to turn off the "Exclude items excluded by Time Machine" setting. QRecall will then stop using the operating system's exclusion attribute to make capture decisions. The downside of this change is that the operating system marks a lot of things that really don't need to be backed up, and QRecall will now ignore that advice and capture them anyway. If this archive captures only your virtual machine files and nothing else, turning off this setting won't capture too many superfluous files. If this archive also captures the rest of your system, then I would suggest splitting your archive into two:

- An archive with the "excluded by Time Machine" setting turned on that captures your entire system, but excludes your VM files.
- A second archive with the "excluded by Time Machine" setting turned off that captures just your VM files.

This has other advantages too, such as the ability to trigger a re-capture of your VM files when you quit your VM software, or to prevent QRecall from trying to capture your VM files while your VM software is running (which would undoubtedly result in an incomplete capture of your VM state).

The second possible issue is how OS X tracks filesystem changes. Both QRecall and Time Machine depend on the filesystem change history to quickly determine what portions of the filesystem have been modified. There are, however, well-known limitations in how the filesystem change history works that make it blind to certain kinds of changes, and a few VM software packages out there are notorious for making just those kinds of changes. The result is that neither Time Machine nor QRecall sees the changes in your VM.

QRecall combats this deficiency with two features. First, there's the "Filesystem Change History Trust" setting (in the Advanced settings pane): periodically, QRecall will ignore the filesystem change history and exhaustively scan the entire folder hierarchy looking for modified items. The default interval is once a week, and you can edit it. Second, there's the "Deep Scan" option in the capture action, which forces QRecall to perform an exhaustive search for changes every time that capture is run. The capture option was specifically designed to let you create a separate capture action for just your VM files that always ignores the filesystem change history, while your other captures continue to take advantage of it.
But since you say that your VM files are not captured at all, I suspect that the filesystem change history issue isn't your problem (although you might discover that it's a problem once you start capturing your VM files).
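If you want to check whether your VM files carry the OS's "do not back up" marker, you can ask Time Machine directly from Terminal. (A quick diagnostic sketch; the path is a placeholder for your actual VM files.)

tmutil isexcluded ~/Virtual\ Machines/MyVM.vmwarevm

tmutil prints [Excluded] or [Included] for each path you give it. Note that it reports any kind of Time Machine exclusion, not just the "do not back up" attribute, but it's a quick way to confirm what Time Machine (and, with the option above set, QRecall) will skip.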
|
|
|
John, You're not alone. This has been on the to-do list for a while; it's just a matter of coming up with a cogent UI and an efficient algorithm for performing the pattern matching during the capture process. As a hack, it's sort of possible to do this now using the qrecall command line tool. You can use the tool to set the capture preferences for a file, and you can use a shell script to perform the pattern matching. Thus, it's possible to execute a script like this to manually mark all *.mov files in your Documents/Projects folder so that they are never captured:
find ~/Documents/Projects -iname '*.mov' -print0 | xargs -0 qrecall captureprefs exclude

Of course, this isn't dynamic. Every time you create new *.mov files you'll have to re-run the script to mark the new ones, but it's better than nothing. I should also note that many modern applications correctly mark intermediate files (rendering cache files and so on) using OS X's "do not back up" setting. If you set the archive's "Exclude items excluded by Time Machine" option, QRecall will honor those hints and automatically exclude those items.
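If you'd like to preview exactly which files will be marked before committing, prefix the qrecall command with echo so the pipeline just prints what it would do:

find ~/Documents/Projects -iname '*.mov' -print0 | xargs -0 echo qrecall captureprefs exclude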
|
|
|
My working UI design is to allow you to attach a note to an item (file or folder). That note would appear in the inspector window when you select the captured item(s), and a summary of all notes would appear in the inspector panel when you select a layer.
|
|
|
This feature might not be that far away, although the design of the UI has changed a couple of times. Right now I'm favoring an implementation that lets you attach short textual notes to items the same way you set the capture preferences. So, for example, you could make a major change to a file and then attach a capture note. The new version, and the note, would both be added to the archive when the file was captured.
|
|
|
Ah, that's actually a distinction I wasn't considering: the difference between having an encryption key installed and ensuring that all of the data in the archive has been encrypted. Thanks for the clarification.
|
|
|
I can't see what utility that would have, but I'll put it on the wish list. Feel free to enlighten me.
|
|
|
Steven J Gold wrote: Remember RAM Doubler from Connectix back in the '90s?
Wow, does that bring back memories. It also reminds me of Stacker (from my bad MS-DOS days).
Steven J Gold wrote: Connectix's patents on memory compression expired, and recently Apple used stuff from them to implement Memory Compression in (Yosemite, was it?) because it was faster to compress memory than to swap it to disk.
Startup drive I/O is pretty fast these days, so I suspect this is more about conserving disk space than performance. Most boot drives use SATA and can write 100MB/s, and 300MB/s is no longer uncommon. Newer SSDs can move data as fast as 500MB/s. Even with a fast 6-core CPU, it would be tough to compress 300MB of data in less than a second.
Steven J Gold wrote: I actually found that the fastest way to move a large (40+ GB) file from a USB-2 connected disk to my laptop is to Restore it from a QRecall archive to the target disk rather than do a straight copy. I assume this is because the archive is compressed and thereby takes fewer I/O operations to "read" the file than to do a Finder copy from the external disk? (I would never have guessed I'd use QRecall as a "faster than Finder" file copier.)
That's very cool, and makes perfect sense.
|
|
|
Steven J Gold wrote: BTW, do the existing quanta need to be decompressed for the comparison, or does the comparison operate on the compressed data? I guess I'm asking if the de-duplication process is slower when the archive is compressed?
Short answer: yes, compression adds overhead, which means it's probably going to be slower. Uncompressed file data is used to search for a duplicate block in the archive. If a match is found, the quanta is decompressed and compared with the file data. If no match is found, the file data is compressed and written to the archive. This is potentially faster than compressing the file data first, because decompression is always faster than compression; in other words, QRecall avoids compressing a block until it has to. Having said that, if you have really slow archive access (USB, slow network, ...) and a relatively fast (multi-core) computer, compression can actually speed up actions: when decompressing the data takes less time than the additional time it would have taken to read the larger, uncompressed record from the archive, it's a net win.
|
|
|
Welcome to the club. I upgraded my workhorse Mac Pro to an SSD last year and can't imagine life without it.
Steven J Gold wrote:... I expected QRecall to capture it as a new volume but since its contents are almost completely identical to the prior replaced volume, I expected the Capture to find 99% of the data already in the archive (it turned out to be 98.69%) and complete very rapidly. So I was surprised when it took over 4 hours to capture 167.7GB since it actually only needed to write 1.53GB.
QRecall wrote 1.53GB of data, but it read 335.4GB. Remember that de-duplication requires that every block of every source file be looked up in a gigantic database of captured quanta. Once found, the archive record containing the captured quanta is read and compared, byte-for-byte, with the data block in your file to ensure they are identical. So even when the files you're capturing are 100% duplicates of what's in the archive, QRecall still has to read all of that data twice: once from the files (167.7GB) and again from the archive (another 167.7GB, which accounts for the 335.4GB total). Most of the capture speed improvements come from anticipating the data being captured, or from determining that a file is already captured and not reading it at all. Both of those optimizations only happen when items are recaptured; they never happen during the initial capture.
Steven J Gold wrote: Most surprising was the variance in speed it reported. Sometimes it reported "1.63GB per second", but sometimes only "7.28 *MB* per second"; that's quite a variance in magnitude(!). The average rate was 687 MB/min. I'm curious why it sometimes dipped into the single MB/sec digits.
QRecall has a lot of moving parts, and it's really hard to tell what's going on from one moment to the next. Sometimes the capture needs to pause while directories are pre-scanned, hash tables are updated, record number indexes are pruned, or a glut of empty records is being erased. The bottom line is, unless you perform a sample of the QRecallHelper process while it appears to be stuck, I can't tell you exactly what it was doing (not that it's usually that interesting anyway).
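Sampling is easy to do yourself the next time an action looks stalled. (The 10-second duration and output path below are just examples.)

sample QRecallHelper 10 -file ~/Desktop/QRecallHelper-sample.txt

The resulting call-stack report shows where the process was spending its time, and it's a useful thing to include with a diagnostic report.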
|
|
|
Sorry to hear your trial key isn't working. It's likely due to a quirk (bug) in OS X.

Starting around OS X 10.9, access to preference files is coordinated by a background process named cfprefsd. This is actually a good thing, and solves a lot of issues with multiple processes stomping on each other's preference values. Unfortunately, it also introduced a bug where one process can't see the changes made by another process for long periods of time, sometimes indefinitely. When you entered the trial key it was stored in your QRecall application preferences, but the helper process (the one that actually performs a capture) can't see it.

Usually a restart of your system is all that's needed to clear the logjam. Try that and see if the trial key starts working. (Killing cfprefsd might work too, but I've had reports that sometimes that isn't enough.) If that doesn't work for some reason, send a diagnostic report: in the QRecall application choose Help > Send Report… and we'll look into it further.
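If a full restart is inconvenient, you can try restarting just cfprefsd from Terminal first (as noted above, this isn't always enough):

killall cfprefsd

The process is relaunched automatically on demand, so there's nothing to start back up afterwards.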
|
|
|
If your snapshot files (which tend to be large) are slowing things down a lot, you might want to check your shifted-quanta capture setting (Archive > Settings...). Shifted-quanta detection will not be beneficial for either disk images or memory image files, so for VMware it should be set to its lowest setting or just turned off. If that doesn't help, there are additional things to try. It's also unlikely that QRecall is spending a lot of time capturing .lck files; these tend to be very small semaphore files. It's more likely that a .lck file was just the last thing it captured while looking for the next thing to capture.
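If you're curious which files in the VM bundle are actually dominating the capture, a quick size check from Terminal will tell you. (The bundle path is a placeholder for your own VM.)

du -sk ~/Documents/Virtual\ Machines.localized/MyVM.vmwarevm/* | sort -rn | head

The big entries will almost certainly be the disk and memory image files, not the tiny .lck files.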
|
|
|
It was just coincidence then. The timing (action started running at 10:03:37 and the volume mounted at 10:03:40) made it look like the volume had been automatically mounted by QRecall.
|
|
|
The errors you're getting when the volume is mounted using AFP are strange indeed. I'm not entirely sure what to make of them. A prime example is the capture that failed today. The archive couldn't be opened because the length of the negative hash map file was reported to be -4 by the operating system. I don't think I've ever seen a negative file length. I will also note that the volume containing the archive was not mounted when the action started, so QRecall mounted the network volume automatically. (This also appears to have caused two other volumes to mount at the same time.) All three volumes were mounted using AFP. Because of this, I might offer the possibility that when OS X mounts the volumes they get mounted using AFP, but when your utility mounts the volume they get mounted using SMB.
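To see which protocol each network volume is actually using, the standard mount command will show it; no QRecall involvement needed:

mount | grep -E 'afpfs|smbfs'

AFP mounts are listed with afpfs, SMB mounts with smbfs.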
|
|
|
I can't offer any advice as to why your NAS suddenly decides to mount your volumes using AFP, but the vendor might. I would be interested in investigating why your archive is getting corrupted and needs repair. If you can, please send a diagnostic report from the computer that is mounting the volume using AFP and trashing the archive.

The Apple Filing Protocol (AFP) has a few known bugs, and older versions have a bunch of them, so it might depend on what version of AFP your NAS is running. If you're still running QRecall 1.2, there are some advanced settings designed specifically to work around some of these bugs. QRecall 2.0 uses a different filesystem API that was supposed to put this kind of incompatibility behind us, but depending on the version of AFP there still might be issues. The two biggest stumbling blocks in AFP are a pre-allocation bug that ends up filling the volume and a file-size limitation. The pre-allocation bug can be worked around, but if it's a very old version of AFP that can't write really large files, there's nothing QRecall can do about that.
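If you want to force a volume to mount over SMB rather than AFP, connecting with an explicit smb:// URL does that. (The server and share names here are placeholders.)

open 'smb://mynas.local/Backups'

The same URL works in Finder's Go > Connect to Server… dialog; an afp:// URL does the reverse.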
|
|
|
Ralph,

Yikes, that's a lot of errors. Most of them do appear to be related to network communications: the captures, verifies, and repairs that fail predominantly report POSIX error 60 or 6. Error 60 is an "operation timed out" error, usually associated with a network communications socket or device channel. Error 6 is a "no such device" error; in this context it usually means the volume/drive being addressed is no longer connected. A lot of your failures follow the pattern of "timed out" errors, later followed by "no such device" errors. I suspect you're having network communications or remote storage device problems, where the device initially stops responding to requests and later appears to go offline. You can see this in events like the one starting on May-12, where the first capture starts but dies with an error 60, and subsequent actions then fail because they can't access any archive files (error 6).

I also see that your archive's volume tends to get mounted and unmounted a lot. At first I thought this might be an indication of a problem, but the timing wasn't quite right. Instead, it seems to be by design; you apparently mount the volume and then manually start a capture action. (Just FYI: if the volume can be automatically mounted, QRecall will mount, and unmount, the volume for you.)

I did find one really suspicious sequence of events that I think led to all of the problems on May-9. The volume was mounted at 12:23 and the capture action was manually run a few seconds later. The capture ran until 13:17, at which time it encountered network timeout errors (60) that prevented it from finishing. But before that, it appears that the system was put to sleep:
2016-05-09 13:16:30.695 -0700 Power Management will go to sleep
2016-05-09 13:23:29.285 -0700 Power Management did power on

(Please note that there's another possible problem here. Starting somewhere around OS X 10.10, the kernel lets background processes run in so-called "power nap" mode, where the system is mostly asleep but some background processes are still running. Unfortunately, QRecall seems to be one of the processes that's allowed to run, but not enough of the rest of the system is awake for it to function correctly, and errors ensue.)

It's been my experience that network sockets don't like to be put to sleep, and they can take quite a while to recover when the system wakes up again, which is probably the source of that particular failure.

Without anything definitive to go on, I'd recommend isolating the pieces one at a time to see if you can find some improvement. First, is it possible to eliminate the network and server for a trial and connect the archive drive directly to the system, just to make sure it's not the drive or something else? Then replace pieces one at a time and see if that makes any difference: try a different network connection; if you're using WiFi, try a hard-wired ethernet cable; if you're using ethernet, see if you can use IP-over-FireWire or something. Can you move the archive drive to a different server?

I know I'm not being terribly helpful, but I hope this gives you some ideas to try.
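Two more things worth trying from Terminal: correlate sleep/wake events with your failed captures using the power management log, and temporarily prevent idle sleep while a long capture runs. (The one-hour timeout is just an example.)

pmset -g log | grep -iE 'sleep|wake'

caffeinate -i -t 3600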