|
Jeffrey, The answer is "kinda," at least for files that have been deleted. In the QRecall browser window there are two commands: Edit > Select existing items and Edit > Invert Selection. You can see where this is going...
Browse the archive to the folder where you deleted files.
Choose Edit > Select existing items.
Choose Edit > Invert Selection.
The remaining selected items are the ones that exist in your archive but do not exist in your filesystem.

Occasionally I get requests for a restore "pre-flight" feature that will definitively list the items that would be restored. It's tentatively on the 3.0 development list, but we haven't gotten to it yet. Quick poll: would a "preflight" feature still be useful if it was only available from the command line?
|
|
|
Bruce Giles wrote:The folder itself gets backed up, but nothing in the folder does, so that's fine.
If you want to exclude the folder too, use this pattern:

/Users/bgiles/Documents // Virtual Machines ☐ ∞
|
|
|
All display names and icons in the archive browser window are cosmetic. When a folder is displayed in the archive window, the item names are shown as soon as they are assembled, while cosmetic information (like the icon) is collected on a background thread so you can start working with the items as soon as possible. Eventually the background thread will load the icon data and update the display. But it's just that: a display.

Icon and localization information that affects how an item appears in the Finder is collected during the capture, but only so that QRecall can show you, as accurately as possible, how the item appeared when it was captured. That display information isn't actually part of the item, and when the item is restored it will once again be the job of the macOS operating system to determine its icon and localized display name.

I hope that explains what's going on. If you still think there's a problem with the display, please take some screenshots and either post them here or send them to support@qrecall.com.
|
|
|
First, let's back up a step. Please describe the file(s) that QRecall is keeping open that are preventing your volume from unmounting. None of the QRecall background components should keep any files open on external volumes, except the binary executable files themselves ... and those shouldn't be on volumes that would ever be unmounted while the system is running. The scheduler and monitor will occasionally check the reachability and status of archives, and they may watch for changes in certain directories, but neither of these should prevent a volume from unmounting.

I also noted that the path /Volumes/xxxx/Users/yyyyy/Library/LaunchAgents/com.qrecall.scheduler.plist might indicate that your home folder is not on the startup volume, which could complicate your installation.

Finally, launchctl unload is a legacy command that's no longer supported; I'm not even sure it would work. Even if it does, your command is targeting the wrong session type.
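For reference, here's a rough sketch of the modern launchctl syntax for unloading and reloading a per-user launch agent. The plist path below is an assumption (the usual ~/Library/LaunchAgents location); adjust it for your setup. This is only an illustration of the newer verbs and the per-user GUI domain, not a recommended way to manage QRecall's scheduler:

    # Hypothetical example: modern replacement for "launchctl unload",
    # targeting the per-user GUI domain. The plist path is an assumption.
    launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/com.qrecall.scheduler.plist

    # And the modern replacement for "launchctl load":
    launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.qrecall.scheduler.plist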
|
|
|
It's probably a bookmark issue. Excluded items are stored as bookmarks (which replace aliases), but, like aliases, changes to volume identifiers and whatnot can cause them to lose track of their original item.

As an alternative, try using an exclusion pattern, since this isn't a folder that's likely to get renamed or moved. In the Patterns section, add a new "glob" pattern, enter the path you want to exclude on the left, and make the pattern to exclude * on the right, like this:

/Users/bgiles/Documents/Virtual Machines // * ✓ ∞

That should permanently exclude every item in your Virtual Machines folder.
|
|
|
Jessie, The new backup model is a feature called "stacks". The idea is that QRecall will isolate all of the data that makes up an individual layer in the archive, which we're calling an archive "slice". You then periodically upload slices to a stack, making that stack an archive of your archive.

We plan on making stacks as flexible as possible, both in terms of where they can be stored and how often they get updated. A stack can be as simple as a second hard drive, but it could also be on a local server, on a remote server, a pile of (re)writable DVDs, a folder in your cloud storage (Apple, Google, ...), Dropbox, AWS S3, and maybe more. We're toying with the idea of providing a managed network-based storage service available for a monthly fee, but the primary tools will always be available for you to decide, control, and manage your own stack(s).

We were hoping to get this into beta this fall, but Catalina has seriously disrupted our development schedule. Watch for a beta announcement on the forum or just check the download page on the website.
|
|
|
Bruce Giles wrote:So those two requests for access were unexpected, and caused a significant delay. I'm hoping this was a one-time thing, and the problem won't recur.
It shouldn't recur, at least for the questions you've already answered. But I'm actually surprised it interfered with the capture process. It shouldn't; the QRecallHelper and QRecallScheduler processes are largely independent of one another, and blocking the scheduler shouldn't prevent a capture from running. And, as you might guess, this is a security feature, so there's no way to disable it or get around it. During your initial upgrade you'll just need to answer "OK" to any access questions.
By the way, when I moved the archives, they appeared to copy rather than move. I had a copy in /Users/Shared, and I had another copy in the "Relocated Items" folder. My archives were large enough that this shouldn't have been possible -- I didn't have enough free space on the drive to maintain two copies of the archive files. When I deleted the copies in the Relocated Items folder, I checked the free space on the disk and noticed that it didn't change. Apparently the duplicate copies never occupied any space. I'm guessing this is one of the new tricks that APFS provides.
Yes, this is an APFS trick. In APFS, copying a file (by default) creates a "clone" of that file: both files share the same set of data blocks. A lazy "copy-on-write" feature makes the actual copy, on a block-by-block basis, should you modify either of the files.
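If you want to see this for yourself, here's a quick Terminal experiment (just a sketch; the file names are placeholders). The -c option asks cp to clone the file on APFS, and the volume's free space should be essentially unchanged afterwards:

    df -h /Users/Shared                   # note the volume's free space
    cp -c BigFile.dat BigFileCopy.dat     # -c clones the file via clonefile(2) on APFS
    df -h /Users/Shared                   # free space should be essentially unchanged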
|
|
|
There is now a Catalina (macOS 10.15) compatible version of QRecall available. If you are already running Catalina, QRecall will suggest this update automatically. If you are not yet running Catalina you can download it manually, although you won't (shouldn't) notice any difference. Please read the release notes, as they contain important information about how system volume captures and restores have changed.
|
|
|
Bruce Giles wrote:Or does Reindex do something that Repair doesn't?
Both reindex and repair rebuild all of the index files (using the same code). The only difference between a reindex and repair is that the reindex doesn't touch the master repository.data file. A repair opens the master data file read+write and will erase or repair any damaged records or inconsistencies that it finds. A reindex opens the master data file read-only and simply fails if it finds any inconsistencies.
|
|
|
Steven J Gold wrote:Any known issues with QRecall under Catalina?
Other than "it won't work at all," things are looking pretty good. But seriously, a Catalina-compatible version is in the works. It's been a long road, because Catalina breaks many basic assumptions about the filesystem and volumes.

In a nutshell, Catalina splits a bootable macOS volume into two volumes: a read-only system volume and a read-write "data" volume, where all of the modifiable files get stored. It then uses a new filesystem construct (called a "firm" link) to blend the contents of the two together so they appear to be a single volume. Since the idea of backup software is to capture the files you'd want to restore, the new QRecall captures only the "data" half of a system/data volume pair. This actually accomplishes a long-term goal of QRecall, which was to isolate just the files you need to restore a bootable volume and not capture any of the immutable system files (that the OS installer would simply overwrite anyway).

While conceptually simple, this has resulted in a large number of adjustments to the software. QRecall has always been "device" oriented, capturing and restoring all of the physical files on a single volume, so the idea that all of the files on a volume are, well, on that volume is deeply ingrained in the software. But we have made a lot of progress. I can't guarantee it will be ready for the Catalina release, but it should be close. I hope to have a beta within another week or so.
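For the curious, here's a rough way to poke at the system/data split from the Terminal on a Catalina system (a sketch, assuming the standard volume group layout; these are stock macOS tools, nothing QRecall-specific):

    mount | grep /System/Volumes/Data     # the writable "data" volume, mounted separately
    diskutil apfs list                    # shows the System/Data volume group pairing
    cat /usr/share/firmlinks              # the list of firm-linked paths that stitch the two together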
|
|
|
Well, color me dumbfounded. Bruce, you were absolutely right. This is not a snapshot issue.

TL;DR: Try reindexing the archive.

The diagnostic report pinpointed what was eating up the time, but I have absolutely no (definitive) explanation as to why. So here's what's going on (technical details).

The archive maintains a very (sometimes very, very) large hash table used to search for duplicate data that's already been added to the archive. This table is so big that it's impractical to make a copy of it every time you perform a capture. So when QRecall captures a modest amount of data, instead of copying and updating the master hash file, it writes the handful of updates to an "adjunct" hash file. This adjunct hash file is read in again when the next capture starts, on the theory that the adjunct file will be orders of magnitude smaller than the master hash file. Eventually the adjunct hash entries will exceed a threshold and QRecall will "bite the bullet," making a copy of the master hash file and updating it. At that point there are no adjunct entries and the whole thing starts over again.

So back to the problem. Your capture is getting stuck reading in the adjunct hash entries. Here's the (interesting part of the) sample trace:
+ 9814 -[CaptureCommand execute] (in QRecallHelper)
+ 9814 -[RepositoryCommand prepareRepository] (in QRecallHelper)
+ 9814 -[RepositoryPackage prepareWithMode:] (in QRecallHelper)
+ 9814 -[DataHash prepareWithMode:] (in QRecallHelper)
+ 9317 -[DataHash loadAdjunctEntries] (in QRecallHelper)
+ ! 9315 -[NegativeChecksumMap add:] (in QRecallHelper)
+ ! 2 -[DataHash addEntry:] (in QRecallHelper)
+ ! 2 -[DataHash insertEntryIndexIntoHash:forChecksum:] (in QRecallHelper)
+ 497 -[DataHash loadAdjunctEntries] (in QRecallHelper)
+ 497 DDReadFile (in QRecallHelper)
+ 497 read (in libsystem_kernel.dylib) + 10 [0x7fff5df83ef2]
During the sample period, 5% of the time was spent reading the adjunct file and 95% of the time was spent inserting those entries into the in-memory hash cache.

And here's where it gets weird. That insert function ([NegativeChecksumMap add:]) is literally 5 lines long. It consists of an exclusive or, a bit shift, a mask, an address calculation, and an add. A modern CPU should be able to do several hundred million of these a second. It should be so fast that it shouldn't even show up in the stack trace. Yet it's accounting for 95% of the delay...

My only guess is that it might be hitting virtual memory, assuming there are other large (memory footprint) processes running at the same time. Or the negative map has been mapped into memory and the page loads are just really, really slow for some reason. Basically, what I'm saying is that VM paging/contention is the only thing I can think of that would account for this miserable performance.

So that's the problem. One "solution" would be to reindex the archive. A reindex will rebuild all of the index files, including the hash file, from scratch. At the end, the hash file will be complete and up to date and there won't be any adjunct entries to read or write. Of course, this just kicks the problem down the road, as the adjunct entries will again start to accumulate as small captures are completed. But start with a reindex and see if that resolves the problem.
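If you want to test the paging theory the next time a capture gets stuck, you could watch memory and paging activity while it runs. A rough sketch using stock macOS tools (the QRecallHelper process name is taken from the trace above; everything else is standard):

    memory_pressure                              # one-shot report of system-wide memory pressure
    vm_stat 5                                    # watch page-ins/page-outs every 5 seconds while the capture runs
    top -o mem -stats pid,command,mem,pageins    # sort by memory; look for QRecallHelper's pageins climbing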
|
|
|
Bruce, If you suspect that snapshots are not the problem, the next step would be to send a diagnostic report during that 10 minute period. The diagnostic report will take a sample of all running QRecall processes. If the capture action is stuck, the report should pinpoint exactly where.
|
|
|
I'd be very curious to see what you discover. The magic tool is tmutil. It has a localsnapshot command to create a local snapshot of all APFS volumes. (I'm not aware of any way of creating a snapshot for a specific volume, the way QRecall does.) You can also list the snapshots on a volume (listlocalsnapshots and listlocalsnapshotdates), and delete them by date (deletelocalsnapshots). Both QRecall and macOS are responsible for deleting their stale snapshots, so any snapshots you create will eventually get deleted.
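Here's roughly what that looks like in the Terminal (the date string is just an example; use one reported by the list commands):

    tmutil localsnapshot                            # create a local snapshot of all APFS volumes
    tmutil listlocalsnapshots /                     # list the snapshots on the startup volume
    tmutil listlocalsnapshotdates /                 # list just the snapshot dates
    tmutil deletelocalsnapshots 2019-10-07-120000   # delete snapshots for a given date (example value)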
|
|
|
Bruce, I have noticed that the Mojave snapshot process can be quite lengthy at times. I'm not sure exactly what the criteria are, but I suspect it competes with other changes being made to the volume at the same time. I first noticed this when I was stepping through a test version of the capture action in the debugger and thought the process had deadlocked. I paused the executable to find it waiting in the create_snapshot() function. After about three minutes, it finally finished and went on.

If this is a problem for some reason, you can still disable snapshots by turning off the "Capture a Snapshot" option in the advanced settings. Just be warned that this option might go away in future versions, because capturing without a snapshot is deeply problematic in Mojave and later.
|
|
|
Actually, that's a pretty good solution, especially if you're using APFS. By default, copying a file in APFS makes a clone of that file; essentially a "snapshot" of the file that doesn't use any additional storage (until one of the files is modified). So copying your found files into a folder, capturing that folder, and then deleting those files should be remarkably fast and efficient.

Pro tip: I can see at least two ways of automating this. (1) Automate your copy routine to copy your recent files into a fixed folder, then set up a QRecall action to capture that folder whenever it changes. As soon as your copy is done, the capture action will take off. (2) Use the same capture action, but run it on a schedule, and add a prolog script that performs the find and copy before the capture runs (see the sketch below).
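Here's a minimal sketch of what such a prolog script might look like. The folder paths and the one-day cutoff are placeholder assumptions; adjust the find criteria to match however you identify your "recent" files:

    #!/bin/sh
    # Hypothetical prolog script: gather recently modified documents into a
    # staging folder for QRecall to capture. Paths and cutoff are examples only.
    STAGING="$HOME/CaptureStaging"
    mkdir -p "$STAGING"

    # Copy files modified in the last day; on APFS, cp -c clones them,
    # so the copies take essentially no additional disk space.
    find "$HOME/Documents" -type f -mtime -1 -exec cp -c {} "$STAGING/" \;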
|
|
|
|
|