|
That's a great idea ... which is why it's already in the works. However, we can't "track the clone status of files," which is why I haven't done this already.

Basically, what we want to do is implement the same logic for cloned files that we do for hard links. The ideal solution is to detect when a file is a cloned copy of another file being captured. Then, when those two files are restored, restore the first file and clone it to restore the second. The problem is that (unlike hard links) the filesystem provides no information whatsoever about which files are clones of other files. And even if it did, a cloned file can later be modified so that it becomes a "partial" clone of the other file, with some blocks sharing storage with the original and other blocks being independent of it. So the two files could still be different and must be treated as unique files.

What we need to build is a "cloned file recognition" engine that can detect when a second file is a full or partial clone of another file. But doing that thoroughly requires a massive amount of memory and processing time, since we'd literally have to compare the data allocation of every file with every other file. So the idea on the workbench right now is to (a) limit cloned file recognition to relatively large (multi-megabyte) files and (b) make it probabilistic, so it matches cloned files imperfectly, but is likely to recognize very large cloned files. This would, hopefully, be a reasonable trade-off between capture speed and catching a few large cloned files. But I haven't proven it will work yet.
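For anyone curious what "comparing the data allocation" of two files looks like in practice, here's a minimal sketch (my illustration, not QRecall's actual engine) using the macOS F_LOG2PHYS_EXT fcntl, which maps a logical file offset to a physical device offset. If two files on the same volume report the same device offset for the same logical offset, those blocks share storage, which is what a full or partial clone looks like on APFS:

    /*
     * Illustration only: probe whether two files share physical storage
     * at a given logical offset. A real recognition engine would sample
     * many offsets (and handle errors, holes, and cross-volume cases).
     */
    #include <fcntl.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static bool blocks_share_storage(int fd1, int fd2, off_t offset)
    {
        /* F_LOG2PHYS_EXT: in = logical file offset, out = device offset */
        struct log2phys a = { .l2p_contigbytes = 4096, .l2p_devoffset = offset };
        struct log2phys b = { .l2p_contigbytes = 4096, .l2p_devoffset = offset };
        if (fcntl(fd1, F_LOG2PHYS_EXT, &a) == -1) return false;
        if (fcntl(fd2, F_LOG2PHYS_EXT, &b) == -1) return false;
        return a.l2p_devoffset == b.l2p_devoffset; /* same physical block */
    }

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s file1 file2\n", argv[0]);
            return 1;
        }
        int fd1 = open(argv[1], O_RDONLY);
        int fd2 = open(argv[2], O_RDONLY);
        if (fd1 == -1 || fd2 == -1) return 1;
        /* Probe only the first block here; files duplicated with
           clone-aware copies will typically report a match. */
        printf("first blocks %s storage\n",
               blocks_share_storage(fd1, fd2, 0) ? "share" : "do not share");
        close(fd1);
        close(fd2);
        return 0;
    }

Even with a trick like this, you can see why it's expensive: there's no reverse map, so every candidate file still has to be probed against every other file, which is why limiting recognition to large files and sampling probabilistically is attractive.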
|
 |
|
David, So there was a problem. It wasn't, technically, specific to Big Sur, but Big Sur is what drove QRecall over the edge.

The layers.index file stores a compact summary of the directory structure for all of the archive's layers. There was an assumption that a folder wouldn't have more than a few hundred subfolders. If a folder had more than 65,000 subfolders, the file would get corrupted, and that's what was happening.

The metadata daemon in Big Sur creates temporary folders inside /private/var/folders (the standard UNIX location for temporary and cached data). But unlike Catalina, the Big Sur version creates a lot of subfolders. And I mean hundreds of thousands of subfolders, all in a single folder. (It's clearly taking advantage of APFS's hashed directory structure, but it was unexpected behavior.) So if you happened to capture a volume while the metadata daemon was working particularly hard, there was a possibility that you'd capture a folder with too many subfolders.

The latest version of QRecall (2.2.9) expands the subfolder limit to 4 billion. Update at your leisure.
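For the curious, those two numbers are the telltale signs of integer widths (the layers.index format itself isn't public, so this is just an educated illustration): roughly 65,000 is where an unsigned 16-bit count wraps around, and 4 billion is the ceiling of an unsigned 32-bit count:

    /* Illustration of why the limits are ~65,000 and ~4 billion:
       a 16-bit subfolder count silently wraps past 65,535, while
       a 32-bit count is good for about 4.29 billion entries. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t narrow = UINT16_MAX;  /* 65535: the old effective limit */
        narrow++;                      /* wraps to 0; the index is now wrong */
        printf("16-bit count after one more subfolder: %u\n", narrow);
        printf("32-bit count limit: %u\n", UINT32_MAX); /* 4294967295 */
        return 0;
    }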
|
 |
|
Rose,
There are two parts to this. You first download a disk image file, which should appear wherever your downloaded files normally go, typically your Downloads folder.
If you can't open the disk image file, then I suggest trashing it and trying again.
Once you've opened the disk image file, you need to copy the QRecall app to your Applications folder. Optionally, you can open the Install QRecall utility application, which will do that for you.
Once it's in your Applications folder, open the QRecall application and it will install itself.
If you can't open either the QRecall application or the Install QRecall utility, your Gatekeeper security settings might be too restrictive. In System Preferences, under Security & Privacy, go to the General tab and make sure "Allow apps downloaded from:" is set to "App Store and identified developers."
If you have anti-virus software installed, make sure it hasn't blocked or sequestered the app.
|
 |
|
QRecall 2.2 has been tested under Big Sur and appears to be generally functional. We've performed hundreds of captures here at QRecall central. There may be some odd issues with the layers.index file (see the earlier thread), which we're looking into. But for now it appears that repairing the archive will resolve the issue (and this issue does not involve any data loss). As mentioned in the 2.2.8 release notes, there are a few minor cosmetic issues, most of which are being dealt with in QRecall 3.0 (which has not been released yet).
|
 |
|
Update: Probably not a coincidence. So when I said "other users" ... that was me. I haven't run into this error on any of the hundreds of captures performed by my regular machines running Big Sur. But ... I just ran into the same error while debugging some new QRecall 3.0 code. So there's probably a Big Sur-related glitch lurking somewhere.
|
 |
|
It might be something to do with Big Sur. Or it might just be a coincidence. The next step is to see if it happens again, or to other users.
|
 |
|
David, Thanks for the diagnostic report. Your logs indicate that there's a corrupt value in the layers.index file. This is an auxiliary index file that summarizes the directory structure of the entire archive. A reindex of the archive should fix it. Your logs also show that you started a repair on 11-16 at 12:25. It was canceled about 50 minutes later, and it promptly stopped at 13:11. I suspect that starting another repair, and letting it finish, will fix the problem. If not, please send another diagnostic report after the repair.
|
 |
|
David, Sorry to hear you're having problems. I'm sure there are going to be some edge cases to work out, but I have several computers here running Big Sur and they're all capturing files as I type this. The next step is to send a diagnostic report so we can take a look at exactly what those errors are.
|
 |
|
Everyone safe and healthy here! I hope everyone else reading this is too.

FileVault and QRecall play just fine together ... now. In the very early days of OS X, FileVault was implemented as a special mount point, backed by a hidden disk image file that handled the encryption. This complicated things in numerous ways for QRecall. First, QRecall captures volumes, and your home folder was now on a different volume. You couldn't capture files for another user if they weren't logged in. And if you restored your files, FileVault might make them all disappear when you logged in again. *sigh*

Mac OS X Lion (10.7) introduced FileVault 2 (officially known as "FileVault," with the earlier version being referred to as "Legacy FileVault"). FileVault 2 encrypts the entire volume at the block level and is completely transparent to the filesystem and QRecall.

If you want the same level of protection for your QRecall archive, you can enable encryption. (Make sure you make a backup copy of your encryption keys and keep them safe.)
|
 |
|
Yes, this year has not been great. However, we've made a tremendous amount of progress on QRecall 3.0. The main thing slowing it down has just been the massive number of new features and changes.

So far, we've completely re-engineered archive file management to take advantage of APFS features, and added record optimization and new multi-threaded record processing for faster database access. We're also updating the code base to get ready for ARM (Apple Silicon) computers. And we've added support for capturing APFS volumes, including the ability to capture and restore sparse files.

But the huge new feature, and the one taking all of the time, is Stacks. A stack is a remote, incremental copy of your archive, organized into "slices." Essentially, a backup of your backup. But unlike the archive, which is an interactive database, a stack is written as serial, compact slices suitable for efficient cloud storage. We have basic document stacks working now, and would like to have at least one cloud service ready for the beta. Ultimately, we will offer several different cloud services (Amazon S3, Dropbox, SFTP, ...) to choose from. We're still working on the code that allows you to recover/repair an archive from a stack, but most of the stack functionality is already up and running, and we're using it in-house.

So the year hasn't been a total loss.
|
 |
|
The principal reason to merge layers is to trim/simplify the history of incremental backups in your archive. This frees up the space occupied by those earlier layers, which becomes available for new captures. So instead of growing forever, your archive becomes a conveyor belt: capturing new changes while letting ancient changes fall off the other end. If you have sufficient disk space, you don't care how big your archive gets, and you don't mind wading through hundreds of layers to find an old version of something, then you don't have to merge layers.

The only difference between a "merge" and a "rolling merge" is that the rolling merge automatically selects groups of layers to merge based on a rolling calendar. Other than that, there's no difference. Or said another way, a simple merge will merge a range of layers based on a formula that you give it, while a rolling merge picks sets of layers to merge based on calendar intervals you choose. If you want the traditional "rolling backups" provided by conventional backup solutions, the rolling merge automates that for you. But if you want absolute control over what gets merged and when, then the simple merge action (or doing it manually in the QRecall app, or rolling your own logic using the command-line tool) is at your disposal.

Finally, if you have a merge action, see if it has conditions. For example, if you tell the capture assistant that you want to keep as much history as possible, it will create a merge action that merges the oldest layers in the archive, but only if the free space on the disk is below a certain threshold. The end result is a merge action that never runs until the disk starts to fill up. It then runs every day until the size of the archive is reduced enough to stop it from running again (see the sketch below). I hope that helps!
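To make that last condition concrete, the gating logic amounts to something like this (a schematic sketch with made-up names, not QRecall's actual scheduler):

    /* Schematic of a conditional merge action: the action is scheduled
       every day, but the condition keeps it from doing anything until
       free disk space drops below the threshold you chose. */
    #include <stdbool.h>

    /* All names here are hypothetical, for illustration only. */
    static bool merge_condition_met(unsigned long long disk_free_bytes,
                                    unsigned long long threshold_bytes)
    {
        return disk_free_bytes < threshold_bytes;
    }

    static void run_daily_merge_action(unsigned long long disk_free_bytes,
                                       unsigned long long threshold_bytes)
    {
        if (!merge_condition_met(disk_free_bytes, threshold_bytes))
            return;                 /* plenty of room: do nothing today */
        /* merge_oldest_layers(); -- reclaim space from ancient history */
    }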
|
 |
|
Jon, Got your diagnostic report. And yes, QRecall cannot start its privileged helper. You also have a lot of "not found" errors on your archives, but we'll need to straighten out the helper first.

This is always an installation problem. To gain elevated privileges, QRecall must request that macOS install a "privileged helper" on its behalf. Sometimes (and I'm not always sure why) this doesn't happen or it gets messed up. Once messed up, a new helper can't be installed and QRecall can't uninstall itself (because uninstalling requires the privileged helper). A true catch-22.

The fix is simply to manually uninstall QRecall. This takes a little bit of work, but I'll give you a shortcut. The steps are in the help under QRecall > Help > Guide > Advanced > Uninstall > Uninstall QRecall (the hard way). There's also a copy on the web. As a shortcut, start by performing just steps 3 and 4. (I suspect this is the root of the problem.) After restarting, launch QRecall. It should prompt for your admin credentials and reinstall itself. If it doesn't prompt you, make sure the QRecall > Preferences > Authorization > "Capture and recall using administrative privileges" setting is turned on.

If that doesn't solve the problem, perform the entire uninstall procedure (making sure you skip the † "dagger" steps and step 9, as you don't want to start over when you re-install). Restart again and then launch QRecall. If this still doesn't solve the "can't launch privileged helper" problem, please send another diagnostic report (QRecall > Help > Send Report) and we'll dig deeper.
|
 |
|
What's most likely is that the B1 volume got reformatted/resized/whatever and now appears to QRecall as two different volumes. You can confirm this by looking at the timeline (or info sidebar) and seeing that all of the captures of the first B1 occur before the second B1.
If you want to keep all of this history, simply select both B1 icons and choose Archive > Combine Items. This command takes two volumes, or two owners, that are actually the same volume/owner and stitches them together to form a single item with one timeline.
Alternatively, if you're not interested in that much history, find the older B1 (again, use the info sidebar) and delete it (Archive > Delete Item).
|
 |
|
Working backwards...
Yes, redundancy increases the size of the archive. If you selected the default redundancy level (1:8), then every eight kilobytes of data is accompanied by an additional one kilobyte of redundant information, making everything you store 12.5% bigger.
The archive also contains metadata, index files, and other database-related information, which make it bigger than the actual file data. But that typically doesn't account for more than a couple of percentage points.
The "insufficient free space" message is the compact action trying not to waste time and endanger your data by moving it around pointlessly. The compact action won't physically compact the archive unless at least 12% of the archive's database file is empty space. This prevents the compact action from moving hundreds of Gigabytes of data, only to recover 4K. You can adjust this threshold in the advanced settings ("Compact Free Space Ratio Minimum"), or you can run the compact from the QRecall menu: open the archive in the browser, then choose Archive > Compact. When run from the menu, it ignores the free space ratio.
But while you have the archive open, navigate to the root of the archive and check that you don't have any other owners or volumes that might contain older versions of your data. If you do, consider deleting the older owners/volumes (before compacting).
Finally, if you want the smallest possible archive size for files you're keeping for posterity, consider turning on compression in the archive's settings. If you raise the compression level and then compact the archive, the entire archive will be compressed. (Note that it only does this once; once compressed, it won't uncompress or recompress the data.) If you still need even more space back, consider reducing the redundancy level.
I hope that helps!
|
 |
|
It's hard to say exactly what was going on, but I wouldn't say this behavior is surprising. When a volume fills up, it becomes harder and harder for the filesystem to locate the free clusters needed while writing, and files become increasingly fragmented. If the volume is really, really full, this slowdown can be excruciating; like 1,000 or more times slower than normal. In other words, an operation that would normally have finished in 1/10 of a second could take a minute or more. The first compact after a capture performs garbage collection, which results in rewriting several quanta index files. If this was happening on a nearly full drive, I can imagine abysmal performance. Your cleanup of 300 MB was probably enough to save the volume from fragmentation purgatory, and as soon as the compact was stopped it would have deleted its own temporary index files, freeing more space and probably restoring much of the filesystem's typical performance.
|
 |
|
|
|