Erik, QRecall works at the filesystem level. So-called "smart" folders are an abstraction created by the Finder. The Finder stores the metadata needed to find and display the contents of its "smart" folder in a file, and that file is what QRecall captures. Unfortunately, QRecall can't look into the mind of the Finder to find out what those files are, any more than it could look into iTunes to capture the audio files in a playlist. I've considered adding the ability to capture items based on some search criteria, but there are so many problems with this idea that I just keep putting it off.
|
|
|
Bernd, QRecall 3.0 (the next major release) will have new features that address cascading, cloud, and off-site backups. It also has a boatload of performance and reliability improvements. We have no firm estimate of when it will be available, but look for beta tests to start sometime this fall.
|
|
|
Johannes, Thanks for the confirmation! Look for a new release of QRecall that works around this issue.
|
|
|
What appears to be happening is that the repository.data file (the file with all the important data) is taking up more physical (allocated) space than it actually contains. In the case of Archiv 1, the repository.data file contained 40GB of data but occupied 79GB of disk space. Stunningly, Archiv 2 contained only 0.4GB of data, yet occupied 151GB of disk space. While adding sparse file support to QRecall 3.0, I noticed that APFS can sometimes over-allocate space for a file, and that seems to be what's happening here. My working theory is that APFS is not correctly handling pre-allocation requests. As the repository.data file grows during a capture, QRecall periodically makes pre-allocation requests at the end of the file so that, if the disk suddenly runs out of space, QRecall has enough "head room" to write its wrap-up metadata and session records. To test this theory, I've built a pre-release version of QRecall 2.1.16(1) that you can download and install. This hacked version doesn't perform any pre-allocation of the repository.data file. It won't fix the over-allocated space you have now, but if you compact the archive and perform new captures, those new captures won't over-allocate the file, assuming my theory is correct. Give this version a try and please keep me posted.
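For the curious, here's a quick way to see how a file's logical size and its allocated (on-disk) size can disagree. This is just an illustration, not QRecall's actual code: the demo fabricates a sparse file, where the two numbers diverge in the opposite direction from the over-allocated repository.data case, but it's the same pair of numbers you'd compare.

```shell
# Create a file that claims 100MB but occupies almost no disk space,
# then compare its logical size with its allocated size.
f=$(mktemp)
# Write one byte at an offset just shy of 100MB; the "hole" before it
# is never allocated by the filesystem.
dd if=/dev/zero of="$f" bs=1 count=1 seek=$((100 * 1024 * 1024 - 1)) 2>/dev/null
logical=$(wc -c < "$f")                       # logical size, in bytes
allocated=$(du -k "$f" | awk '{print $1}')    # allocated size, in 1K blocks
echo "logical=${logical} bytes, allocated=${allocated}K on disk"
rm -f "$f"
```

In the over-allocation bug described above the relationship is reversed: the allocated figure (what `du` and the Finder report) dwarfs the logical one.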
|
|
|
Johannes, You are the third user who has reported something like this, and I'm beginning to suspect it's a bug in APFS. It also makes no sense to me that compacting the archive would make any difference. I would be interested in getting some allocation information about the archive files by running the 'ls -lskn' Terminal command on the archive (particularly at a point in time when the archive size and Finder size disagree), like this:
ls -lskn /Volumes/YourVolume/Path/To/Archive.quanta
Secondly, I'd be interested to know if the disk repair tool in Disk Utility produces any anomalous output when you repair that volume. Finally, I'd be curious to know if there are any snapshots of that volume. You can find out by issuing the command:
tmutil listlocalsnapshots /Volumes/YourVolume
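If you'd rather check every file in the archive package at once instead of eyeballing 'ls -lskn' output, a small script like the following (a sketch; the archive path is an example, and the "twice its logical size" threshold is an arbitrary choice of mine) will flag any file whose allocation is far larger than its contents:

```shell
# Flag files whose on-disk allocation greatly exceeds their logical size,
# which is the symptom described above. Run it against the .quanta
# package while the archive size and Finder size disagree.
scan_allocation() {
  find "$1" -type f | while read -r path; do
    logical_kb=$(( ( $(wc -c < "$path") + 1023 ) / 1024 ))
    allocated_kb=$(du -k "$path" | awk '{print $1}')
    # Report anything occupying more than twice its logical size
    # (plus a little slack so tiny files don't trigger it).
    if [ "$allocated_kb" -gt $(( logical_kb * 2 + 8 )) ]; then
      echo "over-allocated: $path (${logical_kb}K data, ${allocated_kb}K on disk)"
    fi
  done
}
# Example: scan_allocation "/Volumes/YourVolume/Path/To/Archive.quanta"
```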
|
|
|
When you use the top shade to hide earlier layers, the item browser can't use the pre-calculated size hints that are normally available to it. To determine the size of a folder or package, the browser view must read all of the items it contains. Since this can potentially mean millions of items, it doesn't do that automatically. If you "drill down" into that folder and every subfolder it contains, you'll see the calculated size when you return to the top-level folder. This is easiest to do in list view.
|
|
|
Steven J Gold wrote: is there any way of looking at a specific layer/backup to see what files were captured, especially if there are any large ones?
Yes, there is. Every layer is a delta, recording/adding only those changes that have occurred since the last layer. Open the archive and drag the top and bottom layer shades to isolate a single layer. By hiding all of the changes that occurred before and after that layer, the item browser will show just those items that were captured in that layer. (If what you're looking for isn't obvious, use the View menu to show invisible items and package contents.) You can also use the shades to isolate a group of layers, showing you all of the items captured last week, for example.
|
|
|
Mike, Thanks for sending the diagnostic report. It would appear that you're running into an issue (read "bug") that was addressed in QRecall 2.1.14. I would suggest you start by updating to the latest QRecall (in the QRecall app choose QRecall > Check for Updates…). Once updated, verify the archive again. If you encounter errors, repair once more. I suggest choosing the default option to reconstruct your redundant data. If the repair still fails, or fails to verify afterwards, please send another diagnostic report and we'll investigate further.
|
|
|
Mike, A repaired archive should verify, so that's not right. Start by sending a diagnostic report. In the QRecall application choose Help > Send Report…. We'll review your logs and see what we can find.
|
|
|
There are actually three archive size limits. QRecall's internal limit is now 6TB. It used to be 2TB, and still is if you happen to be running one of the (very old) 32-bit versions of QRecall/OS X. The second limit is the maximum file size supported by the archive's volume. This varies from one format to another; for example, a USB volume factory formatted for MS-DOS (AKA FAT32) can't store files larger than 4GB. Finally, the archive can't grow larger than the volume itself; QRecall tries to stop a capture before the archive completely fills the volume.
If you're hitting the 2TB limit, you need to upgrade your OS. If you're hitting the 6TB limit, splitting your archive is a good solution (and will probably improve performance). If you're hitting the volume's file size limit, consider reformatting the volume to a format that supports larger files, or split your archive. If your volume is full, you probably need a second (or larger) volume.
There's no way to split an archive directly. (This has been a requested feature, but it's still on the wish list.) However, it's pretty easy to do manually:
1. Decide on a division of content. In my personal system, I capture my iTunes folder (which is pretty huge) to a second archive, and everything else to my primary archive.
2. Duplicate your archive.
3. In one archive, delete everything you plan to capture in the second archive.
4. In the second archive, delete everything you'll be capturing in the first archive.
5. Compact both archives.
6. In the first (say, the "everything else") archive's settings, exclude the item(s) you plan on capturing to the second archive.
7. Continue capturing everything to the first archive (the exclusion rule will omit the items destined for the second archive).
8. Create a new capture action to capture the excluded item(s) to the second archive.
9. Make a copy of all of the maintenance actions for the first archive (merge, compact, repair, ...) and change the archive in each copy so those same actions are performed regularly on the second archive.
It's a good idea to suspend your scheduled actions while you're performing this kind of surgery.
|
|
|
Sorry, but no. QRecall captures and restores volumes.
|
|
|
Darryl, QRecall will certainly reduce the data to its minimum amount. And don't forget to turn on compression too! Creating a disk image of the archive won't provide many benefits and will actually make the archive a little larger.
---If, on the other hand, you want to create the absolute smallest possible file to upload, consider creating your archive with compression off and then creating a compressed disk image (disk image compression is more aggressive than QRecall's). Warning: that is going to take a long time and needs double the space of the archive.---
The biggest problem I see is reliably uploading a single, gigantic file to the server in one shot. 2TB of data, even if you have a 100Mb/sec internet link, is going to take a couple of days to upload. If Google Drive's software will handle partial transfers, failures, restarts, and so on, then use that. Otherwise, I'd look for software that can deal with interruptions, or slice up the archive (or the image of the archive) into smaller pieces and upload those.
Finally, if you go the disk image route, consider turning on a modest amount of data redundancy (say 1:8) when you create the archive. That much data over a WAN is at risk of dropping a few bits here and there. Downside: it will make the whole archive about 12% bigger.
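For the "slice it up" route, the standard split and cat tools are enough to cut a big file into fixed-size pieces and later reassemble them byte-for-byte. A small sketch (the file names are examples, and the tiny test file stands in for the much larger disk image discussed above):

```shell
# Slice a file into pieces for upload, then prove the pieces
# reassemble byte-for-byte.
workdir=$(mktemp -d) && cd "$workdir"
dd if=/dev/urandom of=archive.img bs=1024 count=256 2>/dev/null
sum_before=$(cksum < archive.img)
split -b 100k archive.img archive.img.part.   # use something like -b 1g for real uploads
cat archive.img.part.* > rejoined.img         # the glob expands in order, so the bytes line up
sum_after=$(cksum < rejoined.img)
echo "before: $sum_before / after: $sum_after"
```

Upload the pieces individually, then run the cat step on the receiving end; comparing checksums before and after catches any bits dropped along the way.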
|
|
|
Ernest, QRecall does not read or attempt to capture the EFI partition of a device. In macOS, the EFI partition really isn't used (for booting, at least), and isn't modified in a way that would require it to be preserved or restored. If you repartition the device using Apple tools, a new (empty) EFI partition will be created.
|
|
|
The problem with taking snapshots of an archive volume is that there are index files in a QRecall archive that get completely rewritten every time a capture or merge is performed. The data is mostly unchanged, but the filesystem doesn't know that. If there are snapshots, the copies of these index files will consume a fair amount of space. Add to that the changes being made to the other files, and it starts to add up. I'm not worried about Time Machine snapshots because macOS is smart enough to discard them if you start to run out of disk space. But they will cause a discrepancy between what you think the free space should be and the actual free space. So if that's the discrepancy you're seeing, it might be something you can just ignore.
|
|
|
Because of the issues I've encountered with APFS volumes getting corrupted, and since you mentioned that you have the space available to move the archives to a different volume, I'd suggest copying the archives to another volume, repartitioning and reformatting the APFS volume, then moving the archives back. Also, you mentioned that "I take a system snapshot of the boot volume", but we're talking about the volume the archives are on, right? There shouldn't be any snapshots of 'zoo'. (It doesn't make any sense to take a snapshot of an archive volume, since archives are literally a collection of snapshots/layers.) If there are, that could be the problem, or at least the root of the free space problem.
|
|
|