Thanks for the detailed explanation. I don't have Google Drive "installed" on my system, as I use it only for backup purposes. Those cloud storage drivers are notoriously resource hungry and tend to make the system less stable, so I've tried to stay away from them. My current cloud backup solution is Arq, which manages cloud connections by itself, and I hope someday QRecall can do the same. Take your time, though. No rush. Thanks for the excellent software, as always.
|
 |
|
James Bucanek wrote:
Steven J wrote: 1. Any plans to allow iCloud to be a stack destination? i.e., create a stack container on iCloud Drive?
That should be possible within the next couple of weeks.
Reading this gives me hope that maybe Google Drive or OneDrive will be supported someday? For students and people working at an educational institution, the price of Google Drive or OneDrive can't be beat--they are free, with ample space. I have terabytes backed up on Google Drive, and will never switch to iCloud or S3 for that purpose. I have been running the v3 betas on my system for a few months without issues, but have yet to try the "stacks" feature, mainly because it seems to be useful only for backing up to the cloud. Am I right? Or is it something worth doing even when backing up locally? Thanks!
|
 |
|
Doh! How come I didn't think of making a new action? Silly me. I thought what I did in step 5 above (re-selecting the backup source and the target archive) would be the same. Apparently it isn't. After removing the old action and making a new one, QR3 does capture following file changes. Thank you! I might have run into another bug, though, while making the new action: the details of the schedule--specifically the length of the delay and how long QR should ignore new events after a capture run--don't always stick. E.g., the last thing I did before sending in the diagnostic report was to change the delay from 3 min. to 5 min. and to ask QR to ignore new events for 20 min. The second change was kept while the first one was ignored; the delay remains 3 min. One minor suggestion, if I may: please consider adding a set of "Save/Cancel" buttons to the action settings dialog. Having to close the window before being offered an opportunity to save the settings feels odd. Thanks!
|
 |
|
Hi, I decided to start test-driving the QRecall 3.0 beta yesterday. All seemed to be fine until I realized just now that one of my archives using an event-based capture schedule--set to run when "capture items change," with a 3 min. delay to be exact--had not run since the QR 3 beta was installed. What I have tried so far:
1. A manual run from the action window, which went smoothly--no difference.
2. Rebooted my system--no difference.
3. Tweaked the scheduling details a little (from a 3 min. delay to a 5 min. delay)--still no go.
4. Tweaked the scheduling details some more, this time removing the "ignore new events for 20 min." restriction--still no difference.
5. Tweaked the scheduling details again by re-selecting the target archive and the "item to capture" and changing the delay back to 3 min.--still no difference.
After each step above, some files on the backup source (the drive selected as the "item to capture") were changed to make sure new file-change events were triggered. One of my cloud sync tools (Syncovery) also uses file-change-event-based scheduling, and it synced all the changes dutifully. More details about my system: I'm running QR 3.0 beta v.74 on a hackintosh desktop with a freshly installed (not upgraded) macOS 12.1. QR version 2.2 was installed first before upgrading to 3.0 B74. QR2 has been working on the system for years. With QR2 and now the QR3 beta I have two active archives. The other one is on a time-based capture schedule (every 6 hours) and is working uneventfully. Thanks for listening.
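In case it helps with debugging, here is a rough sketch of how one could confirm that file activity is actually hitting the capture source; the volume path below is just a placeholder for whatever drive is selected as the "item to capture", and fs_usage needs root:
sudo fs_usage -w -f filesys | grep "/Volumes/CaptureSource"
touch "/Volumes/CaptureSource/event-test.txt"
If the touch shows up in the fs_usage output, disk activity is clearly reaching the source volume, which would point at the scheduler rather than the file system.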
|
 |
|
10 hours have passed since the drive hosting those archives was reformatted to HFS+, and there is no sign of either archive taking up any ghost space (checked in Finder and with df & du, as usual). It's now pretty clear it's APFS-related (& most likely an APFS bug). Thanks to Adrian for bringing the possibility to my attention, though I have no clue if/how it's related to the issues mentioned in the discussion thread you provided. I had decided to leave the volume as is, but an itch made me give APFS one final chance. This time, I erased the whole drive (yesterday I merely reformatted the APFS volume without touching the container/partition). I'll report back if I find something interesting.
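For anyone following along, the difference between the two erase operations can be sketched with diskutil (the device identifiers here are illustrative and will differ on other systems):
diskutil eraseVolume APFS zoo disk13s1     # reformat only the volume; the APFS container and partition map stay
diskutil eraseDisk APFS zoo GPT disk13     # wipe the whole drive: new GPT partition map, new container, new volume
The second form is the one that discards any state kept at the container level.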
|
 |
|
James Bucanek wrote:So if this is the discrepancy, it might be something you can just ignore.
It's not. I thought I had made that clear. I was only arguing that it's not useless to take snapshots of a volume dedicated to hosting QRecall archives. I'm going out in a min.; I will report my findings when I'm back.
|
 |
|
James Bucanek wrote:Because of the issues I've encountered with APFS volumes getting corrupted, and since you mentioned that you have the space available to move the archives to a different volume, I'd suggest copying the archive to another volume, repartition and reformat the APFS volume, then move the archives back.
That's what I did earlier this evening, and, as I said in the previous post (written before I saw your latest post), it didn't help.
James Bucanek wrote:Also, you mentioned that "I take a system snapshot of the boot volume", but we're talking about the volume the archives are on, right? There shouldn't be any snapshots of 'zoo'. (It doesn't make any sense to make a snapshot of an archive volume, since archives are literally a collection of snapshots/layers.) If there are, that could be the problem--at least the root of the free space problem.
Yes, I meant only to take a snapshot of the boot volume (with "sudo tmutil snapshot /"). I didn't know tmutil actually took a snapshot of each active volume. (Well, I didn't check them all, but I did check a couple and they all had a snapshot with the same timestamp.) That being said, I don't think it's useless to take snapshots of a volume dedicated to backups, as system snapshots and cloud backups are the only two means I know of that can withstand a ransomware attack.
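A quick way to see which volumes actually received a snapshot is to ask tmutil about each mount point in turn; this is just a sketch assuming the standard /Volumes layout:
for v in / /Volumes/*; do echo "== $v"; tmutil listlocalsnapshots "$v"; done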
|
 |
|
OK, what I did a few hours earlier doesn't seem to have helped, as docm.quanta is still putting on weight. (I changed uail.quanta's schedule back to one capture per day, so it's fine.) Given your conversation above, I'll try something new tonight: I moved the two archives away again, reformatted the drive to HFS+, and moved the archives back. We should find out whether APFS is the culprit tomorrow morning. (It's 1 am local time, and I'm going to bed.)
|
 |
|
Less than 5 hours into my experiment, both archives have put on substantial weight:
44G ./uail.quanta
0B ./.Trashes
320K ./.fseventsd
143G ./docm.quanta
0B ./.TemporaryItems
187G .
Apparently the issue is not archive-specific. Rather than compacting both archives, I decided to try something different this time: I moved both of them to another drive (drive B), reformatted the original drive (drive A), and then moved them back. As soon as the first "move" action completed, drive A got all its missing space back according to Finder, as confirmed by df and du. On drive B, "du" reported a normal size for both:
15G ./uail.quanta
19G ./docm.quanta
34G .
As expected, they are still normal after being moved back to drive A. Another report has been sent.
|
 |
|
James Bucanek wrote:I'm mystified. If you add up the sizes of those files it's clearly close to 20GB, and certainly not 150GB.
150GB was reported by "df", which includes the size of the other archive "uail.quanta" on the same volume.
James Bucanek wrote:Have you tried repairing the volume?
Good idea. Did it just now. And here is the terminal output.
Repairing file system.
Volume was successfully unmounted.
Performing fsck_apfs -y -x /dev/rdisk13s1
Checking the container superblock.
Checking the space manager.
Checking the space manager free queue trees.
Checking the object map.
Checking volume.
Checking the APFS volume superblock.
The volume zoo was formatted by newfs_apfs (748.31.8) and last modified by apfs_kext (945.250.134).
Checking the object map.
Checking the snapshot metadata tree.
Checking the snapshot metadata.
Checking snapshot 1 of 2.
error: directory valence check: directory (oid 0x3): nchildren (2) does not match drec count (0)
warning: snapshot fsroot tree corruptions are not repaired; they'll go away once the snapshot is deleted
Checking snapshot 2 of 2.
error: directory valence check: directory (oid 0x3): nchildren (2) does not match drec count (0)
Checking the extent ref tree.
Checking the fsroot tree.
error: directory valence check: directory (oid 0x3): nchildren (2) does not match drec count (0)
Verifying allocated space.
Performing deferred repairs.
error: nchildren of inode object (id 3) does not match expected value
Restarting after deferred repairs...
Checking the space manager.
Checking the space manager free queue trees.
Checking the object map.
Checking volume.
Checking the APFS volume superblock.
The volume zoo was formatted by newfs_apfs (748.31.8) and last modified by apfs_kext (945.250.134).
Checking the object map.
Checking the snapshot metadata tree.
Checking the snapshot metadata.
Checking snapshot 1 of 2.
error: directory valence check: directory (oid 0x3): nchildren (2) does not match drec count (0)
Checking snapshot 2 of 2.
error: directory valence check: directory (oid 0x3): nchildren (2) does not match drec count (0)
Checking the extent ref tree.
Checking the fsroot tree.
Verifying allocated space.
The volume /dev/rdisk13s1 appears to be OK.
Operation successful.
The two snapshots were taken late last night and early this morning, respectively. Out of habit, I take a system snapshot of the boot volume (with "sudo tmutil snapshot /" in Terminal) before installing new software that I deem suspect or likely to be removed right away. There were indeed no snapshots on the volume when I started the thread last night. Because I didn't know tmutil would take a snapshot on all volumes (not just the boot/system volume), my follow-up message this morning didn't mention it either. My apologies. There was, however, a third "nchildren (2) does not match drec count (0)" error not associated with snapshots. Diskutil repaired it. A second check after removing those two snapshots shows no more errors. Still, Finder (after a restart) is reporting 90.91 GB of available space, 4+ GB less than 4 hours ago. "df" is reporting "153Gi" used (3Gi more than this morning), and "du" says docm.quanta is taking up 137G (2G more than this morning). Yes, I'm mystified, too, especially since the other archive on the same volume seems to be unaffected. That one receives only a daily update, though. So I've just changed its capture schedule to match docm.quanta's (3 min. after item change, with a 21 min. hiatus after each capture). The source of the other archive is also busy, as it includes my home (~) folder, so there will be a lot of actions. I'll report back later this evening.
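For reference, the per-volume cleanup described above can be done from Terminal roughly like this (a sketch; the snapshot date is whatever the list command prints, and older macOS releases may only accept a date, not a mount point, for the delete command):
tmutil listlocalsnapshots /Volumes/zoo
sudo tmutil deletelocalsnapshots <snapshot-date>
diskutil verifyVolume /Volumes/zoo
Re-running the verification afterwards confirms whether the "directory valence" errors really do disappear with the snapshots.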
|
 |
|
The result of "ls -lhan /Volumes/zoo/docm.quanta":
total 279009064
drwxr-xr-x@ 22 501 80 704B May 3 09:49 .
drwxrwxr-x@ 9 0 80 288B May 1 08:57 ..
-rw-r--r-- 1 501 80 19K May 3 09:49 displayname.index
-rw-r--r-- 1 501 80 7.2M May 3 09:49 filename.index
-rw-r--r-- 1 501 80 98K May 3 09:49 fill.index
-rw-r--r-- 1 501 80 96M May 3 09:49 hash.index
-rw-r--r-- 1 501 80 112K May 3 01:29 hash_adjunct.index
-rw-r--r-- 1 501 80 599K May 3 09:49 layer.index
-rw-r--r-- 1 501 80 16M May 3 09:49 negative.index
-rw-r--r-- 1 501 80 720B May 3 01:29 outline.index
-rw-r--r-- 1 501 80 8.0M May 3 09:49 package.index
-rw-r--r-- 1 501 80 16K May 3 09:49 package_adjunct.index
-rw-r--r-- 1 501 80 17G May 3 09:49 repository.data
-rw-r--r-- 1 501 80 8.3M May 3 09:49 repository_8k.checksum32
-rw-r--r-- 1 501 80 1.0G May 3 09:49 repository_p8w8k16m2.0.anvin_reed_sol
-rw-r--r-- 1 501 80 531K May 3 09:49 repository_p8w8k16m2.0_8k.checksum32
-rw-r--r-- 1 501 80 1.0G May 3 09:49 repository_p8w8k16m2.1.anvin_reed_sol
-rw-r--r-- 1 501 80 531K May 3 09:49 repository_p8w8k16m2.1_8k.checksum32
-rw-r--r-- 1 501 80 122B May 3 09:49 sequence.index
-rw-r--r-- 1 501 80 4.6K Apr 27 11:53 settings.plist
-rw-r--r-- 1 501 80 866B May 3 09:49 status.plist
-rw-r--r-- 1 501 80 5.4K Apr 18 2018 view.plist
Looks normal to me. After one night's sleep, however, Finder now says the drive has only 95.21 GB available, 13.5G less than last night. "df" is reporting 150Gi used, 13Gi more than last night. And "du" is reporting 133G for "docm.quanta". Don't know if it'll be useful, but I've sent in another report anyway, so that you can see what has been done overnight. [edit] corrected a typo.
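One more hedged check that might help pin down where the phantom space is being charged: macOS's BSD stat can show a file's logical size next to the blocks actually allocated to it, so an over-allocated repository.data would stand out. A sketch:
stat -f "%N: logical %z bytes, allocated %b blocks of 512 bytes" /Volumes/zoo/docm.quanta/repository.data
If the allocated blocks add up to far more than the logical size, the extra space is attached to the file itself rather than to snapshots or hidden items.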
|
 |
|
Hi, I ran into something strange: one of my QRecall archives seems to be hiding disk space. The archive is using 20.18 GB according to Finder, but in the same Finder window you can see that a 256GB drive holding only two QRecall archives has only 108.71 GB available. (Screenshot attached.) There are no APFS snapshots ("sudo tmutil listlocalsnapshots" came back empty), and the volume is not indexed by Spotlight. "df -h" shows:
/dev/disk11s1 238Gi 137Gi 101Gi 58% 87 9223372036854775720 0% /Volumes/zoo
"du -h -d 1" shows:
16G ./uail.quanta
0B ./.Trashes
288K ./.fseventsd
121G ./docm.quanta
0B ./.TemporaryItems
137G .
As you can see, the size of the other archive (uail.quanta) reported by du matches the number reported by Finder. The archive in question (docm.quanta), however, is taking up 121G, about 6 times as much as Finder reports. Verification shows no errors. Compacting would release the hidden space, on top of what a compact would normally save. I have been watching this for a while: I've compacted it a few times, but the size would grow back above 100G in a matter of days. The archive backs up my ~/Documents folder, so it does capture often. My system is macOS 10.14.4, and my QRecall version is 2.1.14(6). I have another 5 QRecall archives on another drive; none of them has the issue. A report has been sent. Thanks! [edit] The screenshot was of poor quality after shrinking. This one should be better.
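One hedged suggestion: tmutil only reports Time Machine's own local snapshots, while diskutil can list every APFS snapshot on the volume, including ones created by other tools. The device identifier below is taken from the df output above:
diskutil apfs listSnapshots disk11s1
If that also comes back empty, snapshots can be ruled out as the source of the missing space.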
|
 |
|
Very happy to see v2.1 in beta. I'm running beta 31 on macOS 10.13.4. First bug to report: QRecall sometimes has trouble deleting items from an archive, taking a long time waiting for the archive to close (screenshot 1). The dialog will say "complete" if I wait long enough, but it won't go away, and the "Stop" button stays grayed out and unclickable (screenshot 2). When that happens, QR has to be force-quit. I've run into the problem numerous times with two different archives, so it doesn't seem to be archive-specific. I've also deleted items from both archives successfully, so it seems random to me so far. A report has been sent. edit: the forum is acting funny and the 2nd attachment is shown on top.
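A hedged tip for the next time it hangs: before force-quitting, capturing a sample of the stuck process gives the developer a set of call stacks to look at. The process name here is an assumption, so substitute the PID from Activity Monitor if it differs:
sample QRecall 10 -file ~/Desktop/qrecall-hang-sample.txt
That samples the process for 10 seconds and writes the call stacks to a text file that can be attached to a report.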
|
 |
|
Hi James, Congratulations on the v2 release. QRecall has been one of my favorite tools since I discovered it a little more than a year ago, and now it's the first program I install when setting up a new system. Having followed the v2 beta process all the way through, moreover, I must say I admire your work ethic, especially the methodical way you tackle bugs. No software is perfect, and I'm sure I'll bring you more requests later; I believe you have your own plans as well. But QR2 is truly a gem and the best backup tool for the Mac (and I've tried many). Thank you and, again, congratulations!
|
 |
|
Doh! False alarm. Finally got it. I'm not sure since when, but QR has created another new volume for the partition. I always combine them when I see that, but I was unaware of this one. It turned out I was looking at the wrong volume. My mistake; my sincere apologies.
|
 |
|