QRecall Community Forum

QRecall taking up ghost space?
Forum Index » Problems and Bugs
Ming-Li Wang


Joined: Jan 12, 2015
Messages: 78
Offline
Hi,

Ran into something strange: one of my QRecall archives seems to be hiding disk space. The archive is using 20.18 GB according to Finder, but in the same Finder window you can see that the 256 GB drive, which holds only two QRecall archives, has just 108.71 GB available. (Screenshot attached.)

There are no APFS snapshots ("sudo tmutil listlocalsnapshots" came back empty), and the volume is not indexed by Spotlight.

"df -h" shows:
/dev/disk11s1  238Gi  137Gi  101Gi    58%      87 9223372036854775720    0%   /Volumes/zoo

"du -h -d 1" shows:
16G	./uail.quanta
0B	./.Trashes
288K	./.fseventsd
121G	./docm.quanta
0B	./.TemporaryItems
137G	.

As you can see, the size of the other archive (uail.quanta) reported by du matches the number reported by Finder. The archive in question (docm.quanta), however, is taking up 121G, about 6 times as much as reported by Finder.
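For anyone who wants to quantify this kind of gap, here is a minimal sketch (assuming a POSIX shell with find, wc, du, and awk; ghost_gap is a made-up helper name) that totals the logical POSIX file sizes, which is what Finder and ls add up, and compares that with the allocated blocks that du counts:

```shell
#!/bin/sh
# Sketch: compare a directory's total logical (POSIX) file size with
# the space du says it occupies. Finder and ls sum logical sizes; du
# counts allocated blocks, so a large gap is the "ghost space".
ghost_gap() {
    dir=$1
    # Logical bytes: per-file byte counts from wc; batch "total" lines excluded.
    logical=$(find "$dir" -type f -exec wc -c {} + \
              | awk '$2 != "total" {sum += $1} END {print sum + 0}')
    # Allocated bytes: du's 1K-block count converted to bytes.
    allocated=$(du -sk "$dir" | awk '{print $1 * 1024}')
    echo "logical=$logical allocated=$allocated"
}
```

Running something like `ghost_gap /Volumes/zoo/docm.quanta` should make the discrepancy explicit in bytes rather than relying on three tools' rounded numbers.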

Verification shows no errors. Compacting releases the hidden space, on top of what a compact would normally reclaim.

I have been watching this for a while. I've compacted it a few times, but the size grows back above 100 GB in a matter of days.

The archive backs up my ~/Documents folder, so it does capture often.

My system is macOS 10.14.4, and my QRecall version is 2.1.14(6). I have five more QRecall archives on another drive; none of them has this issue.

A report has been sent.

Thanks!

[edit] The screenshot was of poor quality after shrinking. This one should be better.
(Attachment: 2.jpg)

James Bucanek


Joined: Feb 14, 2007
Messages: 1568
Offline
Thanks for sending a diagnostic report.

QRecall agrees with the Finder that your archive is approximately 20GB in size. So the mystery is why du thinks there's an extra 100GB of data in there.

The next step would be to examine the details of the archive package. In the Terminal, start with "ls -lhan /Volumes/zoo/docm.quanta" and look at what's inside the package. It should look something like this:

drwxr-xr-x@ 14 501  20   448B Apr 25 17:10 .
drwxrwxr-x 8 501 20 256B Apr 16 22:34 ..
-rw-r--r-- 1 501 20 264B Apr 24 11:31 displayname.index
-rw-r--r-- 1 501 20 1.6M Apr 24 11:31 filename.index
-rw-r--r-- 1 501 20 354K Apr 24 11:31 fill.index
-rw-r--r-- 1 501 20 3.0G Apr 24 12:29 hash.index
-rw-r--r-- 1 501 20 57K Apr 24 11:15 layer.index
-rw-r--r-- 1 501 20 16M Apr 25 08:52 negative.index
-rw-r--r-- 1 501 20 722B Apr 24 11:15 outline.index
-rw-r--r-- 1 501 20 456M Apr 25 00:20 package.index
-rw-r--r-- 1 501 20 2.4T Apr 25 17:10 repository.data
-rw-r--r-- 1 501 20 122B Apr 24 11:31 sequence.index
-rw-r--r--@ 1 501 20 1.8K Apr 24 01:10 settings.plist
-rw-r--r-- 1 501 20 1.0K Apr 25 17:10 status.plist


The largest file should be repository.data; that's where all of your data is. At a distant second is the hash.index, with the rest of the files being a tiny fraction of those sizes.

- QRecall Development -
Ming-Li Wang


Joined: Jan 12, 2015
Messages: 78
Offline
The result of "ls -lhan /Volumes/zoo/docm.quanta":
total 279009064

drwxr-xr-x@ 22 501 80 704B May 3 09:49 .
drwxrwxr-x@ 9 0 80 288B May 1 08:57 ..
-rw-r--r-- 1 501 80 19K May 3 09:49 displayname.index
-rw-r--r-- 1 501 80 7.2M May 3 09:49 filename.index
-rw-r--r-- 1 501 80 98K May 3 09:49 fill.index
-rw-r--r-- 1 501 80 96M May 3 09:49 hash.index
-rw-r--r-- 1 501 80 112K May 3 01:29 hash_adjunct.index
-rw-r--r-- 1 501 80 599K May 3 09:49 layer.index
-rw-r--r-- 1 501 80 16M May 3 09:49 negative.index
-rw-r--r-- 1 501 80 720B May 3 01:29 outline.index
-rw-r--r-- 1 501 80 8.0M May 3 09:49 package.index
-rw-r--r-- 1 501 80 16K May 3 09:49 package_adjunct.index
-rw-r--r-- 1 501 80 17G May 3 09:49 repository.data
-rw-r--r-- 1 501 80 8.3M May 3 09:49 repository_8k.checksum32
-rw-r--r-- 1 501 80 1.0G May 3 09:49 repository_p8w8k16m2.0.anvin_reed_sol
-rw-r--r-- 1 501 80 531K May 3 09:49 repository_p8w8k16m2.0_8k.checksum32
-rw-r--r-- 1 501 80 1.0G May 3 09:49 repository_p8w8k16m2.1.anvin_reed_sol
-rw-r--r-- 1 501 80 531K May 3 09:49 repository_p8w8k16m2.1_8k.checksum32
-rw-r--r-- 1 501 80 122B May 3 09:49 sequence.index
-rw-r--r-- 1 501 80 4.6K Apr 27 11:53 settings.plist
-rw-r--r-- 1 501 80 866B May 3 09:49 status.plist
-rw-r--r-- 1 501 80 5.4K Apr 18 2018 view.plist

Looks normal to me.

After a night's sleep, however, Finder now says the drive has only 95.21 GB available, 13.5 GB less than last night. "df" reports 150Gi used, 13Gi more than last night, and "du" reports 133G for "docm.quanta".

I don't know if it'll be useful, but I've sent in another report anyway, so you can see what was done overnight.

[edit] corrected a typo.
James Bucanek


Joined: Feb 14, 2007
Messages: 1568
Offline
I'm mystified. If you add up the sizes of those files it's clearly close to 20GB, and certainly not 150GB.

Have you tried repairing the volume?

- QRecall Development -
Ming-Li Wang


Joined: Jan 12, 2015
Messages: 78
Offline
James Bucanek wrote:I'm mystified. If you add up the sizes of those files it's clearly close to 20GB, and certainly not 150GB.

150GB was reported by "df", which includes the size of the other archive "uail.quanta" on the same volume.

James Bucanek wrote:Have you tried repairing the volume?

Good idea. I just did, and here is the terminal output.
Repairing file system.

Volume was successfully unmounted.
Performing fsck_apfs -y -x /dev/rdisk13s1
Checking the container superblock.
Checking the space manager.
Checking the space manager free queue trees.
Checking the object map.
Checking volume.
Checking the APFS volume superblock.
The volume zoo was formatted by newfs_apfs (748.31.8) and last modified by apfs_kext (945.250.134).
Checking the object map.
Checking the snapshot metadata tree.
Checking the snapshot metadata.
Checking snapshot 1 of 2.
error: directory valence check: directory (oid 0x3): nchildren (2) does not match drec count (0)
warning: snapshot fsroot tree corruptions are not repaired; they'll go away once the snapshot is deleted
Checking snapshot 2 of 2.
error: directory valence check: directory (oid 0x3): nchildren (2) does not match drec count (0)
Checking the extent ref tree.
Checking the fsroot tree.
error: directory valence check: directory (oid 0x3): nchildren (2) does not match drec count (0)
Verifying allocated space.
Performing deferred repairs.
error: nchildren of inode object (id 3) does not match expected value
Restarting after deferred repairs...
Checking the space manager.
Checking the space manager free queue trees.
Checking the object map.
Checking volume.
Checking the APFS volume superblock.
The volume zoo was formatted by newfs_apfs (748.31.8) and last modified by apfs_kext (945.250.134).
Checking the object map.
Checking the snapshot metadata tree.
Checking the snapshot metadata.
Checking snapshot 1 of 2.
error: directory valence check: directory (oid 0x3): nchildren (2) does not match drec count (0)
Checking snapshot 2 of 2.
error: directory valence check: directory (oid 0x3): nchildren (2) does not match drec count (0)
Checking the extent ref tree.
Checking the fsroot tree.
Verifying allocated space.
The volume /dev/rdisk13s1 appears to be OK.

Operation successful.

The two snapshots were taken late last night and early this morning, respectively. Out of habit, I take a system snapshot of the boot volume (with "sudo tmutil snapshot /" in Terminal) before installing new software that I deem suspect or likely to be removed right away. There were indeed no snapshots on the volume when I started this thread last night, and because I didn't know tmutil takes a snapshot of all volumes (not just the boot/system volume), my follow-up message this morning didn't mention it either. My apologies.
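For anyone following along, stray local snapshots can also be cleared from the Terminal without reformatting. A hedged sketch (macOS-only; it assumes Mojave's tmutil, where deletelocalsnapshots takes the snapshot's date string; delete_all_snapshots is a made-up helper name):

```shell
#!/bin/sh
# Sketch (macOS-only; assumes tmutil as shipped with Mojave, where
# "tmutil deletelocalsnapshots" accepts the snapshot date string).
# Snapshot names look like "com.apple.TimeMachine.2019-05-03-094900";
# stripping the prefix yields the date tmutil wants.
delete_all_snapshots() {
    tmutil listlocalsnapshots / \
      | sed 's/^com\.apple\.TimeMachine\.//' \
      | while read -r date; do
            tmutil deletelocalsnapshots "$date"
        done
}
```

On newer macOS releases the snapshot naming and the deletelocalsnapshots arguments differ, so treat this as a Mojave-era sketch, not a general recipe.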

There was, however, a third "nchildren (2) does not match drec count (0)" error not associated with snapshots; Disk Utility repaired it. A second check, after removing those two snapshots, shows no more errors.

Still, Finder (after a restart) is reporting 90.91 GB of available space, 4+ GB less than 4 hours ago. "df" reports 153Gi used (3Gi more than this morning), and "du" says docm.quanta is taking up 137G (2G more than this morning).

Yes, I'm mystified too, especially since the other archive on the same volume seems unaffected. That one receives only a daily update, though, so I've just changed its capture schedule to match docm.quanta's (capture 3 minutes after an item changes, with a 21-minute hiatus after each capture). The other archive's source is also busy, since it includes my home (~) folder, so there will be plenty of activity. I'll report back later this evening.
Ming-Li Wang


Joined: Jan 12, 2015
Messages: 78
Offline
Less than 5 hours into my experiment, both archives have put on substantial weight:
44G	./uail.quanta
0B	./.Trashes
320K	./.fseventsd
143G	./docm.quanta
0B	./.TemporaryItems
187G	.

Apparently the issue is not archive-specific.

Other than compacting both archives, I decided to try something different this time: I moved both of them to another drive (drive B), reformatted the original drive (drive A), and then moved them back.

As soon as the first "move" action completed, drive A got all its missing space back according to Finder, and df and du concurred. On drive B, du reported normal sizes for both archives:
15G	./uail.quanta
19G	./docm.quanta
34G	.

As expected, they are still normal after being moved back to drive A.

Another report has been sent.
Adrian Chapman


Joined: Aug 16, 2010
Messages: 72
Offline
I don't know if this is relevant, but there are issues with the way Finder reports disk space on APFS volumes. There is quite a lot about it here:

https://macintouch.com/community/index.php?threads/apfs-file-systems.1489/page-4
James Bucanek


Joined: Feb 14, 2007
Messages: 1568
Offline
It's not relevant to this issue.

The thread in the link delves into the conundrums introduced by file cloning and snapshots in APFS, which lead to strange things like duplicating a large file yet still having the same amount of free space. (Update: the thread also discusses the difference between the free space reported by the Finder and the free space reported by Disk Utility / df. That might be related to this issue, but I'm more concerned about du.)

QRecall 2.x does not take advantage of file cloning, sparse files, or snapshots (on the archive's volume), so these shouldn't be an issue. (QRecall 3.x will leverage file cloning and sparse files, something I'm working on right now.)

No, the problem is that du is reporting more data in an archive's package folder than the total of the files in that folder. Finder is reporting the correct size, and it agrees with QRecall and with the total of the POSIX sizes reported for each file. So I'm still not sure what's going on. (I've used du to check the size of my test archives on APFS volumes, and they all report the correct size.)

I will say that I think APFS is still a little green. In the past three months, two of my APFS volumes became unrepairable (one of them my boot volume) and had to be erased and reformatted. Luckily, I had backups.

- QRecall Development -
James Bucanek


Joined: Feb 14, 2007
Messages: 1568
Offline
Because of the issues I've encountered with APFS volumes getting corrupted, and since you mentioned that you have the space available to move the archives to a different volume, I'd suggest copying the archives to another volume, repartitioning and reformatting the APFS volume, then moving the archives back.

Also, you mentioned that "I take a system snapshot of the boot volume", but we're talking about the volume the archives are on, right? There shouldn't be any snapshots of 'zoo'. (It doesn't make sense to snapshot an archive volume, since archives are literally a collection of snapshots/layers.) If there are, that could be the problem, or at least the root of the free-space problem.

- QRecall Development -
Ming-Li Wang


Joined: Jan 12, 2015
Messages: 78
Offline
OK, what I did a few hours ago doesn't seem to have helped, as docm.quanta is still putting on weight. (I changed uail.quanta's schedule back to one capture per day, so it's fine.)

Given your conversation above, I'll try something new tonight: I moved the two archives away again, reformatted the drive as HFS+, and moved the archives back. We should find out tomorrow morning whether APFS is the culprit. (It's 1 am local time, and I'm going to bed.)
Ming-Li Wang


Joined: Jan 12, 2015
Messages: 78
Offline
James Bucanek wrote:Because of the issues I've encountered with APFS volumes getting corrupted, and since you mentioned that you have the space available to move the archives to a different volume, I'd suggest copying the archive to another volume, repartition and reformat the APFS volume, then move the archives back.

That's what I did earlier this evening, and it didn't help, as described in my previous post (written before I saw your latest one).

James Bucanek wrote:Also, you mentioned that "I take a system snapshot of the boot volume", but we're talking about the volume the archives are on, right? There shouldn't be any snapshots of 'zoo'. (It doesn't make any sense to make a snapshot of an archive volume, since archives are literally a collection of snapshots/layers.) If there are, that could be the problem?at least the root of the free space problem.

Yes, I meant only to take a snapshot of the boot volume (with "sudo tmutil snapshot /"). I didn't know tmutil actually takes a snapshot of every active volume. (Well, I didn't check them all, but I did check a couple, and they all had a snapshot with the same timestamp.)
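A quick way to confirm that behavior is to enumerate the snapshots on every mounted volume, not just the boot volume. A small sketch (macOS-only; it assumes tmutil and the usual /Volumes mount points; list_all_snapshots is a made-up name):

```shell
#!/bin/sh
# Sketch (macOS-only: assumes tmutil and /Volumes mount points).
# Since tmutil can snapshot every APFS volume at once, check them all
# when hunting for stray snapshots, not just "/".
list_all_snapshots() {
    for vol in / /Volumes/*/; do
        echo "== $vol"
        tmutil listlocalsnapshots "$vol" 2>/dev/null
    done
}
```

If every volume shows a snapshot with the same timestamp, that confirms the all-volumes behavior described above.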

That being said, I don't think it's useless to take snapshots of a volume dedicated to backups, since local snapshots and cloud backups are the only two means I know of that can withstand a ransomware attack.
James Bucanek


Joined: Feb 14, 2007
Messages: 1568
Offline
The problem with taking snapshots of an archive volume is that there are index files in a QRecall archive that get completely rewritten every time a capture or merge is performed. The data is mostly unchanged, but the filesystem doesn't know that.

If there are snapshots, the copies of these index files will consume a fair amount of space. Add to that the changes being made to the other files, and it starts to add up. I'm not worried about Time Machine snapshots, because macOS is smart enough to discard them if you start to run out of disk space, but they will cause a discrepancy between what you think the free space should be and the actual free space.
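As a rough back-of-envelope illustration of how that adds up (the numbers below are hypothetical, not measured from this archive): each outstanding snapshot pins the now-superseded copy of any file that has since been completely rewritten, so the retained space scales with the snapshot count:

```shell
#!/bin/sh
# Illustrative arithmetic only; all figures are hypothetical.
# Each snapshot retains one superseded copy of the rewritten index
# files, so N outstanding snapshots pin roughly N extra copies.
index_bytes=$((120 * 1024 * 1024))  # ~120 MB of index files per rewrite
snapshots=8                         # outstanding local snapshots

pinned=$((index_bytes * snapshots))
echo "~$((pinned / 1024 / 1024)) MB pinned by snapshots"
```

That back-of-envelope figure covers only the index files; churn in repository.data would add to it.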

So if this is the discrepancy, it might be something you can just ignore.

- QRecall Development -
Ming-Li Wang


Joined: Jan 12, 2015
Messages: 78
Offline
James Bucanek wrote:So if this is the discrepancy, it might be something you can just ignore.

It's not. I thought I had made that clear. I was only arguing that it's not useless to take snapshots of a volume dedicated to hosting QRecall archives.

I'm going out in a min., will report my findings after I'm back.
Ming-Li Wang


Joined: Jan 12, 2015
Messages: 78
Offline
Ten hours have passed since the drive hosting those archives was reformatted as HFS+, and there is no sign of either archive taking up any ghost space (checked in Finder and with df and du, as usual). It's now pretty clear the problem is APFS-related, and most likely an APFS bug. Thanks to Adrian for bringing the possibility to my attention, though I have no clue whether or how it's related to the issues mentioned in the discussion thread you provided.

I had decided to leave the volume as is, but an itch made me give APFS one final chance. This time I erased the whole drive (yesterday I merely reformatted the APFS volume without touching the container/partition). I'll report back if I find anything interesting.
 