Steven, I'm afraid you'll have to ignore this one. The com.apple.TCC folder contains sensitive information that Apple doesn't want malware to have access to. QRecall works around most file restrictions with special privileges, but the com.apple.TCC directory seems to be different. Also note that it's very inconsistent; some users report problems reading it, while most systems capture it every day just fine. The folder contains a database that stores, among other things, the security preferences and allowances defined for various apps. Obviously, macOS is super paranoid about letting a bad actor modify (or even read) this information. The dvp database is similarly protected. I would suggest adding an exclusion to your archive that skips these items. Should you need to perform a complete restore of your start-up volume, it just means you'll need to re-grant apps access to your private information (contacts, external volumes, etc.). Which is probably a good thing to review from time to time anyway.
|
|
|
That is correct. When you change identity keys, you are a new "owner," so all of your items will be captured for the first time. Of course, since the archive you're using has already captured these items, most (if not all) of the data is duplicate. What's taking time is reading each block of every file, looking up that block in the archive's database, and comparing it to make sure it's a duplicate. That's a lot of work. If you're not concerned with the (short) history of changes you already have, then it won't hurt to simply wipe the archive and start over: choose File > New Archive and overwrite your existing archive with an empty one. Then restart the capture. If you do want to keep your history, let the capture finish (or stop the one in progress). Open the archive, navigate to the root, select both owners, and choose Archive > Combine Items.... This will change the owner of all of your previously captured items so they now belong to your new owner (identity). From there on out, it will be as if you had started with your permanent identity key from the beginning. I hope that helps.
|
|
|
That's the "next action" from the scheduler. You could try restarting the scheduler, but it should (eventually) just go away.
|
|
|
Is it an action that's running? (Or one that has stopped with an error?) The activity window only shows the progress of running actions, the results of failed ones, and the next scheduled action. Deleting an action document, archive, etc. won't affect the display of a process that's already started/finished.
If you think it's a cosmetic issue, you can try restarting the monitor (quit or SIGTERM the QRecall Monitor process).
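A minimal example from Terminal (assuming the process really is named "QRecall Monitor"; check Activity Monitor and adjust the name if yours differs):

    kill -TERM $(pgrep -f "QRecall Monitor")    # politely ask the monitor process to quit

Then relaunch QRecall (or log out and back in) and the monitor should start up again.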
|
|
|
maxbraketorque wrote:For the second question, there are no orphaned actions listed in the Actions window.
Oh, I think you might be referring to the Status window, not the actions or activity window. The Status window shows the state of all known, and previously known, archives. To make QRecall forget about the status of an archive, click on the action (gear) button, or Right/Control-click anywhere in that pane, then choose Forget.
|
|
|
QRecall doesn't track files. It doesn't matter if you delete a file and create a new one, or move the file, or rename the file, or swap two files, or .... well you get it. All QRecall knows is that a file that was there yesterday no longer exists, and a file that wasn't there yesterday now exists.
The disappearance of the old file will be noted, and the new file will be captured, de-duplicating all of its data against what has been previously captured.
So no new data gets added to the archive (except a little meta-data), but it does take time to perform all of that de-duplication.
If fast captures during the day are important and you have plenty of extra disk space, there is a feature just for this. In the capture action there's a "defer de-duplication" option. If you set that, no data de-duplication is performed during the capture. All new file data (~80GB) is simply appended to the archive. The next compact action to run begins by de-duplicating all of that deferred data. Note that this is slower than if the data had been de-duplicated during the capture, but it does give you the option of performing the de-duplication when it is more convenient.
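As a rough analogy only (flat text files standing in for QRecall's internal structures, which are far more sophisticated):

    cat new_block_hashes.txt >> deferred.txt      # capture with deferral: just append, no lookups
    # later, when the compact action runs:
    sort -u deferred.txt main_index.txt > merged.txt && mv merged.txt main_index.txt
    : > deferred.txt                              # the deferred work is now folded into the main index

The point is simply that the lookup/merge work doesn't disappear; it's postponed to a time you choose.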
|
|
|
maxbraketorque wrote:1) I decided to break out each of my four external drives into their own automated QR archive using the QR Assistant. If I want all four archive backups to run back-to-back, can I set their action times to all be say 15 minutes apart, e.g., start actions for backup1 at 8pm, actions for backup2 at 8:10 pm, etc? Will sets of actions for each archive run serially with sets of actions from other archives, or will each set of actions run in parallel if the prior set of actions for the archive before it haven't finished?
Overlapping actions that modify an archive will run sequentially (only one action can be modifying an archive at a time). Overlapping actions that just read an archive (recall, verify) and actions on other archives will all run concurrently. If that introduces performance or resource issues, there are two settings in the QRecall Scheduler preferences that can help: Maximum concurrent actions and Maximum actions per volume. If you set Maximum concurrent actions to 1, only one scheduled action will run at a time. (Note that these settings have no effect on commands initiated from the UI or the command line.)
2) When I was first experimenting with using QR, I created a few archives that I subsequently manually renamed and then subsequently deleted. Some actions for these deleted archives seem to be persisting in the QR monitor window. How do I find and remove these actions?
Open the Window > Actions window and find the actions that no longer have archives. Select them. Delete them.
|
|
|
See QRecall Help > Guide > Troubleshooting > Problems > Can't Open Archive ... and let us know if that helps.
|
|
|
QRecall does not copy files. It breaks them into blocks of data and adds the unique blocks to a massive database.
As the archive grows, the corpus of previously captured data grows. Every new block of data must query the captured set to determine if it's unique. This involves several layers of hash tables and indexes, and many of these tests will require data to be read from the archive, usually in a very random manner.
So I/O performance will always be lower than what you'd see if you simply wrote the files. There will be occasional reading of the archive during a capture, and the drive's seek time becomes an important performance metric. In short, it's a lot of work.
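To make that concrete, here's a toy sketch of the kind of work involved (the fixed 64 KiB block size and flat text index are my simplifications; the real archive uses several layers of hash tables and indexes):

    split -b 65536 somefile.dat /tmp/blk_                                 # break the file into blocks
    for b in /tmp/blk_*; do
        h=$(shasum -a 256 "$b" | cut -d' ' -f1)                           # hash each block
        grep -qxF "$h" /tmp/index.txt 2>/dev/null || echo "$h is new"     # look it up in the captured set
    done

Every one of those lookups can turn into a read somewhere in the archive, which is where the random I/O comes from.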
|
|
|
Note that the only thing I'm really concerned about is hard-linked directories being treated as separate directories in the QRecall archive that, when restored, will take up a lot more space. As a fall-back, you should be able to dig into the TM package and recall whatever specific items you want directly in QRecall.
|
|
|
maxbraketorque wrote:On a tangential note, I'm wondering whether it's easier for QR to repair damage done to a few small files among a huge batch of files or whether it's easier to repair a small amount of damage to a single large file. No issues right now. Just thinking about potential future liabilities.
I love it when people think ahead about potential failure liabilities. I assume you're referring to archive data redundancy. That's implemented at the block level of the main data file, so the granularity of the archive content doesn't make it "easier" or "harder" to repair data. If any block in the file is damaged, there's a limited amount of correct data available to reconstruct it. However, the granularity of the archive does matter if the data can't be recovered. A single damaged block in a massive 10GB DMG file means that entire DMG is probably a lost cause, while a single damaged block in a document file means you've lost one document out of millions. That's the difference.
|
|
|
maxbraketorque wrote:Just wondering if it's feasible to rotate usage of the CPU cores to more evenly distribute heat production across the cores and keep max core temperatures down. My MacBookPro is getting fairly toasty during the initial backups of my external drives. QR seems to be favoring Core 1 and Core 2, with their temperatures consistently running in the mid-80C range while Core 3/4 are running in the mid-70C range.
What tasks get assigned to which CPU is completely outside QRecall's control. That's entirely the job of the Darwin kernel, and I know of no way to influence it. Also note that modern mobile CPUs often have one core that's more powerful, with auxiliary cores that are more efficient. So intensive tasks vs. light/periodic tasks are going to favor one core, or one type of core, over others.
|
|
|
Norbert Karls wrote:At some point there has to be actual data again, and then the rest of the operation should finish in a more timely manner.
That's the hope!
Timeout an action: ... is there an equivalent for the command line?
The equivalent would be to obtain the PID of the QRecallHelper process that gets started by the tool, then start a timer that sends it a SIGTERM after a while (à la (sleep 10800; kill $QRHELPERPID) &).
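Spelled out a little more completely (the exact process name and the 3-hour timeout are assumptions to adjust for your setup):

    QRHELPERPID=$(pgrep -x QRecallHelper)                        # helper launched by the command-line tool
    ( sleep 10800; kill -TERM "$QRHELPERPID" 2>/dev/null ) &     # send SIGTERM after 3 hours, if it's still running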
I can just configure an action once graphically and then, instead of composing the whole operation on the command line, run that action in the shell by name.
That's the "much easier" way.
Customers: what customers? You've been refusing to take money for upgrades for as long as I've known you, and that's about a full decade now.
Well, I really like my customers and I still want to build momentum. I have a new plan for 3.0 that will hopefully provide some subscription income, so wait for that.
speaking of staying afloat while completely reeling off topic: Dawn to Dusk isn't just you, is it?
It's largely me. I have contractors for a lot of tasks. I keep hoping to get enough regular revenue to hire some full-time engineering and support, but I haven't quite crested that milestone yet. There are other engineers, and there are disaster plans to go open-source if this COVID thing goes sideways...
|
|
|
QRecall can most certainly capture and restore a DMG file; it's just a file. People tend to use TM as an adjunct to QRecall. This is honestly the first time anyone has asked about getting meta and having one backup program back up the backup of another backup program.
|
|
|
maxbraketorque wrote:I have a few older Time Machine Backups.backupdb files on some HDDs attached to my "stationary" Mac. I'd like to backup these drives containing the Backups.backupdb files to my NAS, and I'm wondering whether QR can backup the Time Machine dbs and then properly restore the dbs to a future attached HDD. Based on what I've read so far, it appears that this should be no problem because QR seems to create a single monolithic file with its own internal structure, but I just wanted to verify.
I'm honestly not sure. I have no doubt QRecall can capture the Backups.backupdb package, but I'm scratching my head as to whether it would properly restore it. I say this because Apple added a special "hard-linked directory" feature to the HFS filesystem just for Time Machine. And while QRecall will properly capture and restore hard-linked files, I suspect hard-linked directories would just look like two separate directories. Since the only software that seems to use this feature is Time Machine, support was never added. I suspect you'd have better luck using asr or creating HFS+ disk images of the Time Machine backup volume. That, in theory, should preserve and restore the hard-linked directories correctly.
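For example (the device identifiers and paths here are placeholders; a block-level image is what preserves the volume's structure, including directory hard links):

    diskutil list                                           # find the Time Machine volume's device identifier
    hdiutil create -srcdevice /dev/disk2s2 ~/TMBackup.dmg   # block-level HFS+ image of the whole volume
    # later, to put it back onto a new drive (this erases the target):
    sudo asr restore --source ~/TMBackup.dmg --target /dev/disk3s2 --erase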
And I have one other question - In my trial observations of QR in action, during the first Capture I'm seeing large amounts of data going back and forth between my Mac and my NAS. What's happening when the data goes from the NAS to the Mac? Verification?
QRecall doesn't just copy files. It chops them into small chunks and adds those chunks to a database. At a minimum, each block of new data has to be checked against the corpus of data already captured to make sure it's not a duplicate. That requires at least one, and often several, queries. In subsequent captures, it also has to read the metadata of the previously captured file to determine what has changed. So there's a lot of back-and-forth traffic.