|
Note that the only thing I'm really concerned about is hard-linked directories being treated as separate directories in the QRecall archive that, when restored, will take up a lot more space. As a fall-back, you should be able to dig into the TM package and recall whatever specific items you want directly in QRecall.
|
 |
|
maxbraketorque wrote:On a tangential note, I'm wondering whether it's easier for QR to repair damage done to a few small files among a huge batch of files or whether it's easier to repair a small amount of damage to a single large file. No issues right now. Just thinking about potential future liabilities.
I love it when people think about potential failure liabilities. I assume you're referring to archive data redundancy. That's implemented at the block level of the main data file, so the granularity of the archive content doesn't make it "easier" or "harder" to repair data. If any block in the file is damaged, there's a limited amount of correct data available to reconstruct it. However, the granularity of the archive does matter if the data can't be recovered. A single damaged block in a massive 10GB DMG file means that entire DMG file is probably a lost cause, while a single damaged block in a document file means you've lost one document out of millions. That's the difference.
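To picture how block-level redundancy recovers a damaged block, here's a minimal single-parity sketch (this generic XOR scheme is illustrative only, not QRecall's actual redundancy format): one parity block per group of data blocks is enough to rebuild any one damaged block.

```python
# Generic XOR-parity sketch (not QRecall's actual scheme): one parity
# block per group of equal-sized data blocks recovers any single block.

def make_parity(blocks):
    """XOR all blocks together to form one parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover(blocks, parity, damaged_index):
    """Rebuild the block at damaged_index from the others plus parity."""
    rebuilt = bytearray(parity)
    for j, block in enumerate(blocks):
        if j == damaged_index:
            continue
        for i, b in enumerate(block):
            rebuilt[i] ^= b
    return bytes(rebuilt)
```

The trade-off the reply describes follows directly: the redundancy works on fixed blocks, so whether a repaired or unrepairable block hurts a lot or a little depends entirely on what file that block belonged to.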
|
 |
|
maxbraketorque wrote:Just wondering if it's feasible to rotate usage of the CPU cores to more evenly distribute heat production across the cores and keep max core temperatures down. My MacBook Pro is getting fairly toasty during the initial backups of my external drives. QR seems to be favoring Core 1 and Core 2, with their temperatures consistently running in the mid-80C range while Core 3/4 are running in the mid-70C range.
What tasks get assigned to which CPU is completely outside QRecall's control. That's entirely the job of the Darwin kernel, and I know of no way to influence it. Also note that modern mobile CPUs often have one core that's more powerful, with auxiliary cores that are more efficient. So intensive tasks vs. light/periodic tasks are going to favor one core, or one type of core, over others.
|
 |
|
Norbert Karls wrote:At some point there has to be actual data again, and then the rest of the operation should finish in a more timely manner.
That's the hope!
Timeout an action: ... is there an equivalent for the command line?
The equivalent would be to obtain the PID of the QRecallHelper process that gets started by the tool, then start a timer that will send a SIGTERM after a while (a la (sleep 10800; kill $QRHELPERPID) &).
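The same watchdog idea can be written as a small script. This is a hedged sketch: the function name is illustrative, and the command being limited would be whatever long-running tool invocation you're timing out.

```python
# Sketch of the (sleep N; kill $PID) idiom: run a command, send SIGTERM
# if it exceeds a time limit. The function name is illustrative.
import subprocess

def run_with_limit(cmd, limit_seconds):
    """Start cmd; return its exit status, terminating it at the deadline."""
    proc = subprocess.Popen(cmd)
    try:
        return proc.wait(timeout=limit_seconds)   # finished in time
    except subprocess.TimeoutExpired:
        proc.terminate()                          # SIGTERM, like `kill $PID`
        return proc.wait()
```

For an action that may legitimately run for hours, the limit would be something like 10800 seconds, matching the sleep in the one-liner above.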
I can just configure an action once graphically and then, instead of composing the whole operation on the command line, run that action in the shell by name.
That's the "much easier" way.
Customers: what customers? You've been refusing to take money for upgrades for as long as I've known you, and that's about a full decade now.
Well, I really like my customers and I still want to build momentum. I have a new plan for 3.0 that will hopefully provide some subscription income, so wait for that.
speaking of staying afloat while completely reeling off topic: Dawn to Dusk isn't just you, is it?
It's largely me. I have contractors for a lot of tasks. I keep hoping to get enough regular revenue to hire some full-time engineering and support staff, but I haven't quite crested that milestone yet. There are other engineers, and there are disaster plans to go open-source if this COVID thing goes sideways...
|
 |
|
QRecall can most certainly capture and restore a DMG file; it's just a file. People tend to use TM as an adjunct to QRecall. This is honestly the first time anyone has asked about getting meta and asking one backup program to back up the backup of another backup program.
|
 |
|
maxbraketorque wrote:I have a few older Time Machine Backups.backupdb files on some HDDs attached to my "stationary" Mac. I'd like to back up these drives containing the Backups.backupdb files to my NAS, and I'm wondering whether QR can back up the Time Machine dbs and then properly restore the dbs to a future attached HDD. Based on what I've read so far, it appears that this should be no problem because QR seems to create a single monolithic file with its own internal structure, but I just wanted to verify.
I'm honestly not sure. I have no doubt QRecall can capture the Backups.backupdb package, but I'm scratching my head as to whether it would properly restore it. I say this because Apple added a special "hard-linked directory" feature to the HFS filesystem just for Time Machine. And while QRecall will properly capture and restore hard-linked files, I suspect hard-linked directories would just look like two separate directories. The only software that seems to use this feature is Time Machine, so support was never added. I suspect you'd have better luck using asr or creating HFS+ disk images of the Time Machine backup volume. That, in theory, should preserve and restore the hard-linked directories correctly.
And I have one other question - In my trial observations of QR in action, during the first Capture I'm seeing large amounts of data going back and forth between my Mac and my NAS. What's happening when the data goes from the NAS to the Mac? Verification?
QRecall doesn't just copy files. It chops them into small chunks and adds those chunks to a database. At a minimum, each block of new data has to be checked against the corpus of data already captured to make sure it's not a duplicate. That requires at least one, and often several, queries. In subsequent captures, it has to read the metadata of the previously captured file to determine what has changed. So there's a lot of back-and-forth traffic happening.
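In code terms, the de-duplication step looks something like this minimal sketch. It assumes fixed-size chunks and an in-memory dictionary; QRecall's actual chunking, hashing, and database machinery are its own and considerably more sophisticated.

```python
# Minimal fixed-size chunk de-duplication sketch (illustrative names;
# not QRecall's implementation). Each chunk is stored once, keyed by hash.
import hashlib

class ChunkStore:
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.blocks = {}          # digest -> chunk bytes, stored once

    def add(self, data):
        """Split data into chunks; store only chunks not already present.
        Returns the list of digests that describes the file."""
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.blocks:      # the de-duplication query
                self.blocks[digest] = chunk
            digests.append(digest)
        return digests
```

Every `digest not in self.blocks` check stands in for the queries described above: over a network share, each of those lookups is a round trip between the Mac and the NAS.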
|
 |
|
Norbert, sorry to hear you've run into these problems. You have the misfortune of hitting a couple of QRecall deficiencies (most of which have already been addressed in future versions, just not the one you have now).

First, I'm surprised the compact action got killed for taking up too much RAM. Compact shouldn't use that much RAM; it builds a couple of large record indexes (a few hundred MB at most) and after that it mostly just reads and writes records to relocate them. I suppose there could be a memory leak in there that I'm unaware of, but that's the only explanation that jumps to mind.

Sadly, compact is (currently) one of those actions that can't be gracefully recovered from if it crashes. That's why the archive must be repaired afterwards. An interrupted compact can leave a lot of detritus in the archive's primary data file that the repair must deal with, and that's what the repair is running into.

The primary data file is composed of a sequence of records. Each record has a header, type, content, and a checksum. The repair simply starts at the beginning of the file and tries to read every record. If the next record is intact (header, type, content structure, and checksum all verify and appear to contain usable data), that record is added back to the archive and the repair moves to the next record. If the record is damaged, or appears suspect in any way, a failure is logged, the repair advances 8 bytes (all records are aligned to 8-byte boundaries), and it tries again. That's what you're seeing in the log: an attempt to read a record at every 8-byte offset in the file.

Now, what is supposed to happen is that the repair logs the first few failures, then stops logging failures until it finds the next valid record, where it logs a summary of how much data it had to skip over to get there. But that logic doesn't always work in the current version, and that's why you're getting GB of log output. So with that in mind,
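The scan-and-skip loop can be sketched as follows. The record layout here (big-endian length, payload, CRC32 trailer, padded to 8 bytes) is purely illustrative; QRecall's real record format is its own.

```python
# Hedged sketch of the repair scan: accept records whose checksum
# verifies, otherwise advance 8 bytes and try again. Layout illustrative.
import struct
import zlib

def pack_record(payload):
    """Build one illustrative record: length, payload, CRC32, 8-byte padding."""
    rec = struct.pack(">I", len(payload)) + payload + struct.pack(">I", zlib.crc32(payload))
    return rec + b"\0" * (-len(rec) % 8)

def scan_records(data):
    """Walk the file, keeping records that verify; on damage, advance 8
    bytes (records align to 8-byte boundaries) and retry."""
    good, skipped, offset = [], 0, 0
    while offset + 8 <= len(data):
        (length,) = struct.unpack_from(">I", data, offset)
        end = offset + 4 + length + 4
        if 0 < length and end <= len(data):
            payload = data[offset + 4 : offset + 4 + length]
            (crc,) = struct.unpack_from(">I", data, end - 4)
            if crc == zlib.crc32(payload):
                good.append(payload)
                offset = (end + 7) & ~7   # jump to the next aligned record
                continue
        skipped += 8                      # damaged or suspect: note and move on
        offset += 8
    return good, skipped
```

Each failed 8-byte probe is one of those log lines; a long stretch of detritus from an interrupted compact produces thousands of them, which is exactly the flood being described.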
Is this operation likely to finish at all or does the archive seem to be absolutely broken? Why/why not?
A lot of work has gone into making the repair as robust as possible, so if you can read the file it should be repairable. First, I would suggest simply piping the log to /dev/null. Second, since you're working on a copy of the archive, there's no reason to use the --recover option; you're just creating a lot more work for the repair (and approximately doubling the amount of time it will take to finish). BTW, it's nice to see someone using the command line tools. My suggestion is to start a simple repair, ignoring the log, and see if it finishes. (I fully expect it to.) Having said that, I'm still not sure why the repair would be using an excessive amount of memory. But if you can get the repair across the finish line, the archive should be usable again (and largely intact).
Is logging expensive? If so, can I give a command line argument asking that those many »Details at …« lines not be logged at all?
Logging isn't overly expensive, but in your case QRecall is logging too much. It's simply a bug.
Is it normal that those logged Details are only eight bytes in size, and that there are this many of them?
Yes. No.
I found an »available memory« option in the Advanced preferences and set it to 8192. Will this help future runs? (Right now, QRecallHelper seems to keep its resident memory below 8G)
Not much. QRecall already tries to keep its memory usage to what's physically installed (up to 8GB). If you have that much memory, performance will be better if you leave the memory limits alone. That option is really only for situations where you want to keep QRecall from trying to use all of the physical RAM, which might compete with other long-running processes (like database servers or other work you're trying to get done).
Is it possible to run a Compact operation in an extra-safe way, like when it's running on a network share that might disappear at any time?
That exact situation is impossible to protect against. It's a catch-22: once the volume is unreachable, there's no way to take any corrective action on the data. The only way to protect against this would be to make a copy of the entire archive and compact that copy; that would require twice the storage and actually perform more I/O than the compact does.

Having said that, I'll tease that QRecall 3.0 now does this on filesystems that support cloning (i.e. APFS). The archive's primary data file is first cloned, and the clone of the archive is compacted. If anything goes wrong, the original version of the archive is still there and it simply reverts back to the previous file.

Finally, let me close with a suggestion for once the archive is repaired and you're ready to try compacting it again. I suggest setting a time limit for the compact by adding a schedule condition that will "Stop after 3 (or so) hours". The compact is incremental and, if canceled, will pick up where it left off the next time it starts. So if you're concerned that the compact will run out of VM, simply prevent it from running too long and start it again the next day.
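The incremental, resumable behavior can be pictured with a small sketch. Names and the per-item "work" here are illustrative, not QRecall's code; the point is only that progress is persisted, so stopping at a deadline costs nothing but time.

```python
# Hedged sketch of an interruptible, resumable pass, mirroring the
# "stop after N hours, resume next run" behavior described above.
import time

def compact_pass(items, state, deadline_seconds, now=time.monotonic):
    """Process items from state['position'] until finished or out of time."""
    stop_at = now() + deadline_seconds
    pos = state.get("position", 0)
    while pos < len(items) and now() < stop_at:
        items[pos] = items[pos].strip()   # stand-in for relocating one record
        pos += 1
    state["position"] = pos               # persist progress for the next run
    return pos >= len(items)              # True only when the pass completed
```

A scheduler would call `compact_pass` each day with the same `state`; each run picks up exactly where the previous one stopped.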
Please do stay at home and healthy, we can ill afford to lose you
And the same to you and yours! I don't know what I'd do if I didn't have any customers. 
|
 |
|
James Coffey wrote:I am monitoring 1 folder with nested sub folders
That will require only one change monitor. It doesn't matter how many subfolders, or how deep, that folder hierarchy is. It could contain thousands of subfolders; macOS doesn't care.
|
 |
|
James Coffey wrote:I set the Archive to my archive, Selected a folder I want to have files captured, and scheduled it when captured item changes.
That should do the trick. The event schedule will trigger when anything in that capture folder changes, which includes saving a new version of a document.
Do I understand how it works?
It appears you do.
For Application launches event, I selected the desired folder and application. I assume this will result in archiving any changed document for the specified folder when the application is opened.
That event will cause an action to run whenever an application is launched or quits. If you're interested in capturing documents, you would normally trigger the capture when the application quits, since it's assumed that all of your documents have been saved and are ready to capture.
This seems to work. Am I missing anything?
I will mention a caveat. The "capture when items change" event uses the macOS filesystem change event service to watch for any changes within any of the folders listed in the capture action. This requires installing one change monitor for each captured item. But there is a limited number of monitors the system will allow an application to install at a time. No hard limit is specified in the documentation, which describes it simply as "a few." I've had no problem monitoring five or six folders for changes at once. However, users have reported that trying to monitor dozens of different folders simultaneously does not work; macOS will simply ignore some of the requests. Just so you know.
|
 |
|
Steven Haver wrote:It's tough only because half the year I have 300Mbps internet and the other half of the year I only have 5Mbps. In the first case, doing a fresh install via the internet is a breeze. In the second case, it's an overnight ordeal at best. But the future is bright.
You can prepare for this in advance in several ways. One is to set up that external emergency boot volume with a copy of QRecall on it, and also download the Catalina installer. The downloaded installer is self-contained and will be ready to go when you need it. An even more surgical approach is to download the Catalina installer, then use the installer's createinstallmedia command-line tool to build a stand-alone Catalina installer volume (again, a modest-sized USB thumb drive is perfect for this). When disaster strikes, simply boot from the stick and start the reinstall.
Can QRecall capture two different volumes to the same archive? If so, are there any obvious advantages or disadvantage to doing it that way? Would it be better to keep them as two separate archives?
QRecall lets you do whatever you want. You can capture multiple volumes, from multiple systems, to the same archive, or split up your captures into separate archives, in whatever combination makes sense. The advantage of a single archive is that you take full advantage of data de-duplication across all files and volumes. The disadvantage is that the archive can get pretty big, which makes verify, compact, and similar actions more time consuming.
|
 |
|
Steven Haver wrote:Hello!
Greetings! That's a lot of questions, but I'll see what I can do...
As far as I can tell, there's not an easy way to create/restore static images on an APFS volume.
There are ways, but I wouldn't call them easy. Apple's asr command-line utility was extensively modified to make and transfer copies of APFS volumes and containers. And I believe tools like Carbon Copy Cloner have some of this functionality baked in. I sometimes use these during testing (to quickly create a freshly installed operating system, for example), but these days I'm not a fan of trying to preserve copies of your system volume for later restoration (I'll explain later).
I see that APFS supports snapshots but they seem limited to me: 1) the snapshot is stored on the local disk, so in the event of failure that snapshot doesn't exist anywhere else. 2) Is there a risk that some malware finds a way to escalate privileges and then mark itself as part of a previous snapshot? Perhaps that is a baseless concern. 3) The OS seems to have all the control over snapshots, so even if I make what I consider to be the perfect snapshot, OS X may at some point in the future decide that snapshot is old and delete it (keeping, perhaps, newer ones that it made itself but are not useful to me).
1) Snapshots are not backups.

2) This is impossible. Ignoring how the malware would get these privileges in the first place, snapshots are read-only. There is nothing in APFS that allows code to modify a snapshot once it's been taken, so no, malware can't use a snapshot to "fly under the radar," as it were.

3) Again, snapshots are not backups. Snapshots are used for versioning and are a tool for making clean backups, but they are not backups, nor are they substitutes for backups. Backup software (including QRecall) uses snapshots to "freeze" the state of the volume so that it can leisurely back up all of the data as it existed at a particular instant in time. macOS also uses multiple snapshots on laptops, when mobile, to preserve the state of the volume at different times so that, when the laptop is later reunited with its backup volume, all of those snapshots can be transferred to the backup. But then all of the snapshots are discarded. So think of a snapshot as a temporary vessel for a backup: it isn't a backup, and it isn't persistent.
One of my concerns is for my software licenses. I do audio work, and I own quite a few audio plugins. Many of them are registered by a key and often that key is single use.
QRecall (like all competent backup programs) will preserve, and later restore, all of the files on your system volume. If your software license keys are based on the data and metadata of those files, you should be fine. However, some licensing enforcement schemes use other information (like taking a fingerprint of your hard drive or saving the inode of a file), and that's beyond the scope of backup software. I remember once adding memory to my computer, only to find that Photoshop wouldn't launch anymore because it was convinced I had copied it to another computer. QRecall can't protect against that.
My other concern is about backing up while the OS is running. Are there any risks for doing live captures of Catalina?
Returning to snapshots, Catalina, system volumes, and QRecall. Catalina splits your startup volume into two different volumes: a read-only System volume containing the entire core operating system, and a read-write Data volume that contains all of your data and anything that changes (preferences, cache, history, ...). This fact is hidden from the casual user (and most software) with some sleight of hand that makes these two volumes appear to be a single volume. The idea is that malware can't modify or corrupt any system file because the volume it's on can't be modified.

Because the System volume is read-only and can only be created by Apple's macOS Installer, QRecall no longer backs up the System volume. It only captures the read-write Data volume. Following some catastrophe, you can restore your entire system by restoring all of the data from your archive to a (newly created APFS) volume, then running the macOS Installer to turn that volume into a bootable System+Data pair.

Having a minimal system and a copy of QRecall on a thumb drive makes this simple: boot from the external USB thumb drive, use Disk Utility to create/format an empty APFS volume, open the QRecall archive and restore the entire startup volume, run the macOS Installer, and you're back in business. (Note: if your archive is stored on a bootable hard drive, you can simply install an emergency copy of QRecall and the OS on the same volume.)

The advantages here are: (a) minimal archive size, because QRecall isn't backing up system software it can't restore on its own anyway, and (b) a clean operating system is always reinstalled from a safe, reliable source.
Would there be any benefit to making a bootable clone (with CCC or Super Duper or something; I've never used either) and running the initial QRecall capture from the clone so that the internal startup volume is not active at the time?
No advantage, because of snapshots. Again, every QRecall capture starts by making a temporary snapshot of your volume, and then it captures that snapshot. And in closing, I'm not a fan of partitions ... they're soooo 1990's. All modern filesystems (ZFS, APFS, ...) treat a "volume" as a fluid, flexible entity that can dynamically resize itself and span multiple physical devices. Making partitions just makes your life harder. That's my opinion, at least.
|
 |
|
Hello! With any luck, version 3 should be out by this summer. There will be plenty of notice because we usually run several months of beta testing, which you're welcome to participate in, before making a final release. There are no plans to change the licensing for version 3. It will be a free upgrade for all version 2 users. A new website is also in development, which is why we've been lazy about updating the current one. I hope that answers your questions!
|
 |
|
gipiy wrote:uncheck the "Unmount volumes mounted by actions" scheduler option on the Mini doesn't seem to work
If you've unchecked this option, but the volume continues to be unmounted at the end of an action, please send a diagnostic report (QRecall > Help > Send Report) and we'll look into it.
|
 |
|
Patterns were designed to be flexible, but that also makes them complicated. Since Daten is the volume, the glob exclude pattern would be:

/CaptureOne Fotobibliothek/_LEGACY Christian Aperture Fotos

Now if you wanted to get fancy, the following glob pattern would exclude every folder inside the CaptureOne Fotobibliothek folder that begins with the word "_LEGACY":

/CaptureOne Fotobibliothek/_LEGACY*

Now you can stop excluding folders individually inside that one folder. Just start the folder name with "_LEGACY" and it gets excluded automatically.
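The trailing-wildcard behavior can be demonstrated with Python's fnmatch as a hedged analogy (QRecall's own pattern syntax differs in details such as path anchoring, and the second folder name below is an invented example):

```python
# Analogy using Python's fnmatch: a trailing "*" excludes every folder
# whose name starts with "_LEGACY". The "2018 Scans" path is invented.
from fnmatch import fnmatch

pattern = "/CaptureOne Fotobibliothek/_LEGACY*"

paths = [
    "/CaptureOne Fotobibliothek/_LEGACY Christian Aperture Fotos",
    "/CaptureOne Fotobibliothek/_LEGACY 2018 Scans",
    "/CaptureOne Fotobibliothek/Current Projects",
]

excluded = [p for p in paths if fnmatch(p, pattern)]
```

Only the two `_LEGACY`-prefixed folders match the pattern; everything else in the library continues to be captured.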
|
 |
|
I suspect there's a bug in the exclusion logic. When Mojave added snapshots and Catalina added split system/data volumes, the job of trying to figure out whether an item on a snapshot of a data volume mounted on a system volume is the same item as a bookmark on the startup volume got ... well ... complicated. The exclusion logic in QRecall 3.0 has been completely reengineered, so I probably won't try to address this in 2.x. As a workaround, have you tried excluding the item using capture preferences (on the item) or using an exclusion pattern (in the archive settings)? And, as an aside, the capture decisions that get logged are only for items that were previously captured in the archive, when QRecall is trying to decide whether or not to recapture them. (Maybe I should rename it "log re-capture decisions"?) Since you deleted the item in the archive, there was no "decision" to be made.
|
 |
|