Norbert,

Sorry to hear you've run into these problems. You have the misfortune of hitting a couple of QRecall deficiencies (most of which have already been addressed in future versions, just not the one you have now).

First, I'm surprised the compact action got killed for taking up too much RAM. Compact shouldn't use that much RAM; it builds a couple of large record indexes (a few hundred MB at most) and after that it mostly just reads and writes records to relocate them. I suppose there could be a memory leak in there that I'm unaware of, but that's the only explanation that jumps to mind.

Sadly, compact is (currently) one of those actions that can't be gracefully recovered from if it crashes. That's why the archive must be repaired afterwards. An interrupted compact can leave a lot of detritus in the archive's primary data file that the repair must deal with, and that's what the repair is running into.

The primary data file is composed of a sequence of records. Each record has a header, type, content, and a checksum. The repair simply starts at the beginning of the file and tries to read every record. If the next record is intact (header, type, content structure, and checksum all verify and appear to contain usable data), that record is added back to the archive and the repair moves to the next record. If the record is damaged, or appears to be suspect in any way, a failure is logged, the repair advances 8 bytes (all records are aligned to 8-byte boundaries), and it tries again. That's what you're seeing in the log: an attempt to read a record at every 8-byte offset in the file.

Now what is supposed to happen is that the repair logs the first few failures, then stops logging failures until it finds the next valid record, where it logs a summary of how much data it had to skip over to get there. But that logic doesn't always work in the current version, and that's why you're getting gigabytes of log output. So with that in mind,
Is this operation likely to finish at all or does the archive seem to be absolutely broken? Why/why not?
A lot of work has gone into making the repair as robust as possible, so if you can read the file, it should be repairable. First, I would suggest simply piping the log output to /dev/null. Second, since you're working on a copy of the archive there's no reason to use the --recover option; you're just creating a lot more work for the repair (and approximately doubling the amount of time it will take to finish). BTW, it's nice to see someone using the command line tools. My suggestion is to start a simple repair, ignoring the log, and see if it finishes. (I fully expect it to.) Having said that, I'm still not sure why the repair would be using an excessive amount of memory. But if you can get the repair across the finish line, the archive should be usable again (and largely intact).
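Something along these lines (the archive path is hypothetical, and the exact tool and verb spelling may differ in your version of the command line tools; the point is simply to drop --recover and discard the log):

    qrecall repair "/Volumes/Backup/Archive Copy.quanta" > /dev/null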
Is logging expensive? If so, can I give a command line argument asking those many many »Details at?« lines to not be logged at all?
Logging isn't overly expensive, but in your case QRecall is logging too much. It's simply a bug.
Is it normal that those logged Details are only eight Bytes in size, and that there are this many of them?
Yes and no. The eight-byte increments are normal (that's the record alignment); the sheer number of logged entries is not.
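To visualize what the repair is doing with those 8-byte steps, here's a conceptual sketch of the scan loop in shell. This is not QRecall's actual code: read_record is a stub standing in for the real parser that validates a record's header, type, content structure, and checksum, and the archive file name is hypothetical.

    #!/bin/sh
    # Conceptual sketch only -- not QRecall's implementation.
    # The real parser prints the record length on success and fails
    # on a damaged or suspect record; this stub always fails.
    read_record() { return 1; }

    ARCHIVE="archive.data"                  # hypothetical primary data file
    offset=0
    size=$(stat -f%z "$ARCHIVE")            # file size in bytes (macOS stat)
    while [ "$offset" -lt "$size" ]; do
        if length=$(read_record "$offset"); then
            offset=$((offset + length))     # intact record: keep it, move on
        else
            offset=$((offset + 8))          # damaged: try the next 8-byte boundary
        fi
    done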
I found an »available memory« option in the Advanced preferences and set it to 8192. Will this help future runs? (Right now, QRecallHelper seems to keep its resident memory below 8G)
Not much. QRecall already tries to keep its memory usage within what's physically installed (up to 8 GB). If you have that much memory, performance will be better if you leave the memory limits alone. That option is really only for situations where you want to keep QRecall from trying to use all of the physical RAM, which might compete with other long-running processes (like database servers) or other work you're trying to get done.
Is it possible to ask a Compact operation in an extra safe way, like when it was running on a network share that might disappear anytime?
That exact situation is impossible to protect against. It's a catch-22: once the volume is unreachable, there's no way to take any corrective action on the data. The only way to protect against this would be to make a copy of the entire archive and compact the copy; that would require twice the storage and actually perform more I/O than the compact does. Having said that, I'll tease that QRecall 3.0 now does this on filesystems that support cloning (i.e. APFS). The archive's primary data file is first cloned, and the clone of the archive is compacted. If anything goes wrong, the original version of the archive is still there and QRecall simply reverts to the previous file. (There's a sketch of the cloning idea below.)

Finally, let me close with a suggestion. Once the archive is repaired and you're ready to try compacting it again, I suggest setting a time limit for the compact by adding a schedule condition that will "Stop after 3 (or so) hours". The compact is incremental and, if canceled, will pick up where it left off the next time it starts. So if you're concerned that the compact will run out of VM, simply prevent it from running too long and start it again the next day.
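To illustrate the cloning trick teased above (this is the general APFS technique, not QRecall 3.0's actual implementation, and the file names are hypothetical): on APFS, cp -c copies a file with clonefile(), so the "copy" is instantaneous and shares storage with the original until their blocks diverge.

    cp -c archive.data archive.compacting    # instant copy-on-write clone
    # ...compact archive.compacting; if anything fails, just delete it...
    mv archive.compacting archive.data       # success: swap the clone in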
Please do stay at home and healthy, we can ill afford to lose you
And the same to you and yours! I don't know what I'd do if I didn't have any customers.
----
James Coffey wrote:I am monitoring 1 folder with nested sub folders
That will require only one change monitor. It doesn't matter how many subfolders, or how deep, that folder hierarchy is. It could contain thousands of subfolders; macOS doesn't care.
----
James Coffey wrote:I set the Archive to my archive, Selected a folder I want to have files captured, and scheduled it when captured item changes.
That should do the trick. The event schedule will trigger when anything in that capture folder changes, which includes saving a new version of a document.
Do I understand how it works?
It appears you do.
For Application launches event, I selected the desired folder and application. I assume this will result in archiving any changed document for the specified folder when the application is opened.
That event will cause an action to run whenever an application is launched or quits. If you're interested in capturing documents, you would normally trigger the capture when the application quits, since it's assumed that all of your documents have been saved and are ready to capture.
This seems to work. Am I missing anything?
I will mention a caveat. The "capture when items change" event uses the macOS filesystem change event service to watch for changes within any of the folders listed in the capture action. This requires installing one change monitor for each captured item, but the system only allows an application to install a limited number of monitors at a time. No hard limit is specified in the documentation, which describes the limit simply as "a few." I've had no problem monitoring five or six folders for changes at once. However, users have reported that trying to monitor dozens of different folders simultaneously does not work; macOS will simply ignore some of the requests. Just so you know.
----
Steven Haver wrote:It's tough only because half the year I have 300Mbps internet and the other half of the year I only have 5Mbps. In the first case, doing a fresh install via the internet is a breeze. In the second case, it's an overnight ordeal at best. But the future is bright.
You can prepare for this in advance in several ways. One is to set up that external emergency boot volume with a copy of QRecall on it, and also download the Catalina installer. The downloaded installer is self-contained and will be ready to go when you need it. An even more surgical approach is to download the Catalina installer, then use the command-line tool to build a stand-alone Catalina installer volume (again, a modest-sized USB thumb drive is perfect for this). When disaster strikes, simply boot from the stick and start the reinstall.
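The command-line tool referred to above is createinstallmedia, which Apple bundles inside the downloaded installer app. Assuming the thumb drive is mounted at /Volumes/Installer (note: it will be erased):

    sudo "/Applications/Install macOS Catalina.app/Contents/Resources/createinstallmedia" \
        --volume /Volumes/Installer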
Can QRecall capture two different volumes to the same archive? If so, are there any obvious advantages or disadvantage to doing it that way? Would it be better to keep them as two separate archives?
QRecall lets you do whatever you want. You can capture multiple volumes, from multiple systems, to the same archive, or split your captures up into separate archives, in whatever combination makes sense. The advantage of a single archive is that you take full advantage of data de-duplication across all files and volumes. The disadvantage is that the archive can get pretty big, which makes verify, compact, and such more time consuming.
----
Steven Haver wrote:Hello!
Greetings! That's a lot of questions, but I'll see what I can do...
As far as I can tell, there's not an easy way to create/restore static images on an APFS volume.
There are ways, but I wouldn't call them easy. Apple's asr command-line utility was extensively modified to make and transfer copies of APFS volumes and containers. And I believe tools like Carbon Copy Cloner have some of this functionality baked in. I sometimes use these during testing (to quickly create a freshly installed operating system, for example), but these days I'm not a fan of trying to preserve copies of your system volume for later restoration (I'll explain later).
I see that APFS supports snapshots but they seem limited to me: 1) the snapshot is stored on the local disk, so in the event of failure that snapshot doesn't exist anywhere else. 2) Is there a risk that some malware finds a way to escalate privileges and then mark itself as part of a previous snapshot? Perhaps that is a baseless concern. 3) The OS seems to have all the control over snapshots, so even if I make what I consider to be the perfect snapshot, OS X may at some point in the future decide that snapshot is old and delete it (keeping, perhaps, newer ones that it made itself but are not useful to me).
1) Snapshots are not backups.

2) This is impossible. Ignoring how the malware would get these privileges in the first place, snapshots are read-only. There is nothing in APFS that allows code to modify a snapshot once it's been taken, so no, malware can't use a snapshot to "fly under the radar," as it were.

3) Again, snapshots are not backups. Snapshots are used for versioning, and are a tool for making clean backups, but they are not backups nor are they substitutes for backups. Backup software (including QRecall) uses snapshots to "freeze" the state of the volume so that it can leisurely back up all of the data as it existed at a particular instant in time. macOS also uses multiple snapshots on laptops, when mobile, to preserve the state of the volume at different times so that, when the laptop is later reunited with its backup volume, all of those snapshots can be transferred to the backup. But then all of the snapshots are discarded. So think of a snapshot as a temporary vessel for a backup: it isn't a backup and it isn't persistent.
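Incidentally, you can see these temporary local snapshots yourself with Apple's tmutil tool; this lists the local APFS snapshots on the startup volume:

    tmutil listlocalsnapshots /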
One of my concerns is for my software licenses. I do audio work, and I own quite a few audio plugins. Many of them are registered by a key and often that key is single use.
QRecall (like all competent backup programs) will preserve, and later restore, all of the files on your system volume. If your software license keys are based on the data and metadata of those files, you should be fine. However, some licensing enforcement schemes use other information (like taking a fingerprint of your hard drive or saving the inode of a file), and that's beyond the scope of backup software. I remember once adding memory to my computer, only to find that Photoshop wouldn't launch anymore because it was convinced I had copied it to another computer. QRecall can't protect against that.
My other concern is about backing up while the OS is running. Are there any risks for doing live captures of Catalina?
Returning to snapshots, Catalina, system volumes, and QRecall. Catalina splits your startup volume into two different volumes: a read-only System volume containing the entire core operating system, and a read-write Data volume that contains all of your data and anything that changes (preferences, caches, history, ...). This fact is hidden from the casual user (and most software) with some sleight-of-hand that makes these two volumes appear to be a single volume. The idea is that malware can't modify or corrupt any system file because the volume it's on can't be modified.

Because the System volume is read-only and can only be created by Apple's macOS Installer, QRecall no longer backs up the System volume. It only captures the read-write Data volume. Following some catastrophe, you can restore your entire system by restoring all of the data from your archive to a (newly created APFS) volume, then running the macOS Installer to turn that volume into a bootable System+Data pair. Having a minimal system and a copy of QRecall on a thumb drive makes this simple: boot from the external USB thumb drive, use Disk Utility to create/format an empty APFS volume, open the QRecall archive and restore the entire startup volume, run the macOS Installer, and you're back in business. (Note: if your archive is stored on a bootable hard drive, you can simply install an emergency copy of QRecall and the OS on the same volume.)

The advantages here are: (a) minimal archive size, because QRecall isn't backing up system software it can't restore on its own anyway, and (b) a clean operating system is always reinstalled from a safe, reliable source.
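For what it's worth, the create-an-empty-APFS-volume step also has a command-line equivalent if you prefer Terminal over Disk Utility. A hypothetical sketch; disk1 is just an example container, so check yours with diskutil list first:

    diskutil apfs addVolume disk1 APFS "Macintosh HD"   # new volume in container disk1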
Would there be any benefit to making a bootable clone (with CCC or Super Duper or something? I've never used either) and running the initial QRecall capture from the clone so that the internal start up volume is not active at the time?
No advantage, because of snapshots. Again, every QRecall capture starts by making a temporary snapshot of your volume, and then it captures that snapshot. And in closing, I'm not a fan of partitions ... they're soooo 1990's. All modern filesystems (ZFS, APFS, ...) treat a "volume" as a fluid, flexible entity that can dynamically resize itself and span multiple physical devices. Making partitions just makes your life harder. That's my opinion, at least.
----
Hello! With any luck, version 3 should be out by this summer. There will be plenty of notice because we usually run several months of beta testing, which you're welcome to participate in, before making a final release. There are no plans to change the licensing for version 3. It will be a free upgrade for all version 2 users. A new website is also in development, which is why we've been lazy about updating the current one. I hope that answers your questions!
----
gipiy wrote:uncheck the "Unmount volumes mounted by actions" scheduler option on the Mini doesn't seem to work
If you've unchecked this option, but the volume continues to be unmounted at the end of an action, please send a diagnostic report (QRecall > Help > Send Report) and we'll look into it.
----
Patterns were designed to be flexible, but that also makes them complicated. Since Daten is the volume, the glob exclude pattern would be

    /CaptureOne Fotobibliothek/_LEGACY Christian Aperture Fotos

Now if you wanted to get fancy, the following glob pattern would exclude every folder inside the CaptureOne Fotobibliothek folder that begins with the word "_LEGACY":

    /CaptureOne Fotobibliothek/_LEGACY*

Now you can stop excluding folders individually inside that one folder. Just start the folder name with "_LEGACY" and it gets excluded automatically.
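To illustrate the wildcard with some hypothetical folder names:

    /CaptureOne Fotobibliothek/_LEGACY Christian Aperture Fotos   -> excluded
    /CaptureOne Fotobibliothek/_LEGACY 2018 Raws                  -> excluded
    /CaptureOne Fotobibliothek/Current Projects                   -> still captured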
----
I suspect there's a bug in the exclusion logic. When Mojave added snapshots and Catalina added split system/data volumes, the job of trying to figure out if an item on a snapshot of a data volume mounted on a system volume is the same item as a bookmark on the startup volume got ... well ... complicated. The exclusion logic in QRecall 3.0 has been completely reengineered, so I probably won't try to address this in 2.x. As a workaround, have you tried excluding the item using capture preferences (on the item) or using an exclusion pattern (in the archive settings)? And, as an aside, the capture decisions that get logged are only for items that were previously captured in the archive, when QRecall is trying to decide whether or not to recapture them. (Maybe I should rename it "log re-capture decisions"?) Since you deleted the item in the archive, there was no "decision" to be made.
----
Prion,

I'm assuming that the USB volume on the Mini is physically connected all the time, but not mounted all of the time (for whatever reason). The QRecall scheduler running on the Mini should auto-mount the local USB drive before any scheduled action is started. And the QRecall scheduler running on the laptop should auto-login to a remote server volume before starting its capture. However, if the USB drive on the Mini (the server) isn't mounted, a remote QRecall (the laptop) can't force that volume to mount on the server. So that's the bad news.

The good news is that there are still potential solutions. Since QRecall on the Mini should auto-mount any physically connected device before an action runs, you could simply schedule an action that runs on the Mini to occur at about the same time as the backup from the laptop. For example, if the laptop capture runs every night at 20:00, you could schedule a merge action on the Mini to run every night at 19:59. (A merge with nothing to do will complete within a few seconds, but has the side effect that the volume it mounts is still mounted a minute later when the laptop needs it.) For that solution to work, you'll want to uncheck the "Unmount volumes mounted by actions" scheduler option on the Mini.

A more "elegant" (read "nerdy") solution would be to attach a prolog script to the capture action that uses ssh to execute a script on the Mini that mounts the volume (see the sketch below). This would require a script on the server to identify the device that the volume is associated with and use the diskutil command to mount it. You'd also have to set up ssh to connect to the Mini without a password (by generating and installing a public/private key pair).
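Here's a minimal sketch of that prolog script, assuming passwordless ssh is already set up; the user, hostname, and disk identifier are hypothetical (run diskutil list on the Mini to find the real identifier):

    #!/bin/sh
    # Ask the Mini to mount the USB backup volume before the capture runs.
    ssh backup@mini.local 'diskutil mount disk2s2'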
----
A security update on our server inadvertently broke our diagnostic report server. Some diagnostic reports (Help > Send Report, in the QRecall application) sent in the past week were successfully uploaded to our report server, but failed to be forwarded to our problem tracking database and have been lost forever. So if you sent a report during the last week of January or the first week of February, 2020, and have not gotten a response, either send another report or contact support about your issue. We apologize for any inconvenience.
----
Hanno,

QRecall only captures what's on the physical drive. So placeholders or links to items in the cloud (or even just on a different volume) get captured as-is; QRecall doesn't go out and try to read what that item refers to. In this situation the cloud copy is really your "backup," and all reputable cloud storage providers have their own backup mechanism.
----
What you're describing sounds like a rolling merge. A rolling merge is a merge action that groups older layers into time periods (days, weeks, months, years) and, as the layers age, automatically merges those groups into single layers. In your example, a rolling merge configured something like this:
Keep most recent: 9 months
Followed by:
    0 day layers
    0 week layers
    0 fortnight layers
    12 month layers
    1 year layer
would do approximately what you're asking. When you run this rolling merge action, any layers within the past 9 months are preserved. Layers in groups of months, going back 12 more months, are merged into single layers (one layer per month). Any layers in the previous whole year are merged into a single layer, and then anything before that is merged into its own layer. So if you ran this today (January 2020), the merge would:

- Leave whatever layers you'd captured since April 1, 2019 untouched (that's the "keep" part)
- Merge all layers in March 2019 into a single layer
- Merge all layers in February 2019 into a single layer
- Merge all layers in January 2019 into a single layer
- ...
- Merge all layers in May 2018 into a single layer
- Merge all layers in April 2018 into a single layer
- Merge all layers between April 2017 and March 2018 into a single layer
- Merge all layers before April 2017 into a single layer

The beauty of the rolling merge is that it "rolls." You can run this action twice a day if you want, but it will only do something if there are multiple layers within a group. For this action, it wouldn't do anything more until you ran it again in February. When this action runs again in February 2020, there's now a group of layers in April 2019 that get merged into a single layer. And when it's run in October of 2020, all of the single-month layers between January 2018 and December 2018 now fall into the year tier and get merged together.

The new rolling merge action editor has a really cool (if I do say so myself) animation that will let you preview the effect of the rolling merge as you edit it. And there's always QRecall Help > Guide > Actions > Layers > Rolling Merge. Post again if you have any more questions.
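As a footnote to the date arithmetic above, you can check the "keep" boundary yourself with the BSD date command that ships with macOS; run in January 2020, this prints April 2019:

    date -v-9m +"%B %Y"    # the month 9 months ago, i.e. the start of the keep window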
----
Welcome aboard! That warning is simply letting you know that not all of your changes were captured by that layer. So if you had five items that changed, and the capture was canceled after only three of them were captured, and you later rewind to that layer and restore it, you'll have two items missing or they'll be older (previously captured) versions; in other words, not the versions that existed when that layer was captured. When you start the next capture, QRecall will (always!) pick up where it left off, capturing the two items that didn't get captured in the previous layer. And if you want to be tidy, merging those two layers will result in a layer that is complete and up-to-date, which effectively nullifies the warning you received on the first layer.

To recall an item (or items) from the archive, you can simply drag the item from the archive back to your hard drive, like a Finder copy. The "Restore" command does much the same, but instead of you deciding where the recalled copy goes, you simply select the items in the archive; QRecall finds the original items and replaces them with the restored versions.

I hope that helps. Please post any additional questions you have or send them to support@qrecall.com
----
This sounds like it's really stuck. If it's still running, please send a diagnostic report first: open the QRecall app and choose Help > Send Report.... The diagnostic report will sample all running QRecall components, which will help identify exactly where it's stuck. Afterwards, go ahead and kill the process and try the repair again. If the second try gets stuck too, you might have a stale file-locking semaphore on the file server. There's a description of file locking / access problems in the help (Help > QRecall Help > Guide > Trouble Shooting > Problems > Can't Open Archive). TL;DR: Restart your system AND your file server and try again. Please let us know what worked (or didn't).