Message |
|
Things can get a little weird when your home folder isn't on the startup volume. However, in this case I can't see any reason why your setup shouldn't have worked. I'd suggest we start by getting a diagnostic report (in the QRecall app, choose Help > Send Report). That will allow us to examine the actual capture and exclude paths involved. I suspect we might send you a test version of QRecall to try.
|
|
|
LeviTaylor wrote:Regarding the forum's transition to HTTPS, when can users expect this transition to be completed, ensuring a more secure browsing experience for all?
It's already finished.
|
|
|
LeviTaylor wrote:Can cycling CPU core usage help even out temperatures?
Not really. Most modern CPUs are multi-core, which means that all of the cores are on the same chip. So moving work around to different parts of the same chip isn't going to change the energy expended by that chip. It's a moot point anyway, as regular programs have absolutely no control over this whatsoever. All task and CPU switching is managed by the kernel, and there are precious few influences over that.
|
|
|
Darryl, Did you, by chance, "test" the stack feature by recovering the stack to a new archive? If you did, you've run the risk of reassigning the archive identifier of the original archive, so the stack now belongs to the archive you recovered, not the original. There's a long discussion about this in the help: QRecall Help > Guide > Stacks > Recover an Archive > Archive Doppelgängers.

If you still have the recovered archive, use the steps in the "Archive Doppelgängers" sidebar to exchange the identifiers. If you don't, you can recover the stack again (just the empty archive; don't transfer any slices back), swap the identifier, and then discard the temporary archive. But you can only do this if you haven't performed any slice transfers from the recovered archive to the stack. In other words, you didn't recover an archive from the stack, perform some captures, upload those slices, and then say "Hey, that seems to work, I'm going back to the old archive now."

If you did do the latter, or anything similar, then the safe solution is to delete your archive and recover the entire archive from the stack. And send a diagnostic report (Help > Send Report) just so we can review your log to make sure something else isn't going on.
|
|
|
But this did get me thinking. It shouldn't be too hard to put a throttle on the log output so that if a process is trying to log a ridiculous amount of data, say more than 1,000,000 messages an hour, it can simply shut the log output off. I'll put that on the wish list for 3.x.
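For illustration, a throttle like that could work along these lines. This is a minimal Python sketch of the general idea only; the class name, budget, and window are assumptions, not QRecall's actual code:

```python
import time

class LogThrottle:
    """Mute logging once a process exceeds a message budget per time window.

    Illustrative sketch only -- not QRecall's implementation.
    """

    def __init__(self, max_messages=1_000_000, window_seconds=3600):
        self.max_messages = max_messages
        self.window_seconds = window_seconds
        self.window_start = time.monotonic()
        self.count = 0
        self.muted = False

    def allow(self):
        """Return True if this message should be written to the log."""
        now = time.monotonic()
        if now - self.window_start >= self.window_seconds:
            # A new window begins: reset the budget and unmute.
            self.window_start = now
            self.count = 0
            self.muted = False
        if self.muted:
            return False
        self.count += 1
        if self.count > self.max_messages:
            # Budget exhausted; the caller gets one chance to log a
            # final "log output muted" message before going silent.
            self.muted = True
            return False
        return True
```

A logger would call `allow()` before each write, so a runaway process stops filling the disk after the first million messages of the hour.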
|
|
|
Olfan,
Olfan wrote:By the time I took notice the log file was some 180GiB in size. I panicked, killall'ed QRecallHelper and deleted the QRecall.log so it wouldn't choke my Mac with clogged local storage.
That was clearly the right thing to do. Once the connection to the volume was broken, the helper process was useless anyway. QRecall's pretty fanatical about logging everything it does, but even I'm having trouble thinking of anything that would generate 180GB of log data without stopping. Most logging is self-limiting: you get an error, or three, or a hundred, but ultimately the process gives up, logs one final "I've given up" message, and terminates. The only code that will log an error and continue to plow ahead is during a repair, and that code (at least in QRecall 3.0) does limit the number of messages it logs before logging just a summary. There is also code that corrects slightly damaged data, but if the drive was disconnected there's no way successive corrections could be successful. So without a peek at what was getting logged, I can't offer much in the way of useful suggestions, other than what you've already done.
|
|
|
The server transition was a little rougher than we'd hoped, but the new site is up and running. Let us know if you encounter any more problems.
|
|
|
We have a complete site redesign in the works; it just takes time. Until then, you can safely reach the site at http://www.qrecall.com/
|
|
|
Darryl, www.dawntodusksoftware.com is not a scam. That's the domain for the private company that develops QRecall. The site that's there was not supposed to be public (I forgot to exclude it from Google the last time it was uploaded). It's actually a sandbox to test the new www.qrecall.com site, which we expect to go live about the same time that QRecall 3.0 is released. The new site and the QRecall 3.0 release should happen fairly soon (within the next couple of months).
|
|
|
Steven J Gold wrote:In this case, the updated apps will be captured and the old versions of the updated apps will eventually disappear from the archive, correct? And same thing for the apps in /Applications which have been deleted (and not replaced) since the last capture of /Applications, correct?
Correct and correct!
|
|
|
Steven J Gold wrote:Will they remain forever (or at least until replaced by another one-time capture?)
This is the correct answer. They will remain forever ... unless recaptured or manually deleted.
or will they be deleted from the archive by a future rolling merge?
They can never be deleted or replaced by capturing something else. The fundamental conceptual model of QRecall is that each layer captures just what has changed in the captured items, and merging layers containing the same items combines them into a single set of changes, essentially creating a single layer as if the earlier captures had not happened. In the case of an archive with two non-overlapping items, there's nothing to merge[1]. So recapturing your home folder will never combine with or replace any data captured in your /Applications folder, or items captured from another volume, or items captured by another owner. Those items are in other branches of the archive. Let us know if that helps clarify the concept.

[1] That's not technically true, although it is conceptually. If you capture /Applications you have a layer with /Applications. Then if you capture your home folder you now have a second layer with just your home folder. When these layers are merged, you end up with a single layer containing both /Applications and your home folder, just as if you had performed a single capture of those two items. But since these items don't overlap, no items are combined. The same thing happens when you merge layers with items captured from different volumes.
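The merge behavior described above can be shown with a toy sketch. This is purely conceptual (Python); a layer here is just a path-to-version map, which is an assumption for illustration, not QRecall's data model:

```python
def merge_layers(older: dict, newer: dict) -> dict:
    """Merge two layers of captured items into one, as if a single
    capture had recorded both.

    Later captures of the same item win; non-overlapping items
    (e.g. /Applications vs. a home folder) simply coexist in the
    merged layer. Illustrative sketch only.
    """
    merged = dict(older)
    merged.update(newer)  # newer versions of the same item replace older ones
    return merged
```

Merging a layer containing only /Applications with a layer containing only a home folder yields one layer holding both, with neither replacing anything in the other.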
|
|
|
Johannes wrote:I am trying to understand the concept behind the stacks.
An archive and a stack are logically equivalent, but physically different. In both, each layer represents the file data that changed since the previous layer. In an archive, all of the data is stored together in one big pool. In a stack, the minimum data required to describe each layer is stored in individual "chunks" (be that files or data objects) which are physically isolated from one another. This means the archive is efficient at tasks that require all of the information (capture, merge, etc.) while stacks are very efficient at copying and replacing individual layers with changes.
To me it looks like it offers mainly another level of redundancy.
That's exactly what it is. A stack is an efficient clone of an archive, organized in such a way that layers can be individually added and updated.
I have two scenarios where this might be of use, but I am not sure:
1) Instead of a file system backup of an archive to another disk, I can now use a stack. Here the advantage seems clear: instead of copying the whole file every time, stacks can update incrementally. Very handy if the backup location is online and bandwidth limited. Right?
Correct, and this is the primary use case of a stack. To have a (probably remote) copy of your archive that can be quickly and efficiently updated with new data as the archive grows and changes.
2) Instead of two independent archives on two disks I can now have an archive on one disk and the stack on another. What's the advantage of stacks here?
A stack doesn't directly replace this scenario, but it does support off-site swapping with three disks:
1. A primary drive with an archive that gets updated regularly.
2. A removable drive (A) containing a stack that duplicates the primary archive.
3. Another removable drive (B) containing a second stack that duplicates the primary archive.
Then your backup strategy looks like this: constantly keep the archive up-to-date with captured files. Occasionally update the stack (A) on the first removable drive from the archive. On a regular (typically weekly) schedule, take the first removable drive (A) off-site, pick up the second removable drive (B), and bring it back. Immediately update the second removable stack (B) with all accumulated changes in the archive. Repeat.
The worst likely disaster (e.g., a fire) is that you lose both the archive and the local removable stack drive. You'd then recover from the off-site stack. A more likely scenario is that any one of the drives fails. If the archive drive fails, simply restore it from the most recently updated stack. If a stack drive fails, simply replace it and create a new stack.
And one more question: Is it planned to support stacks via FTP?
That is possible, and within the design, but so far (at least until today) no one has asked for it. We're currently concentrating on filesystem, AWS, AWS-compatible, Dropbox, Google Drive, and iCloud based stacks. But adding FTP wouldn't be difficult. (We've also considered R/W optical media.) I hope that helps.
|
|
|
Mark Gerber wrote:I'm pretty sure that my rolling merge from years ago maxed out at two or three years. So I think what probably happened was I must have activated the schedule without realizing it and a merge of some sort was performed after yesterday's capture.
You think correctly. The rolling merge has combined all of the history of Disk 1, Disk 2, and Disk 3 into a single layer, keeping only the last captured items in each volume. However, these are not duplicates. The single layer contains all three volumes. The contents of those volumes are separate from one another, and you'd have to look at the capture date of a volume (or any item in the volume) to tell how far back it goes.
I'm interpreting this new state as there is only one layer duplicated three times.
There is a single layer that contains three separate volumes.
A quick look in one of those disks' ~/Documents folder shows I have some files going back to The Early Days so I guess everything is flattened but safe. I imagine it's what I would have done anyway but would have preferred agonizing over the choice for a few days.
Correct and correct.
Given all this, should I delete Disks 1 and 2 as duplicates of Disk 3 and move forward with just Disks 3 and 4?
Given that your rolling merge only maintains about three years of history, there's no point in trying to combine Disk 1, 2, or 3 with anything. As soon as you combine these volumes with Disk 4, the next merge will delete them (because they're too old). I suggest you simply select Disk 1, Disk 2, and Disk 3 and delete them (Archive > Delete Items). It will be a lot faster than combining them, only to have the next merge discard them.
I should clarify that all four disks I'm referring to are listed under one owner/volume. I can only select one at a time, and the Archive > Combine Items menu is grayed out.
An archive contains owners. Owners contain volumes (disks). Volumes contain files and folders. If you open an owner in the browser and are looking at a list of volumes, you should absolutely be able to select more than one using Shift-click or Command-click. If you're having trouble, try a different view: switch to list view, select Disk 1, then hold the Command key and click Disk 2 and Disk 3. Now you can choose Archive > Delete Items to remove all three at once.
And then, as you wrote, my next compact will drastically reduce the size of the archive and reduce the time spent during capture/merge actions. Hope I have that right.
Absolutely correct, with the possible exception of the word "drastically". QRecall's data de-duplication means that the same file captured in Disk 4, Disk 3, Disk 2, and Disk 1 is only stored once, and deleting three of those references doesn't remove that data. It does reduce the metadata for the other three references, but metadata records are typically only about 1%-2% of an archive. Capture won't go much faster, because capture only compares new files with the volume being recaptured (Disk 4). The contents of Disk 3, 2, and 1 are irrelevant and are not consulted during the recapture. The actions that will be substantially faster are merge, compact, and verify, since you will have removed hundreds of millions of file and folder records that no longer need to be considered.
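The de-duplication behavior described here can be sketched as a content-addressed store. This is a conceptual toy (Python), not QRecall's actual data structures; the class and method names are invented for illustration:

```python
import hashlib

class DedupStore:
    """Content-addressed store: identical file data is kept once,
    however many volumes reference it. Illustrative sketch only."""

    def __init__(self):
        self.blocks = {}  # digest -> file data, stored exactly once
        self.refs = {}    # (volume, path) -> digest: small metadata records

    def capture(self, volume, path, data):
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)  # no-op if data already stored
        self.refs[(volume, path)] = digest

    def delete_volume(self, volume):
        # Deleting a volume drops only its metadata records;
        # shared data stays behind for the remaining volumes.
        for key in [k for k in self.refs if k[0] == volume]:
            del self.refs[key]

    def compact(self):
        # Reclaim only data that no volume references any longer.
        live = set(self.refs.values())
        for digest in [d for d in self.blocks if d not in live]:
            del self.blocks[digest]
```

Capturing the same file in four volumes stores its data once; deleting three of the volumes removes three small reference records, and a compact reclaims nothing until the last reference is gone.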
|
|
|
Mark, Glad to hear you're doing a little spring cleaning. Your current situation isn't grossly inefficient, because all four volumes still share the same data. So any file captured in Disk 4 shares its data with the same files in Disk 3, 2, and 1.

But it is a little inefficient. Having multiple copies of the same volume means there's an initial layer for each volume with a complete copy of your entire directory system (essentially all of the file and folder metadata). But since metadata is typically only 1% to 2% of an archive, this isn't a big deal. It also prevents the merge from discarding the oldest versions of files in Disks 1, 2, and 3, because they're not part of the Disk 4 history. And finally, it makes it hard to find a really old file, because you have to search for it across four different volumes.

Since all four volumes are essentially the same volume, I would recommend combining them. Then the rolling merge, compact, and search will all work the way they're supposed to. But before you begin, I would examine your rolling merge and see how far into the past it preserves layers. If it's 5 years or less, there's no point in keeping Disk 1 and 2 at all, since the next rolling merge will merge those layers with the layers of Disk 3 (essentially removing the older volumes). If this is the case, I'd recommend you start by deleting volumes Disk 1 and Disk 2 from the archive, and then combine the remaining Disk 3 and Disk 4.

If your rolling merge does go back more than 5 years, and you really want to keep all of that history, then just merge all of the volumes. And it's important to merge them all at once; don't do it piecemeal or you may not be able to merge some of them. Navigate to the root of the archive, select all of the volumes, and choose Archive > Combine Items.
If the combine is successful (there are some obscure technical reasons why it might not be possible), the history of all four volumes will be combined and you'll be left with a single volume (Disk 4) with a unified history. If the volumes can't be combined, try combining just Disk 2 through 4, or just 3 and 4, and then decide how long you want to keep the history in Disk 1 (and 2). In the end, your storage should be slightly more efficient and the next compact action will probably reduce the size of your archive. Good luck!
|
|
|
Paul Sheraton wrote:will qRecall do a backup and restore my whole MacOS including all settings and configurations? (like TimeMachine does).
Absolutely. QRecall lets you choose exactly how much you capture. This can include (or exclude) all of your user and/or system settings. If you capture the entire volume, all modifiable system files, along with all users, their documents, and their preferences, will be captured.

Modern macOS (10.15 "Catalina" and later) installations add a little bit of a wrinkle to this. A macOS startup volume is now two volumes: an immutable image of the macOS system software called the "System" volume, and a companion mutable volume called the "Data" volume, which stores all of your user data and everything that's modifiable. When you capture a startup volume, QRecall actually captures just the "Data" volume. The "System" volume is cryptographically signed by Apple and can only be restored by the Apple installer, so there's no point in capturing it, or trying to restore it.

To recover a startup volume, create a new APFS volume, restore the captured volume using QRecall (it will now contain just the "Data" portion), and then install macOS on that volume using the macOS installer (which can be done directly from the Internet using recovery mode). The installer will split the volume, install the "System" volume, and make the whole thing bootable again.
Also why doesn't this forum use SSL ?
This is because our server was designed and engineered long before HTTPS became common/ubiquitous/preferred/required. We're in the process of transitioning to a new set of servers this year, so the website, forums, diagnostic report tracking, account management, and sales will all be running over HTTPS.
|
|
|
|
|