Messages posted by: James Bucanek
Forum Index » Profile for James Bucanek » Messages posted by James Bucanek
Steven J Gold wrote:In this case, the updated apps will be captured and the old versions of the updated apps will eventually disappear from the archive, correct? And same thing for the apps in /Applications which have been deleted (and not replaced) since the last capture of /Applications, correct?

Correct and correct!
Steven J Gold wrote:Will they remain forever (or at least until replaced by another one-time capture?)

This is the correct answer. They will remain forever ... unless recaptured or manually deleted.

or will they be deleted from the archive by a future rolling merge?

They can never be deleted or replaced by capturing something else.

The fundamental conceptual model of QRecall is that each layer captures just what has changed in the captured items. Merging layers containing the same items combines them into a single set of changes, essentially creating a single layer as if the earlier captures had never happened.

In the case of an archive with two non-overlapping items, there's nothing to merge[1]. So recapturing your home folder will never combine with or replace any data captured in your /Applications folder, or items captured from another volume, or items captured by another owner. Those items are in other branches of the archive.

Let us know if that helps clarify the concept.

Footnote [1]: That's not, technically, true although it's conceptually true. If you capture /Applications you have a layer with /Applications. Then if you capture your home folder you now have a second layer with just your home folder. When these layers are merged, you end up with a single layer containing both /Applications and your home folder, just as if you had performed a single capture of those two items. But since these items don't overlap, no items are combined. Again, the same thing happens when you merge layers with items captured from different volumes.
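The layer-merge behavior in that footnote can be sketched as a toy dictionary merge. This is purely illustrative (the names and structure are mine, and QRecall's real on-disk format is nothing this simple), but it shows why non-overlapping items never combine or replace one another:

```python
# Toy model of QRecall layers (illustrative only, not the real format).
# Each layer maps item paths to the version captured in that layer.

def merge_layers(older, newer):
    """Combine two adjacent layers into one; newer versions of an item win."""
    merged = dict(older)
    merged.update(newer)
    return merged

layer1 = {"/Applications/Safari.app": "v1"}   # capture of /Applications
layer2 = {"/Users/me/report.doc": "v1"}       # later capture of the home folder

combined = merge_layers(layer1, layer2)

# Non-overlapping items: nothing is replaced, both survive in one layer,
# just as if a single capture of both items had been performed.
assert combined == {"/Applications/Safari.app": "v1",
                    "/Users/me/report.doc": "v1"}
```

Only when the same path appears in both layers would the newer version supersede the older one in the merged result.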
Johannes wrote:I am trying to understand the concept behind the stacks.

An archive and a stack are logically equivalent, but physically different.

In both, each layer represents the file data that changed since the previous layer.

In an archive, all of the data is stored together in one big pool. In a stack, the minimum data required to describe each layer is stored in individual "chunks" (be that files or data objects) which are physically isolated from one another.

This means the archive is efficient at tasks that require all of the information (capture, merge, etc.) while stacks are very efficient at copying and replacing individual layers with changes.
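That difference in physical layout can be sketched with a toy model (again, purely illustrative; the chunk names and structure here are invented, not QRecall's actual format). The point is that updating a stack only ever transfers the chunks it doesn't already have:

```python
# Toy contrast between an archive (one shared data pool) and a stack
# (one isolated chunk per layer). Illustrative only.

archive_layers = ["layer1-data", "layer2-data"]   # the archive's layers

def stack_chunks(layers):
    # A stack stores each layer as its own chunk (a file or data object).
    return {f"chunk-{i}": data for i, data in enumerate(layers)}

def update_stack(stack, layers):
    # Updating a stack copies only the chunks it doesn't already have.
    new = {k: v for k, v in stack_chunks(layers).items() if k not in stack}
    stack.update(new)
    return list(new)

stack = stack_chunks(archive_layers)      # initial clone of the archive
archive_layers.append("layer3-data")      # the archive grows by one layer
copied = update_stack(stack, archive_layers)

assert copied == ["chunk-2"]              # only the new layer was transferred
```

Copying the pooled archive, by contrast, means handling the whole file every time, which is exactly why stacks win for remote or bandwidth-limited destinations.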

To me it looks like it mainly offers another level of redundancy.

That's exactly what it is. A stack is an efficient clone of an archive, organized in such a way that layers can be individually added and updated.

I have two scenarios where this might be of use, but I am not sure:

1) Instead of a file system backup of an archive to another disk, I can now use a stack. Here the advantage seems clear: instead of copying the whole file every time, stacks can update incrementally. Very handy if the backup location is online and bandwidth-limited. Right?

Correct, and this is the primary use case of a stack: to have a (probably remote) copy of your archive that can be quickly and efficiently updated with new data as the archive grows and changes.

2) Instead of two independent archives on two disks, I can now have an archive on one disk and the stack on another. What's the advantage of stacks here?

A stack doesn't directly replace this scenario, but does support off-site swapping with three disks:

1: A primary drive with an archive that gets updated regularly.
2: A removable drive (A) containing a stack that duplicates the primary archive.
3: Another removable drive (B) containing a second stack that duplicates the primary archive.

Then your backup strategy looks like this: constantly keep the archive up-to-date with captured files. Occasionally update the stack (A) on the first removable drive from the archive. On a regular (typically weekly) schedule, take the first removable drive (A) off-site, pick up the second removable drive (B), and bring it back. Immediately update the second removable stack (B) with all accumulated changes in the archive. Repeat.

The worst likely disaster (e.g., a fire) is that you lose both the archive and the local removable stack drive. You'd then recover from the off-site stack. A more likely scenario is that any one of the drives fails. If the archive drive fails, simply restore it from the most recently updated stack. If a stack drive fails, simply replace it and create a new stack.
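The weekly rotation can be sketched as a tiny simulation (the structure here is mine, just to illustrate the schedule): at every point, one recently updated stack is off-site, which is what bounds the damage from the worst-case disaster.

```python
# Toy simulation of the three-drive rotation: the archive stays on the
# primary drive, while stacks A and B alternate between on-site and
# off-site each week. Illustrative only.

def rotate(weeks):
    onsite, offsite = "A", "B"
    history = []
    for week in range(weeks):
        # Update the on-site stack from the archive, then swap it with
        # the off-site drive on the weekly trip.
        history.append({"week": week, "updated": onsite, "offsite": offsite})
        onsite, offsite = offsite, onsite
    return history

log = rotate(3)
assert [h["updated"] for h in log] == ["A", "B", "A"]
```

After each swap, the freshly updated stack is the one that leaves the building, so losing the primary drive and the local stack simultaneously still leaves a recoverable copy at most one rotation period old.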

And one more question:
Is it planned to support stacks via FTP?

That is possible, and within the design, but so far (at least until today), no one has asked for it. We're currently concentrating on filesystem, AWS, AWS-compatible, Dropbox, Google Drive, and iCloud based stacks. But adding FTP wouldn't be difficult. (We've also considered R/W optical media.)

I hope that helps.
Mark Gerber wrote:I'm pretty sure that my rolling merge from years ago maxed out at two or three years. So I think what probably happened was I must have activated the schedule without realizing it and a merge of some sort was performed after yesterday's capture.

You think correctly. The rolling merge has combined all of the history of Disk 1, Disk 2, and Disk 3 into a single layer, keeping only the last captured items in each volume.

However, these are not duplicates. The single layer contains all three volumes. The contents of those volumes are separate from one another, and you'd have to look at the capture date of each volume (or any item in it) to tell how far back it goes.

I'm interpreting this new state as there is only one layer duplicated three times.

There is a single layer that contains three separate volumes.

A quick look in one of those disks' ~/Documents folder shows I have some files going back to The Early Days so I guess everything is flattened but safe. I imagine it's what I would have done anyway but would have preferred agonizing over the choice for a few days.

Correct and correct.

Given all this, should I delete Disks 1 and 2 as duplicates of Disk 3 and move forward with just Disks 3 and 4?

Given that your rolling merge only maintains about three years of history, there's no point in trying to combine Disk 1, 2, or 3 with anything. As soon as you combine these volumes with Disk 4, the next merge will delete them (because they're too old).

I suggest you simply select Disk 1, Disk 2, and Disk 3 and delete them (Archive > Delete Items). It will be a lot faster than combining them, only to have the next merge discard them.

I should clarify that all four disks I'm referring to are listed under one owner/volume. I can only select one at a time and the Archive > Combine Items… menu is grayed out.

An archive contains owners. Owners contain volumes (disks). Volumes contain files and folders. If you open an owner in the browser and are looking at a list of volumes, you should absolutely be able to select more than one using Shift+click or Command+click. Try a different view (list view, for example) if you're having problems.

I suggest switching to list view: select Disk 1, then, while holding the Command key, click Disk 2 and Disk 3. Now you can choose Archive > Delete Items to remove all three at once.

And then, as you wrote, my next compact will drastically reduce the size of the archive and reduce the time spent during capture/merge actions. Hope I have that right.

Absolutely correct, though perhaps without the word "drastically".

QRecall's data de-duplication means that the same file captured in Disk 4, Disk 3, Disk 2, and Disk 1 is only stored once, and deleting three of those references doesn't remove that data. It does remove the metadata for those three references, but metadata records typically make up only 1%-2% of an archive. Capture won't go much faster because capture only compares new files with the volume being recaptured (Disk 4). The contents of Disk 3, 2, and 1 are irrelevant and are not consulted during the recapture.
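The de-duplication behavior described here can be sketched with a toy content-addressed store (names and structure are mine, purely illustrative; QRecall's real record format is different): identical data captured from several volumes is stored once, and deleting volumes removes only their cheap metadata references.

```python
# Toy model of content de-duplication. Illustrative only.
import hashlib

store = {}           # digest -> file data (stored exactly once)
references = []      # (volume, path, digest) records (the "metadata")

def capture(volume, path, data):
    digest = hashlib.sha256(data).hexdigest()
    store.setdefault(digest, data)           # no-op if data already stored
    references.append((volume, path, digest))

payload = b"same application binary"
for disk in ("Disk 1", "Disk 2", "Disk 3", "Disk 4"):
    capture(disk, "/Applications/App", payload)

assert len(store) == 1        # the data exists once
assert len(references) == 4   # four small metadata records point at it

# Deleting three volumes removes only their references; the data stays
# because Disk 4 still points at it.
references = [r for r in references if r[0] == "Disk 4"]
assert len(store) == 1 and len(references) == 1
```

This is why deleting the older volumes mostly shrinks the metadata, not the bulk data, and why merge/compact/verify speed up far more than capture does.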

The actions that will be substantially faster are merge, compact, and verify, since you will have removed hundreds of millions of file and folder records that no longer need to be considered.
Mark,

Glad to hear you're doing a little spring cleaning.

Your current situation isn't grossly inefficient because all four volumes still share the same data. So any file captured in Disk 4 shares its data with the same files in Disk 3, 2, and 1.

But it is a little inefficient. Having multiple copies of the same volume means there's an initial layer for each volume with a complete copy of your entire directory system (essentially all of the file and folder metadata). But since metadata is typically only 1% to 2% of an archive, this isn't a big deal.

It also prevents the merge from discarding the oldest versions of files in Disks 1, 2, and 3 because they're not part of the Disk 4 history.

And finally, it makes it hard to find a really old file because you have to search for it across four different volumes.

Since all four volumes are essentially the same volume, I would recommend combining them. Then the rolling merge, compact, and search will all work the way they're supposed to.

But before you begin, I would examine your rolling merge and see how far into the past it preserves layers. If it's 5 years or less, there's no point in keeping Disk 1 and 2 at all, since the next rolling merge will merge those layers with the layers of Disk 3 (essentially removing the older volumes). If this is the case, I'd recommend you start by deleting volumes Disk 1 and Disk 2 from the archive, and then combine the remaining Disk 3 and Disk 4.

If your rolling merge does go back more than 5 years, and you really want to keep all of that history, then just merge all of the volumes. And it's important to merge them all at once; don't do it piecemeal or you may not be able to merge some of them. Navigate to the root of the archive, select all of the volumes, and choose Archive > Combine Items.

If the combine is successful (there are some obscure technical reasons why it might not be possible), the history of all four volumes will be combined and you'll be left with a single volume (Disk 4) with a unified history. If the volumes can't be combined, try combining just Disk 2 through 4, or just 3 and 4, and then decide how long you want to keep the history in Disk 1 (and 2).

In the end, your storage should be slightly more efficient and the next compact action will probably reduce the size of your archive.

Good luck!

Paul Sheraton wrote:will QRecall do a backup and restore my whole macOS including all settings and configurations? (like Time Machine does).

Absolutely.

QRecall lets you choose exactly how much you capture. This can include (or exclude) all of your user and/or system settings.

If you capture the entire volume, all modifiable system files along with all users, their documents and preferences will be captured.

Modern macOS (10.15 "Catalina" and later) installations add a little bit of a wrinkle to this. A macOS startup volume is now two volumes: an immutable image of the macOS system software called the "System" volume, and a companion mutable volume called the "Data" volume which stores all of your user data and everything that's modifiable.

When you capture a startup volume, QRecall actually captures just the "Data" volume. The "System" volume is cryptographically signed by Apple and can only be restored by the Apple installer. So there's no point in capturing it, or trying to restore it.

To recover a startup volume, create a new APFS volume, restore the captured volume using QRecall (it will now contain just the "Data" portion), and then install macOS on that volume using the macOS installer (which can be done directly from the Internet using recovery mode). The installer will split the volume, install the "System" volume, and make the whole thing bootable again.

Also, why doesn't this forum use SSL?

This is because our server was designed and engineered long before HTTPS became common/ubiquitous/preferred/required. We're in the process of transitioning to a new set of servers this year, after which the website, forums, diagnostic report tracking, account management, and sales will all run over HTTPS.
Jeffery,

Sorry to hear you're having problems.

Thank you for sending a diagnostic report. This helps immensely.

It appears from your report that you have relocated your home folder to a different volume. This greatly complicates the QRecall installation. Privileged executables, system daemons, XPC services, and some user agents must reside on the startup volume. For security reasons, macOS won't launch some executables if they reside on a non-startup volume.

To work around this limitation, QRecall relocates some of its components to a special system directory when the user's home folder is not on the startup volume. Specifically, it creates this path:


where "504" is the UID of the user installing QRecall.

Somehow, your installation has gone sideways because this directory has the wrong owner:

The "504" directory is owned by user 505, not 504, and QRecall (running as user 504) can't see or modify the components in this directory. So the installer steps fail, and anything that is installed there won't launch.

The other problem logged is "Failed to install Privilege Elevation service", which indicates you have a mis-installed privileged helper. This sometimes happens in macOS, where you go to install a privileged service, macOS prompts for administrator's authorization, and then something goes wrong. The helper doesn't get installed, but macOS won't prompt to replace it, and you're stuck with a non-functional installation.

To rectify both of these, I would suggest the following:

  • Delete these paths:

  • Restart your system

  • Launch QRecall and let it reinstall itself again


  • This will eliminate the mis-installed support folder for user 504 and manually uninstall the privileged helper.

    Please send a diagnostic report afterwards so I can verify the correct installation (and look for possible QRecall bugs in how your situation was handled).
    Bruce Giles wrote:I did get QRecall 3 beta 78 running on my Mac at work.

    Bruce,

    I'm assuming you didn't see my email about your earlier problems. I included a link to a pre-release build of QRecall 3.0b79 for you to try. I was particularly interested in how 3.0b79 dealt with your 2.x archives. I expect 3.0b78 to have the same problem(s) as 3.0b76, in that regard.

    I'll send the email again (for the details), but you can download the pre-release 3.0b79 and try it. I was hoping to evaluate this change to decide whether to include or exclude it in the next release.

    In the meantime, I'll review the diagnostic reports you just sent.
    Promises of an iCloud Drive compatible stack container were premature.

    Over the past few weeks, we've encountered some technical difficulties in reliably using iCloud Drive as a stack container. We are committed to getting this to work, but it will take some more time.

    We've tested the existing Document stack in Dropbox and Google Drive and it appears to be working just fine.

    For Dropbox:

  • Install the Dropbox extension.

  • In Dropbox > Preferences > Sync, turn on Smart Sync (Save hard drive space automatically).

  • Create a Document stack container and store it anywhere in the Dropbox location.


For Google Drive:

  • Install the Google Drive extension.

  • In Google Drive > Preferences > Google Drive, turn on Stream Files.

  • Create a Document stack container and store it anywhere on the Google Drive volume.


These are not ideal solutions, but they work. We will probably develop Dropbox- and Google Drive-specific containers, but that's further down the to-do list.
    Bruce, that is quite the story of woe! Sorry to hear you're having such a rough time.

    First, please send a diagnostic report. I'm particularly interested in your crash logs, because you're getting a lot of crashes for some reason (both versions!) and I think we should start there.

    M Wang wrote:Reading this gives me hope that maybe Google Drive or OneDrive will be supported someday?

    QRecall stacks might be compatible with Google Drive or OneDrive already; I just haven't tested those yet.

    Basically, QRecall stacks should work on any cloud drive service that (1) automatically uploads items stored on the local "drive" to the cloud, (2) replaces the local copy with a placeholder, and (3) transparently downloads the original item again whenever that file is accessed.

    Most cloud drives work this way.

    The problem with iCloud (which may, or may not, be an issue with Google or OneDrive—again, haven't tested them yet) is that iCloud treats the entire stack container as a single document. So simply trying to check the status of the stack container document ends up re-downloading all of its individual parts, which completely defeats the purpose of using a cloud document.
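The granularity issue can be sketched with a toy placeholder model (the class and names here are mine, just to illustrate the behavior): when each chunk has its own placeholder, checking status costs nothing, but when the whole container is one document, any access forces every part to be re-downloaded.

```python
# Toy model of cloud-drive placeholder granularity. Illustrative only.

downloads = []   # record of what got re-downloaded from the cloud

class Placeholder:
    """A local stand-in for an evicted cloud file."""
    def __init__(self, name):
        self.name = name
    def status(self):
        return "in cloud"            # metadata only; no download occurs
    def read(self):
        downloads.append(self.name)  # transparent re-download on access
        return f"data:{self.name}"

chunks = [Placeholder(f"chunk-{i}") for i in range(3)]

# Per-file placeholders (Dropbox/Google Drive style): status checks are free.
statuses = [c.status() for c in chunks]
assert downloads == []

# Single-document container (the iCloud problem): any inspection of the
# container ends up materializing every part first.
for c in chunks:
    c.read()
assert downloads == ["chunk-0", "chunk-1", "chunk-2"]
```

A stack only works as a cloud citizen if the service leaves its individual chunks evicted until they're genuinely needed.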

    The next update to QRecall 3 will address this.

    I have been running v3 betas on my system for a few months without issues, but have yet to try the "stacks" feature, mainly because it seems to be something useful only for backing up to the cloud. Am I right? Or is it something worth doing even when backing up locally?

    Stack documents can also be used for local redundancy. Specifically, a lot of users capture their documents to an archive. They then either sync or copy that archive to a second drive which is taken off-site, or they rotate between a set of archives (at least one of which is always off-site).

    Stacks can simplify this off-site drive rotation, and are much more efficient at it.
    Pierre,

    Thanks for the diagnostic report. There's definitely a bug here, as the log reports that there are aliases to old archives and QRecall is deleting them ... but they never get deleted. I'll add this to the bug list.

    Did manually deleting the alias files in the ~/Library/Preferences/QRecall/Recent Captures folder solve your problem?
    Pierre,

    Sorry for the inconvenience.

    This message occurs when a capture updates the list of recently captured archives, but finds a broken link to an archive that's been deleted or moved. It should only happen once.

    Start by sending a diagnostic report (in the QRecall app, choose Help > Send Report). There might be some clue as to why the link to your old archive isn't getting automatically removed.

    To stop this message, open your (home folder)/Library/Preferences/QRecall/Recent Captures folder. Select all of the files in that folder and trash them. New links will be automatically recreated the next time each archive is captured.
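For the scripting-inclined, that manual step amounts to emptying one folder. Here's a hedged sketch (my own helper, demonstrated against a temporary directory rather than the live folder; on a real system you'd point it at the Recent Captures folder named above, or just trash the files in the Finder):

```python
# Sketch of the cleanup step: remove every file in a folder so that
# fresh links can be recreated later. Demonstrated on a temp directory.
import tempfile
from pathlib import Path

def clear_folder(folder: Path) -> int:
    """Delete the regular files in `folder`; return how many were removed."""
    removed = 0
    for entry in folder.iterdir():
        if entry.is_file():
            entry.unlink()
            removed += 1
    return removed

demo = Path(tempfile.mkdtemp())
for name in ("ArchiveA.alias", "ArchiveB.alias"):
    (demo / name).write_bytes(b"stale link")   # stand-ins for stale aliases

assert clear_folder(demo) == 2
assert list(demo.iterdir()) == []
```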

    These links are actually only used for the Reveal in Archive service, so unless you use that feature before the next capture you won't notice anything different.

    Finally, you might also have stale archive status information. In the QRecall app, choose Window > Status. Right-click the obsolete archive and choose "Forget". Alternatively, you can just wait; obsolete status information is discarded automatically after 7 weeks of inactivity.
    Steven J wrote:1. Any plans to allow iCloud to be a stack destination? i.e., create a stack container on iCloud Drive?

    That should be possible within the next couple of weeks.

    2. Does the beta respect file attributes set on an individual file/folder from version 2 via the Services menu?

    Capture preferences have not changed, and both versions 2 and 3 support the same set of options.

    3. Can the Beta run along with version 2, or does version 2 need to be removed in order to test the beta? (I notice many of the component names are the same).

    Sadly, no. Two versions of QRecall cannot coexist. An archive you use with version 3 will be upgraded and will not be backwards compatible with version 2. If you save the original archive, you reserve the option of uninstalling version 3 and reinstalling version 2.

    4. Can the archive destination be either HFS or APFS? Any advantage using one over the other?

    There are some advantages to using APFS volumes for your archive. QRecall now takes advantage of file cloning, so actions like capture start and finish faster, and there's less chance the archive can get left in a state that requires it to be repaired. The disadvantage is that APFS volumes tend to get more fragmented (hurting performance), which is something we're working to address.
    Steven M. Alper wrote:But my point is that I don't think QRecall should ever cause a disk to unexpectedly eject, no matter what's going on.

    That's an excellent point, and I can assure you QRecall would never do such a thing. In fact, QRecall can't do such a thing even if it wanted to—and it would never want to.

    The most an application can/should do is request that a volume be unmounted. QRecall only does this for actions on external volumes following the successful execution of an action. It makes no such requests at any other time.

    Furthermore, this is a request, not a command. The filesystem is the entity that unmounts a volume, and will only do so after all files on that volume have been closed. So any open files (and repair will have multiple files open until it's done) will prevent any software from unmounting the volume[1].

    Finally, there's no way to unmount a volume (unless the drive can be physically ejected) in such a way that it can't be remounted again.

    My conclusion: there's something wrong with that volume/drive. I'd definitely perform a volume repair to look for structural damage (which is more likely the fuller a volume gets). But the fact that the volume can't be mounted again makes me suspect hardware issues, which are often bus related. Try switching communication paths (e.g., switch from FireWire to USB), or change drive enclosures, if you have that option.

    Steven M. Alper wrote:I will probably move one of the damaged archives to another drive and see if I can repair it there ...

    This is the most sensible approach.

    An alternative, which is also an experiment, would be to use QRecall's "Recover" mode when doing the repair. This mode only reads data from the source volume, and writes the reconstructed archive to a different volume. The reconstructed archive will have no empty space, although it's possible that it could be made smaller still by compacting again. This is much faster than copying an entire archive and then repairing the copy, because the repair and copy happen in a single pass.

    The experiment: if the volume spontaneously ejects while simply being read, there's definitely a hardware problem.

    [1] There is a "force eject" function in the OS that can force a volume to eject even with open files. QRecall certainly never calls this, but some other software might. Still, it's far more likely that volume structure corruption or hardware is the cause.
     