Dawn to Dusk Software is pleased to announce the (belated) start of the QRecall 3.0 beta test program. To get started, go to the QRecall Download page. If you already have a permanent or trial identity key, you can continue to use that. If you don't have a permanent key, or your trial key has expired, click the button on the download page to obtain a free Beta Identity Key that will be valid for the entire beta test period.

The theme of QRecall 3 is "to the cloud and beyond." The user interface should be familiar, but under the hood is a massive rewrite of the code for improved performance, new technologies (Apple Silicon!), Swift integration, and more. The biggest new feature is Stacks, which you can start reading about in these forum posts: About Stacks, Creating Stacks, Updating Stacks, Recovering From a Stack, and Miscellaneous.

As always, feedback is welcome and encouraged.
|
|
|
Slice Details

In the slices window, double-click on a slice to get a pop-up window with some additional details. If there are issues with the slice, the reason will be displayed.

Partial Slice Transfers

Copying an archive layer to a stack is incremental and can be interrupted. If you stop a transfer before it's finished, the slice that was in the process of being copied will appear as partially transferred when you return to the slices window. Restarting the same copy will pick up right where it left off. You may also find it useful to set a time limit on your cascade action, letting it run for, say, no more than 9 hours at a time. Any unfinished slice transfer will resume the next time the cascade action is started. If you manually start a different copy, one that does not include the partially transferred slice, that partial slice is discarded.

Verifying Stacks

You can manually verify a stack's structure and/or data, or create a verify stack action that can be scheduled to run periodically. To verify a stack manually, show the stack's configuration pane in the archive's right sidebar. To the right of the stack's name, click the action button and choose one of the Verify commands. There are three levels:
Quick: Test that the stack is reachable and all parts are present.
Structure: After performing a quick test, all metadata (file, directory, layer) records are read and compared with those in the archive to make sure they agree. (This typically requires reading about 1-2% of the stack's content.)
Full: After the structure test completes, QRecall transfers all remaining data records and makes sure they are readable and match the data in the archive, where applicable. Note that for AWS S3 and compatible cloud object containers, you should never have to perform a full verify. (In the final version of QRecall this option might not be available for those stack types.) A small sketch of these three levels appears at the end of this post.

Deleting Stack Slices

Just as you can delete recent layers from an archive, you can delete recent layers from a stack. Open the archive, open the slices window, and choose the "Delete slices in" action. Pick the layers in the stack you want discarded and click the Delete button.

Manual Slice Copy

There are two "manual" stack actions: "Copy to" and "Copy from". These actions let you copy any slice from the archive to the stack, or from the stack to the archive, within the rules of slice copying (of course). Use them to update a slice that doesn't want to update (because it wouldn't save enough space, for example), or to recover the details of a merged layer that hasn't been updated yet.

Using (the) Force

Hold down the Option key and "Copy to" and "Copy from" become the "Force copy to" and "Force copy from" actions. These are the same as the basic copy actions, but disable all safeguards. They will let you copy any slice, regardless of whether QRecall thinks it's a good idea or not. Please exercise discretion when using these actions.

Compression

All records copied to a stack are compressed. Recovered slices (copied from a stack back to an archive) remain compressed, so recovered layers/archives may occupy less disk space than they did originally.

Deleting Stacks

It's easy to delete a stack. In the archive's right sidebar, show the stacks pane. To the right of each stack name is an action menu. Click it and choose Delete Stack.... The stack container will be erased and the stack's configuration will be removed from the archive.

Disconnecting Stacks

There are several commands useful in dealing with situations where the stack's container is lost or relocated. The disconnect stack command forgets the stack configuration in an archive without destroying its container. In the action menu, hold down the Option and Control keys and the Delete Stack command changes to Disconnect. This command removes the stack connection from the archive, but leaves the stack container intact. It is your responsibility to delete, relocate, or repurpose the container. But as long as it still exists, you can use the stack container to recover the stack or reconnect the archive to the stack at some later date.

Reconnecting Stacks

If a stack's connection with its container is broken (for example, the container has moved to a different server), you can restore the connection using the Reconnect Stack command. In the stack sidebar, hold down the Option key and choose Reconnect Stack... from the action menu. Fill in the connection details, exactly as if you were recovering from a stack. If the container agrees with the archive, the stack's connection is restored.

Reconnect a Forgotten Stack

If the archive no longer has a connection to the stack, it can still be reconnected. With the archive window open, hold down the Option key and the New > Stack command turns into New > Reconnect Stack. Again, choose the stack's type and fill in the details of the container's location. If successful, the stack will be added back to the archive.
Stack Container Format

Currently, the format and structure of all stack containers are interchangeable, if not identical. For example, if you have a stack on AWS S3 and want to relocate it to a different cloud storage service, you can (1) disconnect the archive from the AWS stack container, (2) manually transfer the contents of the stack container package from AWS to a third-party S3 bucket, and (3) reconnect the archive to the stack, this time providing the S3-compatible bucket credentials instead.
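As promised, here's a compact summary of the three verify levels as a Swift enum. This is purely illustrative; the type and cases are hypothetical, not QRecall's API.

```swift
// Hypothetical summary of the verify levels described above.
enum StackVerifyLevel {
    case quick      // the stack is reachable and all parts are present
    case structure  // quick, plus read every metadata (file, directory,
                    // layer) record and compare it with the archive
                    // (reads roughly 1-2% of the stack's content)
    case full       // structure, plus transfer all remaining data records
                    // and confirm they are readable and match the archive
}
```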
|
|
|
Stacks can be used to recover details erased in a merged layer, fix a repaired layer, or recover an entire archive.

Recovering Details and Healing Damaged Layers

The "Updating a Stack" post said a lot about when merged layers in an archive replace their original layers in the stack. But if those layers haven't been replaced yet, they can be used to restore those details at any point in the future. Also, if an archive becomes damaged somehow, it may now contain repaired layers. If those layers had previously been copied to the stack, they can be recovered simply by replacing the repaired layer(s) with the equivalent layers in the stack.

Both of these tasks are accomplished in the slices window. Open the archive, and then choose Layers > Stack Slices.... (Note that the repair action automatically opens the slices window if the archive has at least one stack and the repair detected damaged/missing data in a layer.) Select the "Recover from" action. Select the slices you want to replace in the archive. They can be layers you want to "un-merge" or repaired layers. You can also restore layers that have been deleted from the archive.

Recovering an Archive

When the disaster is a little bigger than a repaired layer or two, the entire archive can be recovered from the stack, or at least as much of the archive as has been synchronized with the stack. Choose File > Recover From Stack... in the QRecall application. You begin by specifying the type and location of the stack container, exactly as you did when you created the stack. (I trust you saved all of that information somewhere safe.) But instead of prompting you for the name of a new stack container, it will ask you to select an existing stack. You will then be prompted for the name and location of the recovered archive. An empty archive will be created. It will have the same identity and will already be connected to the stack. It will also restore all of your archive's settings. The slices window will automatically open, with the action set to "Recover from". When you created the stack, the archive had layers and the stack was empty. This time, the archive is empty and the stack has layers. Select some or all of the layers to recover and click the Recover button.

WARNING: Archives and stacks all have a unique identity code. This is assigned when the archive and stack are created and is used to ensure the stack belongs to the archive. If you duplicate an archive, you create a situation where there are two archive documents with the same identity. If QRecall ever detects this, it will spontaneously reassign the identity of one of the archives. This can be an issue when recovering an archive because, if the original archive still exists (say you just want to test out this new recovery process), QRecall may spontaneously reassign it a new identity. When this happens, one of the archives will be disconnected from the stack, and it might not be the archive you want. If you want to test the recovery process, first rename the old archive, or make sure it's off-line during your tests. In the worst-case scenario, you may have to delete one of the archives, recover from the stack again, or delete the stack and start over.
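To make the warning concrete, here's a minimal sketch of the identity rule, using hypothetical types; QRecall's actual bookkeeping is internal and may differ.

```swift
import Foundation

// An archive and each of its stacks share an identity assigned at creation.
struct Identity: Equatable { let id: UUID }

// A stack will only exchange data with the archive that carries its identity.
func belongs(stack: Identity, to archive: Identity) -> Bool {
    stack == archive
}

// If two archive documents are found with the same identity, one of them is
// spontaneously reassigned a new identity and no longer satisfies
// belongs(stack:to:) for the stack; hence the advice to rename the old
// archive or take it off-line while testing recovery.
```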
|
|
|
A stack is updated using the cascade action. This is the most common stack action, and one of the few you can automate. A cascade performs two tasks, as needed:
New archive layers that are not yet in the stack are appended to the stack. Each layer becomes a new slice.
A merged layer in the archive replaces multiple, equivalent layers in the stack. This is referred to as "updating a slice."

To get started, you'll want to transfer the initial set of existing layers to the stack. Open the archive and choose Layers > Stack Slices from the menu. This is the slices window. All direct interaction with your stack is performed through this window. The action to perform is selected at the top of the window, and should default to "Cascade to". If it isn't, select the cascade action now. To the right of the action is the destination stack. If you have more than one stack, select the stack you want to work with now. In the middle is the list of slices (matched groups of layers) you can work with. Slices are enabled or disabled depending on what action is selected. The slices that are recommended for updating will be pre-selected automatically.

In the beginning, all you'll see are the layers in your archive. The stack side (on the right) is empty. If you want to transfer all of your layers to the new stack, select them and click Update. And then be patient... You can also select a different set of layers to update. Note that when you select (or deselect) a slice in the list, QRecall may select (or deselect) other slices too. There are rules about which slices can be transferred, and in what order, and the window will enforce those rules.

Automating Updates

Updates are automated using the new cascade action. Create the action and then specify the archive and stack you want to update. There are no options in the action; the slices to be transferred are chosen automatically based on the stack settings you set in the stack's configuration. You can schedule the action like any other action. Run it twice a day or once a month. It's your choice.

About Those Settings...

Stack settings can temper when (or if) slices are transferred. This can reduce the amount of I/O, with the trade-off that the stack isn't as up-to-date as it could be. Unlike an archive, which is assumed to be directly accessible and can be randomly modified, stacks are organized into clusters of data objects that are written once and never change. They may later be deleted, but that is their entire lifecycle. This design means stacks will work efficiently with object storage services (AWS S3), filesystems that copy whole files (WebDAV), file/drive synchronization software (rsync), and cloud drives.

Another consideration is that stack containers may be expensive to access. This includes storage fees and metered I/O. It also might be expensive in terms of time (slow access). And there may be other considerations, such as data caps. Finally, keep in mind that every update to a stack is a single, often large, transfer operation. Layers in a stack are never randomly modified; they are replaced in their entirety. Most of the settings offer ways to reduce how often slices are transferred, which reduces the amount of I/O that has to be performed to keep the stack in sync with the archive. They can also curtail how current the stack is. (A sketch of these rules appears at the end of this post.)

Copy After [ 8] [Days]

The "Copy after" setting prohibits newly captured layers from being appended to the stack for a period of time. As an example, let's take an archive that captures files hourly during the day. At the end of a week, there could be hundreds of small layers. But let's say you have a rolling merge action that, after a week, merges those hourly layers into a single layer for each day.
Setting a "Copy after 8 days" preference would wait for those small layers to be merged, before uploading a single (day) layer to the stack. Rather than uploading hundreds of small layers, only to replace then again in a week. The disadvantage is that your stack will always be at least 7 days behind your archive. Preserve for [ 2] [Months] Update [saves at least 30%] The next two settings reduce how often slices are updated. Once a slice has been added, these settings delay it from being replaced with a merged slice either by time or relative size. The "Preserve for" setting prevents it from being replaced by an update for a specific period of time. Using the previous example, it would allow hourly layers to be added, but not replace them until the those layers had been merged into far more compact daily, weekly, or even monthly layers. The stack is always up-to-date, but the work needed to consolidate the layers is postponed so it's more efficient. Similarly, the "Update saves" setting blocks a slice from being updated unless it is expected to reduce the size of the slice in the stack by a certain percentage. Using a setting of "Update saves at least 30%," individual hourly stack layers may persist in the stack indefinitely. But once the archive has an equivalent layer that is at least 30% smaller, all of those stack layers will be replaced. Once replaced, it won't be replaced again until there's a newly merged layer that is, again, at least 30% smaller than the first replacement, and so on. Save Forever... Finally, note that there's an Update never option. With this setting, slices in the stack are never (automatically) replaced. This would be for archives that capture sensitive document changes which, for legal or regulatory reasons, must be maintained forever. You can continue to merge and reuse space in the archive for new captures, but the layers and stack detail will never be reduced and will grow forever. Well, not *forever*. Years from now you may reach the limits of the archive index and have to start a new stack; but it's effectively forever. Cascade strategies On one extreme, if your stack container is a document on a fast local drive or server, set all of these settings to their minimum and run the cascade action regularly. At the other extreme, you may delay the transfer of new slices to the stack for a month or more, choosing only to upload weekly layers. In between these extremes, consider your bandwidth costs and your storage costs. If storage is expensive but I/O is cheap, set the "Update always" to keep your stack size a its minimum. If I/O is expensive or slow, set "Update saves at least 50%" or more, to minimize how often slices are replaced. Manual Updates These settings only apply to automatic cascade actions. These are the slices that QRecall will suggest when you open the slices window, and the set of slices the cascade action will copy automatically. You can always open the slices window and update whatever slices you choose (within the rules, of course).
|
|
|
The first step in creating a stack is deciding where it will reside. This location will be the stack's container. Open an archive and choose File > New > Stack from the menu. In the new stack dialog, choose the type of container you want to use. In the initial beta, there are three container types:

• A filesystem document
• An Amazon Web Services S3 bucket
• A bucket on a third-party S3-compatible service

Choose the type via the pop-up menu. If you selected a document container, there's nothing more to configure at this point. If you selected an AWS S3 container, you'll need to supply your server region, account identifier, the account's secret key (which Amazon supplied to you when you created the bucket), and finally the name of the bucket. If you're using a third-party S3 container, you'll need to supply much of the same information as an AWS account, along with an endpoint. If your service uses the standard Amazon convention for an S3 URL (i.e. https://s3.some-region-1.some-server.com/), then enter the region (some-region-1) and the server's domain name (some-server.com) in the two fields. If not, select the "endpoint" option and enter the entire base URL (i.e. "https://data-collection.oceanic-sci.princeton.edu/"). A sketch of how these fields form the endpoint URL appears at the end of this post.

That was the hard part. Now click the Next button to pick the name of your stack. In the case of a document stack, this will determine both its name and location. For S3 stacks, it will query the bucket, list the names of any existing stacks, and then prompt you to name the new one. Click the Create button and, if all goes well, a new stack will be created and connected to your archive.

In the archive toolbar, click the Stacks view in the right sidebar, and then expand the details of your new stack; or click the action button to the right of the stack's title and choose "Edit in Separate Window...". A description of the stack's container is displayed. You can assign your stack a custom name that will appear in actions and the log. There are also additional settings, which may vary depending on the type of container. You are now ready to seed your stack with layers from the archive. See the post "Updating a Stack" to get started, and for an explanation of those other settings.
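If it helps to see the convention spelled out, here's a small sketch of how those two fields presumably combine into an endpoint URL. The helper function is hypothetical, not part of QRecall.

```swift
import Foundation

// Hypothetical helper: build the endpoint from the two dialog fields when a
// service follows the standard Amazon URL convention.
func s3Endpoint(region: String, domain: String) -> URL? {
    URL(string: "https://s3.\(region).\(domain)/")
}

// s3Endpoint(region: "some-region-1", domain: "some-server.com")
// yields https://s3.some-region-1.some-server.com/
// A service that doesn't follow this convention needs its full base URL
// entered via the "endpoint" option instead.
```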
|
|
|
A stack is an efficient copy of your archive, stored nearby or far away. Stacks are designed to be incrementally synchronized with your archive as it changes: new layers are added, and merged layers are replaced (or not, your choice). A stack can later be used to restore detail, repair layers, or even recover the entire archive.

Stack data is stored in a Stack Container. Currently there are (fundamentally) two container types: a writable filesystem or an Amazon Web Services (AWS) S3 cloud object storage bucket. Additional container types are being developed.

A stack is bound to a single archive and can only exchange data with that one archive. An archive can have multiple stacks, but all of the stacks will be copies of the same archive. For example, an archive of extremely sensitive data might have a local (filesystem) stack for immediate duplication of captured files, along with a second long-term stack maintained on a remote cloud data service that only gets updated once or twice a month.

Stacks are organized into layers, just like your archive. A key concept of stacks is a "slice." A slice is a set of layers in the archive that is paired with an equivalent set of layers in the stack. In this context, "equivalent" means those layers represent the same set of captured changes. In the beginning, when the layers in the archive are first transferred to the stack, there's a one-to-one pairing of layers, with each layer pair forming a single slice. As layers are merged, however, this relationship changes. For example, if you merge layers 3-5 in the archive, the new single (merged) layer now represents the same captured data as layers 3-5 in the stack.

In that example, the merged layer in the archive can replace layers 3-5 in the stack. When that happens, the stack size is reduced by the same amount of storage recovered in the archive. But the reverse is also possible: the merged archive layer can be replaced with stack layers 3-5, restoring the intermediate changes that were lost during the merge. Only whole slices can be transferred between the archive and the stack. This is an important concept, and the reason should be obvious: the single merged layer in the archive represents the same set of item changes as layers 3-5 in the stack, just without the intermediate detail.

Moving on... The post on creating stacks will show you how to seed the stack with the initial set of layers. Then read the post on updating stacks, and then the post on restoring slices and archives.
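To picture the pairing, here's a minimal sketch of a slice as a data structure. The type and fields are hypothetical, not QRecall's implementation.

```swift
// A slice pairs a run of archive layers with the equivalent run of stack
// layers; both sides represent the same set of captured changes.
struct Slice {
    var archiveLayers: ClosedRange<Int>  // e.g. the single merged layer
    var stackLayers: ClosedRange<Int>    // e.g. the original layers 3-5
}

// Transfers always move whole slices: an update replaces stack layers 3-5
// with the merged archive layer, and a recovery replaces the merged layer
// with stack layers 3-5, restoring the intermediate detail.
let example = Slice(archiveLayers: 3...3, stackLayers: 3...5)
```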
|
|
|
Not a problem. Those are files that can't be captured because of Mojave's security. Future versions of QRecall will exclude the ~/Library/Metadata/CoreSpotlight folder. But for now, simply add it to the archive's exclude list.
|
|
|
Bruce Giles wrote: Do you have any recommendations for what drives work best with QRecall?
As a rule, a good backup drive is one that's reliable, but not necessarily fast or expensive.
Buy a drive that comes with the longest manufacturer warranty you can find; 5 years is good. Drive manufacturers know how long their drives last.
You want decent throughput, but throughput is usually limited by your interface/connection, not the drive. So first pay attention to the speed of the interface (USB 3.0 vs. USB 3.1, which is twice as fast). Only if the interface is substantially faster than the transfer speed of the drive do you need to start worrying about the drive.
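To put rough numbers on that: USB 2.0 tops out around 480 Mb/s (about 40 MB/s in practice), which throttles any modern drive, while USB 3.0 (5 Gb/s) and USB 3.1 (10 Gb/s) are comfortably faster than the roughly 150-250 MB/s a typical hard drive can sustain. At that point the drive, not the interface, becomes the limit.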
Physical hard drives are, by far, the most economical storage. SSD is a huge waste of money. (Although I regularly use SSDs for testing new solutions, because they are so blindingly fast, and I have to say it's a sweet solution if you have money to burn.)
Backups will not benefit from a lot of on-drive cache, so don't waste money buying a big cache (or hybrid HD+SSD).
Backup drives generally don't benefit from fast seek times, but QRecall can be harder on them in that respect. Don't get a drive with a glacial seek time (15ms), but don't spend extra money getting a really fast seek time (4ms) either.
I don't like/trust SMR (shingled magnetic recording); I think it's largely a marketing gimmick to slap the absolute cheapest possible price on a drive. For a (tiny) bit of savings, it lowers the performance and reliability.
Are there any particular manufacturers or models you recommend, and are there ones to avoid?
I have leaned heavily towards Western Digital Red drives for my various RAID enclosures and have generally been happy with their reliability and performance. I have soured a bit on WD after their SMR Red debacle. But I would still recommend non-SMR Red drives for archival storage.
Do you prefer ready-to-use external drives (includes drive and case as a unit), or do you prefer to buy a bare drive and put it in an external case yourself?
I always buy the enclosure separately and install the drives I want. But when shopping for RAID units, the sellers often let you populate them with whatever drives they sell, so this is often a matter of semantics. Most of my QRecall needs are for performance and torture testing, so I work almost exclusively with fast external enclosures via Thunderbolt or eSATA. The performance of most NAS devices is slower due to the extra layers of interface (network) and complexity (usually an embedded Linux server). But sometimes the convenience overshadows those concerns. In fact, several of my personal computers have been backing up to a (gasp) AirPort Extreme base station for years. Clearly a case of convenience over performance.
I'm looking for something in the 2 terabyte range and I would prioritize reliability and speed over cost.
That's a single-drive solution these days, so you have lots of choices. (Which, itself, can be a curse.)
|
|
|
David Cretney wrote: should I capture it to the same archive on an external drive or should I create a new QR Archive?
Your choice! As you observed, QRecall's data de-duplication means that capturing the new volume to your existing archive won't make it much bigger, because most of the new volume is a copy of what's already been captured. The question is really: do you want to keep the history of changes from your old system? If so, then keep the archive. You'll end up with an archive that has two volumes, which you can then stitch into a single volume with the Combine Items command, keeping an unbroken history of your files. If you really don't need that history, proceeding with a new archive makes for a clean start. Enjoy!
|
|
|
Steven Haver wrote: 1) What would have been the ideal way to rename the archive? What I did: I renamed the archive in Finder so that the name would reflect both of our computers and then set up her machine to capture to it using the capture assistant. This all worked fine on her machine, but when I got back to my computer QRecall was lost. No big deal I thought, I can just update the name in each action. Who knows, it might have even updated itself to the new name had I not gone on to make mistakes 2 and 3!
You mostly had it right. Simply rename or move the archive. On any system that already has an action for that archive, just open any one of those actions. If the bookmark is able to locate the renamed/relocated archive, the update should happen automatically. If it doesn't, use the action icon next to the archive name to choose the renamed/relocated archive. When you close the action, QRecall will prompt you to save the changes, and will then find any other actions that use the same archive and ask if you want to update them as well.
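Incidentally, the bookmark mechanism is a stock macOS facility: a bookmark resolves a file by identity rather than by path, which is presumably how an action can re-find a renamed archive. A small Foundation sketch, with a hypothetical archive path:

```swift
import Foundation

let archive = URL(fileURLWithPath: "/Volumes/Backup/MyArchive.quanta")

do {
    // Create a bookmark for the archive at its original location.
    let bookmark = try archive.bookmarkData()

    // ...the archive is renamed or moved in the Finder...

    // Resolving the bookmark finds the file by identity, not by path.
    var isStale = false
    let found = try URL(resolvingBookmarkData: bookmark,
                        bookmarkDataIsStale: &isStale)
    print(found.path)  // the archive's current location
} catch {
    print("Bookmark could not be resolved: \(error)")
}
```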
2) I noticed in her capture settings that all of the excluded paths from my computer were listed. I thought "Oh, none of those paths will apply on her machine, so I'll just delete all of those." I wasn't expecting that deleting them there would also delete them from my capture settings on my machine. (I think I remember what all of them were, so no big deal).
Excluded items stored in the archive (settings) are global to all owners. As a rule, items on your startup volume will apply logically to every user's startup volume. So feel free to combine all of the items you want excluded, from every system, into one long list. And it's OK to have items in the list that might not exist everywhere. So you can exclude "~/Documents/Final Reports" from your captures; on a different system that doesn't have a "Final Reports" folder, that excluded item is simply ignored.

Now if you have a "Documents/Final Reports" folder that you want to exclude from your captures, but your friend has an identical "Documents/Final Reports" folder that they need captured, then you have to switch to using local exclusions. This also applies to any global exclude items that are getting confused. The easiest local exclusions can be set up using the Capture Preferences service. Make sure "Exclude Items Excluded by Capture Preferences" is turned on, then select the "Final Reports" folder on your system. In the Finder, either choose Finder > Services > QRecall Capture Preferences, or right-click and choose Services > QRecall Capture Preferences. In the preferences window, exclude the item. Now your "Final Reports" folder will get excluded, but your friend's won't (because the folder on their system wasn't excluded). See QRecall Help > Guide > Preferences > Capture Preferences.
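Here's a minimal sketch of that matching behavior, where global exclude items expand per-user and silently skip paths that don't exist. The function and logic are illustrative assumptions, not QRecall's code.

```swift
import Foundation

// Expand "~" per-owner and test whether a path falls under any exclude item.
func isExcluded(_ path: String, globalExcludes: [String], home: String) -> Bool {
    globalExcludes
        .map { $0.replacingOccurrences(of: "~", with: home) }
        .contains { path == $0 || path.hasPrefix($0 + "/") }
}

// An exclude item that doesn't exist on a given system simply never matches,
// so stale or foreign entries in the shared list are harmless.
let excludes = ["~/Documents/Final Reports"]
print(isExcluded("/Users/you/Documents/Final Reports/q3.pdf",
                 globalExcludes: excludes, home: "/Users/you"))     // true
print(isExcluded("/Users/friend/Documents/Other",
                 globalExcludes: excludes, home: "/Users/friend"))  // false
```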
3) To take things further, after a successful capture of her machine to the archive I patted myself on the back and then decided that I would prefer for her machine to have a more stealthy installation. I only want her machine to capture. I don't ever want her machine to do any of the merge, compact, or verify actions; I'll do those from my computer. So I deleted the Merge, Compact, and Verify scheduled actions. But when I got back to my computer, I had also messed up all of my actions.
That doesn't make much sense (to me). Actions are stored locally in your QRecall preferences, and any change made to actions on one system shouldn't have any effect on another system. But other than that, you had the right idea. Regular maintenance only needs to be performed by one system, usually the machine with the fastest I/O or resources to spare.
So my main questions are: How should I have done it? What would have been the ideal way to rename an archive and then add a new owner to capture to an existing archive?
You pretty much had the right idea, maybe with a few minor missteps, but nothing that couldn't be easily sorted.
Also, do you have any recommendations for settings I should select for the most stealthy install possible? I would love for her to almost never see QRecall or even know that it's there. It will just happily capture each night and then get out of the way.
My recommendation would be to go into the QRecall preferences > Monitor and set the following:
Turn off Show at Startup
Turn off Show when actions start
Turn ON Show in dock [Only when active]
Turn off Show in menu bar

It's not completely stealthy, but I suggest leaving Show in dock on. If you turn it off, you'll need to arrange it so that a capture isn't running should someone shut down or restart the system. With the dock icon turned off, QRecall can't interrupt the shutdown until the capture is finished, which means there's a slim chance that the shutdown will kill the capture action before it can stop, meaning a slim chance of damaging the archive, which would require a repair. But that's admittedly a slim chance of a slim chance, so you're free to ignore that recommendation.
Is there an easy way for me to check the last time she captured from my machine?
Not specifically. The status window will tell you the last time any item was captured to the archive (which includes both of your systems). Other than that, you might simply open the archive from time to time and see if new layers have been added for the other owner. You might also consider leaving the "Action incomplete" notification turned on. This will post a notification if a capture fails. It isn't completely stealthy, but it seems like a good idea to know if that's happening regularly.
|
|
|
Follow up for the forum: James eventually performed a manual uninstall of QRecall. These steps can be found in the Help under QRecall Help > Guide > Advanced > Uninstall in "the hard way" sidebar. He then restarted and reinstalled, and everything is running normally now.
|
|
|
James, the problem seems to be that none of QRecall's components are running. QRecall does a lot of its work in the background and with elevated privileges. To accomplish this, it installs a number of system daemons, user agents, and a privileged service. These all appear to have been installed, but none are running. Either macOS has failed to start them (unlikely) or something is preventing their execution. My money is on anti-virus software.

The first step is to restart your system and see if they start on their own. Launch Activity Monitor and search for QRecall. At a minimum, you should see QRecallScheduler and QRecallMonitor in the process list. If you have anti-virus software, that could be the problem. It's always a bit of voodoo figuring out how to get around it, but in general try white-listing the QRecall or QRecallHelper process(es). You may also need to white-list the contents of the archive document, or disable scanning and quarantining on the volume containing the archive.
|
|
|
We're not aware of any compatibility issue with Big Sur that would prevent you from creating an archive. Let's start by getting a diagnostic report. Launch the QRecall application, then go to Help > Send Report.
|
|
|
The fix for this problem is in QRecall version 2.2.12. Available at a "Check for updates..." menu near you.
|
|
|
QRecall will attempt to auto-mount, and eject, the volume containing the archive. The capture assumes items to be captured are already reachable, and will simply ignore any that aren't. There are two (simple) strategies for capturing intermittently available items. One is to create a capture action triggered by a "Capture Item Volume Connects" event schedule. This will trigger the capture action to start as soon as the volume containing the item to capture is mounted. The other is to schedule the action to run at regular intervals, but use a schedule condition to "Hold if no capture items". That will queue the action to start, but suspend it until the item(s) appear to be online. Both of these assume that the items are occasionally mounted by you or some other process. (If they're never mounted, it's assumed there are no changes to capture.) If you have volumes that never get mounted, you'll need to do some work. The simplest would be to write a prolog script that mounts the items to capture, then attach that script to the capture action. When the capture action starts, it will run the script, which should mount the item, and then let the capture proceed.
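A prolog script can be anything executable; yours could be two lines of shell just as easily. As a hedged sketch, here's a tiny Swift command-line version that mounts a volume with diskutil. The volume name is hypothetical.

```swift
import Foundation

// Hypothetical prolog script: mount the volume holding the capture items
// before the capture runs. Exits non-zero if the mount fails.
let volumeName = "OffsiteData"  // assumption: the volume to mount

let task = Process()
task.executableURL = URL(fileURLWithPath: "/usr/sbin/diskutil")
task.arguments = ["mount", volumeName]

do {
    try task.run()
    task.waitUntilExit()
    exit(task.terminationStatus)
} catch {
    print("Failed to launch diskutil: \(error)")
    exit(1)
}
```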
|
|
|