The first step in creating a stack is deciding where it will reside. This location will be the stack's container. Open an archive and choose File > New > Stack from the menu. In the new stack dialog, choose the type of container you want to use. In the initial beta, there are three container types:

- A filesystem document
- An Amazon Web Services S3 bucket
- A bucket on a third-party S3-compatible service

Choose the type via the pop-up menu.

If you selected a document container, there's nothing more to configure at this point.

If you selected an AWS S3 container, you'll need to supply your server region, account identifier, the account's secret key (which Amazon supplied to you when you created the bucket), and finally the name of the bucket.

If you're using a third-party S3 container, you'll need to supply much of the same information as an AWS account, along with an endpoint. If your service uses the standard Amazon convention for an S3 URL (i.e. https://s3.some-region-1.some-server.com/), enter the region (some-region-1) and the server's domain name (some-server.com) in the two fields. If not, select the "endpoint" option and enter the entire base URL (i.e. "https://data-collection.oceanic-sci.princeton.edu/"). (See the sketch at the end of this post for how these fields map onto a standard S3 client.)

That was the hard part. Now click the Next button to pick the name of your stack. In the case of a document stack, this will determine both its name and location. For S3 stacks, it will query the bucket, list the names of any existing stacks, and then prompt you to name the new one.

Click the Create button and, if all goes well, a new stack will be created and connected to your archive.

In the archive toolbar, click the Stacks view in the right sidebar, and then expand the details of your new stack; or click the action button to the right of the stack's title and choose "Edit in Separate Window…". A description of the stack's container is displayed. You can assign your stack a custom name that will appear in actions and the log. There are also additional settings, which may vary depending on the type of container.

You are now ready to seed your stack with layers from the archive. See the post "Updating a Stack" to get started, and for an explanation of those other settings.
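For readers who know S3, here's roughly how those dialog fields correspond to the parameters of a standard S3 client. This is only an illustration using the third-party boto3 library (QRecall does not use boto3), and the region, keys, and names below are hypothetical placeholders.

    import boto3

    # AWS S3 container: region, account identifier, and secret key.
    aws_client = boto3.client(
        "s3",
        region_name="us-east-1",              # the "server region" field
        aws_access_key_id="AKIA-EXAMPLE",     # the "account identifier" field
        aws_secret_access_key="EXAMPLE-KEY",  # the account's secret key
    )

    # Third-party S3-compatible container: the same credentials plus an
    # endpoint. With the standard Amazon URL convention, the endpoint is
    # composed from the region and domain fields; otherwise it's the full
    # base URL entered in the "endpoint" option.
    compat_client = boto3.client(
        "s3",
        region_name="some-region-1",
        aws_access_key_id="AKIA-EXAMPLE",
        aws_secret_access_key="EXAMPLE-KEY",
        endpoint_url="https://s3.some-region-1.some-server.com/",
    )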
|
|
|
A stack is an efficient copy of your archive, stored nearby or far away. Stacks are designed to be incrementally synchronized with your archive as it changes: new layers are added, and merged layers are replaced (or not, your choice). A stack can later be used to restore detail, repair layers, or even recover the entire archive.

Stack data is stored in a Stack Container. Currently there are (fundamentally) two container types: a writable filesystem or an Amazon Web Services (AWS) S3 cloud object storage bucket. Additional container types are being developed.

A stack is bound to a single archive and can only exchange data with that one archive. An archive can have multiple stacks, but all of those stacks will be copies of the same archive. For example, an archive of extremely sensitive data might have a local (filesystem) stack for immediate duplication of captured files, along with a second long-term stack maintained on a remote cloud data service that only gets updated once or twice a month.

Stacks are organized into layers, just like your archive. A key concept of stacks is a "slice." A slice is a set of layers in the archive that is paired with an equivalent set of layers in the stack. In this context, "equivalent" means those layers represent the same set of captured changes. In the beginning, when the layers in the archive are first transferred to the stack, there's a one-to-one pairing of layers, with each layer pair forming a single slice.

As layers are merged, however, this relationship changes. For example, if you merge layers 31-33 in the archive, the new single (merged) layer, call it layer 30, now represents the same captured data as layers 31-33 in the stack. Layer 30 in the archive can then replace layers 31-33 in the stack. When that happens, the stack size is reduced by the same amount of storage recovered in the archive. But the reverse is also possible: archive layer 30 can be replaced with stack layers 31-33, restoring the intermediate changes that were lost in the merge.

Only whole slices can be transferred between the archive and the stack. This is an important concept, and the reason should be obvious: the single merged layer (30) in the archive represents the same set of item changes as layers 31-33 in the stack, just without the intermediate detail. (The sketch at the end of this post illustrates the idea.)

Moving on... The post on creating stacks will show you how to seed the stack with the initial set of layers. Then read the post on updating stacks, and then the post on restoring slices and archives.
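To make the slice bookkeeping concrete, here's a toy model in Python. This is not QRecall's actual data structure, just a sketch of the pairing described above.

    from dataclasses import dataclass

    @dataclass
    class Slice:
        archive_layers: list  # layer numbers in the archive
        stack_layers: list    # equivalent layer numbers in the stack

    # Initially, layers pair one-to-one; each pair is its own slice.
    slices = [Slice([n], [n]) for n in range(1, 34)]

    # After merging archive layers 31-33 into the single layer 30 from the
    # example above, those three slices collapse into one. Only this whole
    # slice can be transferred: layer 30 can replace stack layers 31-33 to
    # recover space, or stack layers 31-33 can replace layer 30 to restore
    # the intermediate detail.
    slices[30:33] = [Slice(archive_layers=[30], stack_layers=[31, 32, 33])]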
|
|
|
Not a problem. Those are files that can't be captured because of Mojave's security. Future versions of QRecall will exclude the ~/Library/Metadata/CoreSpotlight folder. But for now, simply add it to the archive's exclude list.
|
|
|
Bruce Giles wrote:Do you have any recommendations for what drives work best with QRecall?
As a rule, a good backup drive is one that's reliable, but not necessarily fast or expensive.
Buy a drive that comes with the longest manufacturer warranty you can find; 5 years is good. Drive manufacturers know how long their drives last.
You want decent throughput, but throughput is usually limited by your interface/connection, not the drive. So first pay attention to the speed of the interface (for example, USB 3.1 Gen 2's 10 Gbit/s is twice as fast as USB 3.0's 5 Gbit/s). Only if the interface is substantially faster than the transfer speed of the drive do you need to start worrying about the drive.
Physical hard drives are, by far, the most economical storage. SSD is a huge waste of money. (Although I regularly use SSDs for testing new solutions, because they are so blindingly fast, and I have to say it's a sweet solution if you have money to burn.)
Backups will not benefit from a lot of on-drive cache, so don't waste money buying a big cache (or hybrid HD+SSD).
Backup drives generally don't benefit from fast seek times, but QRecall can be harder on them in that respect. Don't get a drive with a glacial seek time (15ms), but don't spend extra money getting a really fast seek time (4ms) either.
I don't like/trust SMR (shingled magnetic recording); I think it's largely a marketing gimmick to slap the absolute cheapest possible price on a drive. For a (tiny) bit of savings, it lowers performance and reliability.
Are there any particular manufacturers or models you recommend, and are there ones to avoid?
I have leaned heavily towards Western Digital Red drives for my various RAID enclosures and have generally been happy with their reliability and performance. I have soured a bit on WD after their SMR Red debacle, but I would still recommend non-SMR Red drives for archival storage.
Do you prefer ready-to-use external drives (includes drive and case as a unit), or do you prefer to buy a bare drive and put it in an external case yourself?
I always buy the enclosure separately and install the drives I want. But when shopping for RAID units, the sellers often let you populate them with whatever drives they sell, so this is often a matter of semantics. Most of my QRecall needs are for performance and torture testing, so I work almost exclusively with fast external enclosures via Thunderbolt or eSATA. The performance of most NAS devices is slower due to the extra layers of interface (network) and complexity (usually an embedded Linux server). But sometimes the convenience overshadows those concerns. In fact, several of my personal computers have been backing up to a (gasp) AirPort Extreme base station for years. Clearly a case of convenience over performance.
I'm looking for something in the 2 terabyte range and I would prioritize reliability and speed over cost.
That's a single-drive solution these days, so you have lots of choices. (Which, itself, can be a curse.)
|
|
|
David Cretney wrote:should I capture it to the same archive on an external drive or should I create a new QR Archive?
Your choice! As you observed, QRecall's data de-duplication means that capturing the new volume to your existing archive won't make it much bigger, because most of the new volume is a copy of what's already been captured. The question is really: do you want to keep the history of changes from your old system? If so, keep the archive. You'll end up with an archive containing two volumes, which you can stitch into a single volume with the Combine Items command so you keep an unbroken history of your files. If you really don't need that history, proceeding with a new archive makes for a clean start. Enjoy!
|
|
|
Steven Haver wrote:1) What would have been the ideal way to rename the archive? What I did: I renamed the archive in Finder so that the name would reflect both of our computers and then set up her machine to capture to it using the capture assistant. This all worked fine on her machine, but when I got back to my computer QRecall was lost. No big deal I thought, I can just update the name in each action. Who knows, it might have even updated itself to the new name had I not gone on to make mistakes 2 and 3!
You, mostly, had it right. Simply rename or move the archive. On any system that already has an action for that archive, just open any one action. If the bookmark is able to locate the renamed/relocated archive, this should happen automatically. If it doesn't, use the action icon next to the archive name to choose the renamed/relocated archive. When you close the action, QRecall will prompt you to save the changes, then find any other actions that use the same archive and ask if you want to update them as well.
2) I noticed in her capture settings that all of the excluded paths from my computer were listed. I thought "Oh, none of those paths will apply on her machine, so I'll just delete all of those." I wasn't expecting that deleting them there would also delete them from my capture settings on my machine. (I think I remember what all of them were, so no big deal).
Excluded items stored in the archive (settings) are global to all owners. As a rule, items on your startup volume will apply logically to every user's startup volume. So feel free to combine all of the items you want excluded, from every system, into one long list. And it's OK to have an item in the list that doesn't exist everywhere. For example, you can exclude "~/Documents/Final Reports" from your captures; on a different system that doesn't have a "Final Reports" folder, that excluded item is simply ignored.

Now if you have a "Documents/Final Reports" folder that you want to exclude from your captures, but your friend has an identical "Documents/Final Reports" folder that they need captured, then you have to switch to using local exclusions. The same goes for any global excluded items that are causing conflicts.

The easiest local exclusions can be set up using the Capture Preferences service. Make sure "Exclude Items Excluded by Capture Preferences" is turned on, then select the "Final Reports" folder on your system. In the Finder, either choose Finder > Services > QRecall Capture Preferences, or right-click and choose Services > QRecall Capture Preferences. In the preferences window, exclude the item. Now your "Final Reports" folder will get excluded, but your friend's won't (because the folder on their system wasn't excluded). See QRecall Help > Guide > Preferences > Capture Preferences.
3) To take things further, after a successful capture of her machine to the archive I patted myself on the back and then decided that I would prefer for her machine to have a more stealthy installation. I only want her machine to capture. I don't ever want her machine to do any of the merge, compact, or verify actions?I'll do those from my computer. So I deleted the Merge, Compact, and Verify scheduled actions. But when I got back to my computer, I had also messed up all of my actions.
That doesn't make much sense (to me). Actions are stored locally in your QRecall preferences, and any change made to actions on one system shouldn't have any effect on another system. But other than that, you had the right idea. Regular maintenance only needs to be performed by one system, usually the machine with the fastest I/O or resources to spare.
So my main questions are: How should I have done it? What would have been the ideal way to rename an archive and then add a new owner to capture to an existing archive?
You pretty much had the right idea, maybe with a few minor missteps, but nothing that couldn't be easily sorted.
Also, do you have any recommendations for settings I should select for the most stealthy install possible? I would love for her to almost never see QRecall or even know that it's there. It will just happily capture each night and then get out of the way.
My recommendation would be to go into the QRecall preferences > Monitor and set the following:
Turn off Show at Startup
Turn off Show when actions start
Turn ON Show in dock [Only when active]
Turn off Show in menu bar

It's not completely stealthy, but I suggest leaving Show in dock on. If you turn it off, you'll need to arrange it so that a capture isn't running should someone shut down or restart the system. With the dock icon turned off, QRecall can't interrupt the shutdown until the capture is finished, which means there's a slim chance that the shutdown will kill the capture action before it can stop, meaning a slim chance of damaging the archive, which would require a repair. But that's admittedly a slim chance of a slim chance, so you're free to ignore that recommendation.
Is there an easy way for me to check the last time she captured from my machine?
Not specifically. The status window will tell you the last time any item was captured to the archive (which includes both of your systems). Other than that, you might simply open the archive from time to time and see if new layers have been added for the other owner. You might also consider leaving the "Action incomplete" notification turned on. This will post a notification if a capture fails. It isn't completely stealthy, but it seems like a good idea to know if failures are happening regularly.
|
|
|
Follow-up for the forum: James eventually performed a manual uninstall of QRecall. These steps can be found in the Help under QRecall Help > Guide > Advanced > Uninstall, in the "the hard way" sidebar. He then restarted and reinstalled, and everything is running normally now.
|
|
|
James,

The problem seems to be that none of QRecall's components are running. QRecall does a lot of its work in the background and with elevated privileges. To accomplish this, it installs a number of system daemons, user agents, and a privileged service. These all appear to have been installed, but none are running. Either macOS has failed to start them (unlikely) or something is preventing their execution. My money is on anti-virus software.

The first step is to restart your system and see if they start on their own. Launch the Activity Monitor and search for QRecall. At a minimum, you should see QRecallScheduler and QRecallMonitor in the process list. (The sketch below shows a Terminal alternative.)

If you have anti-virus software, that could be the problem. It's always a bit of voodoo figuring out how to get around it, but in general try white-listing the QRecall or QRecallHelper process(es). You may also need to white-list the contents of the archive document, or disable scanning and quarantining on the volume containing the archive.
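If Terminal is handier than Activity Monitor, a quick process check might look like this. It's just a convenience sketch, not part of QRecall; it shells out to the standard pgrep tool.

    import subprocess

    # List running processes whose names contain "QRecall" (e.g. the
    # QRecallScheduler and QRecallMonitor agents mentioned above).
    result = subprocess.run(["pgrep", "-l", "QRecall"],
                            capture_output=True, text=True)
    print(result.stdout or "No QRecall processes are running")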
|
|
|
We're not aware of any compatibility issue with Big Sur that would prevent you from creating an archive. Let's start by getting a diagnostic report. Launch the QRecall application, then go to Help > Send Report.
|
|
|
The fix for this problem is in QRecall version 2.2.12. Available at a "Check for updates..." menu near you.
|
|
|
QRecall will attempt to auto-mount, and eject, the volume containing the archive. The capture action, however, assumes the items to be captured are already reachable, and will simply ignore any that aren't.

There are two (simple) strategies for capturing intermittently available items. One is to create a capture action triggered by a "Capture Item Volume Connects" event schedule. This will trigger the capture action to start as soon as the volume containing the item to capture is mounted. The alternative is to schedule the action to run at regular intervals, but use a "Hold if no capture items" schedule condition. That will queue the action to start, but suspend it until the item(s) appear to be online.

Both of these assume that the items are occasionally mounted by you or some other process. (If they're never mounted, it's assumed there are no changes to capture.) If you have volumes that never get mounted, you'll need to do some work. The simplest would be to write a prolog script that mounts the items to capture, then attach that script to the capture action. When the capture action starts, it will run the script, which should mount the item, and then let the capture proceed.
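Here's a minimal sketch of such a prolog script, assuming the script can be any executable and letting macOS's diskutil do the actual mounting. The device identifier is hypothetical; find yours with `diskutil list`.

    #!/usr/bin/env python3
    import subprocess
    import sys

    # Hypothetical device identifier for the volume holding the capture
    # items; use `diskutil list` to find the right one on your system.
    DEVICE = "disk2s1"

    # Mount the volume before the capture runs. Exiting non-zero reports
    # the failure back to the action that ran this script.
    result = subprocess.run(["diskutil", "mount", DEVICE])
    sys.exit(result.returncode)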
|
|
|
Chris Caouette wrote:Part 1: What I am wondering is I'd love to have an Archive called "Archived Projects" but those projects can come from different drives. Rather than place them onto another drive first I'd prefer, if possible, to place them into a QRecall archive under a single folder.
QRecall can capture any arbitrary set of items (files, folders, entire volumes) in a single capture action. The files and folders don't need to be on the same volume. You can also just do them manually if you don't need to automatically capture changes. When you're ready to archive a project, just drag it into the archive's browser window, or use the Services extension right in the Finder: control/right-click and choose Services > Capture to QRecall Archive.

If you want them to be neatly organized in a single folder of the archive, that will take a little fiddling, because the archive is organized around where the original item actually was. But you could create a folder just for archiving projects, and when you're ready to archive one, put a copy in there and capture it. Once captured, you can discard that copy of the project.
Part 2: I have been keeping duplicate backups (backup of a backup). If I use Carbon Copy Cloner to keep copies of QRecall archives in another location, when the QRecall archive changes, will it just be an entirely one large new file that gets copied, or would CCC potentially only see the changes? Not sure you could make an Archive of an Archive. The example here is I have an archive called "Projects" on a networked drive. If I want to make an overnight update of that on another drive is it going to copy the entire file or just parts?
It will copy the entire archive (or at least most of it). QRecall 3.0 is adding intelligent and efficient archives of archives, which avoids those problems and is specifically designed to play nice with file synchronization. Look for a beta next month.
|
|
|
My suspicion is that actions running as the user are updating the status just fine, but the capture action, which runs as root, is not. I'll dig into this and see what the problem is. I'll post again here when there's a fix.
|
|
|
Steve,
Thanks for sending a diagnostic report.
Your "Status Window" command is greyed out because you don't have any archive status files in your ~/Library/Preferences/QRecall/Status folder. These status files are (or should be) automatically updated by capture, compact, verify, and various repair actions.
But it's obvious you've been running capture actions, yet this folder is still empty. I (obviously) suspect a bug. I'm just not sure where, as this mechanism has been working for many years.
When a capture ends, it writes a summary of the archive's status to a small "status.plist" file inside the archive package, and writes a copy inside QRecall's Preferences folder. So let's start by teasing out what is, and what isn't, happening when it should.
Let me first ask you to locate a recently captured archive in the Finder, right/control-click on it, and choose the "Show Package Contents" command. Inside the package, see if you have a "status.plist" file and let me know what its last modified date and time are.
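If Terminal is easier, a quick sketch along these lines checks both places. The archive path is a hypothetical example; the Status folder path is the one mentioned above.

    import glob
    import os
    import time

    # Hypothetical archive location; substitute your own.
    archive_status = "/Volumes/Backups/MyArchive.quanta/status.plist"
    if os.path.exists(archive_status):
        print("archive copy modified:",
              time.ctime(os.path.getmtime(archive_status)))
    else:
        print("no status.plist inside the archive package")

    # The copies QRecall keeps in your preferences folder.
    for f in glob.glob(os.path.expanduser(
            "~/Library/Preferences/QRecall/Status/*")):
        print(f, "modified:", time.ctime(os.path.getmtime(f)))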
Secondly, run a verify on any of your archives. Afterwards, see if the status window becomes enabled.
Finally, send another diagnostic report after you've done those things. That should give me enough information to figure out what the problem is.
Thanks!
|
|
|
Steven,
In general, you've got it in the right order: capture, merge, compact, verify.
Merge has nothing to do until something is captured and time has passed. Compact has nothing to do until something is merged (or manually deleted). Verify always verifies everything, but will be slightly faster after a merge and compact because there's less data to verify.
Now, if your goal is for your computer to spend less time maintaining the archive, there are some techniques for running the big actions less often.
Using less aggressive actions
Compact is optimized so that if there's nothing to remove (post-merge), the action finishes immediately. Similarly, if the compact determines that only a small amount of space would be recovered, it also stops immediately and reevaluates the next time it's run.
So the capture and compact actions can be scheduled to run every time the volume connects, and they'll waste hardly any time.
Merge, like compact, will first determine if there's anything to do and, if not, will stop almost as soon as it starts. So you can schedule the merge to run repeatedly, but if you configure it so the first tier to merge is broad (say, week tiers), there will only be layers to merge once a week. If the first tier to merge is month tiers, it will kick in once a month. And, subsequently, the compact won't have anything to do until something is merged.
Using schedule conditions
Finally, you probably don't need your verify to run daily (unless you're paranoid). Once or twice a month should be sufficient. To accomplish that, you'll need to do a little schedule hacking.
Change the verify action's schedule to something less frequent (say, the 2nd and 4th Friday of the month). Now the verify will start at a specific time. But what if your archive isn't connected? If it isn't, the action will simply fail with an "archive not found" error.
The trick is to add a "hold if no archive" condition to the action's schedule. This tells the action to start, but if the archive isn't online it will simply pause until it is. In your activity monitor you'll see "Verify My Archive ... waiting for archive". When you do finally connect it, the waiting action will immediately resume, perform its action, and go away until the next interval.
You can use this same technique on the other actions too, as an alternative to scheduling them on every volume connect.
Bonus tip: if the entire drive is dedicated to QRecall archives, you might consider adding a "cancel if free space ..." condition to your compact action's schedule, which will prevent the compact from running at all until the free space on your drive drops below a particular threshold. This usually prevents the compact action from doing anything for months, and then (boom!) it performs one massive, highly efficient compact and is done for another season.
Let us all know if that helps.