Norbert Karls wrote:Forgive the length of this post; I tried to keep it short but failed.
Norbert, the length is not a problem. It's great to get detailed suggestions. Let me just address your suggestions one at a time:
• Prioritize Capture Actions: As the QRecallHelpers can be quite memory-hungry, I had to limit QRecall to three concurrent actions. At times this can lead to a long queue of waiting actions, especially in the mornings after connecting to the backup drive and network location. I would greatly appreciate being able to tell QRecall to reserve one action slot exclusively for Capture Actions. A Compact or Verify Action can easily run for an hour, and there may be a handful of them waiting. Actual data snapshot Captures shouldn't need to wait for the backup application's internal data management.
It's interesting that you would suggest this. I've actually played around with an experimental scheduling feature that I think might provide what you want. The setting would limit the schedule from running more than one action of the same "type" (capture, compact, verify) at a time. So if three verify actions were queued up, only one would run at a time, even if you allowed more than one concurrent action. Having said that, I would suggest you experiment with fewer concurrent actions, and most definitely limit the number of actions that use the same destination volume to 1. Trying to run more than one capture action that's reading from the same physical drive can cause "thrashing". This can slow down the actions (and your system) and end up taking longer than if you ran the actions one at a time. It's worth testing.
• Arbitrary timeouts for event-triggered actions would be great: ... Being able to use not just 1-5 min but maybe 10-20 min timeouts would make things much more relaxed.
Support for new types of event schedules is already in the works. It should show up in the next (major) version.
• Non-Capture Actions should be able to be triggered by Capture Actions:
This, and a couple of similar features, are on the to-do list but still need some design work. The chance of making it into the next major version is about 50%.
• Multiple event triggers should be supported: When I need to mount a .sparsebundle image as well as connect an external backup drive for a Capture Action to work, it would be nice to be able to configure the action accordingly. Right now the action just fails and is re-scheduled, which doesn't do any harm, but it would be "more right" to watch both trigger events.
To some degree, they already are. You can schedule an action to start when the archive volume connects, and then add a "hold while no capture items available" condition that will wait until the capture source item appears. (You can also configure the reverse: start when the source mounts, then wait for the archive volume.) If you're looking for something else, write back and explain the situation in more detail.
• This isn't actually a scheduling feature but just let me add it here: It would be pretty cool to have one Capture Action use several backup targets instead of configuring several Capture Actions for the same data source.
Sorry, but QRecall just doesn't work that way, and it wouldn't result in any efficiency gain. When people suggest this, I think what they really want is a "waterfall backup" or "cascading archive" feature, which is still on the drawing board.
Norbert Karls wrote:I do hope I came across friendly and polite, yet understandable.
I'm delighted that you posted, and I hope I've addressed some of your issues.
Jonathan Edwards wrote:Can there be an option to keep deleted items permanently? I currently have this set to '999 months'. I sometimes delete files from the source knowing I safely have a captured backup. I don't want to risk permanently deleting those items in the archive through Merge and Compact actions.
Welcome, Jonathan. I do not have plans to add a setting to never remove deleted items. The original idea for the feature was not to keep deleted items forever (for fear that they would eventually consume all available archive space) but to provide a safety net for those "oops!" moments. That said, 999 months is as close to "forever" as you're likely to get. It works out to 83¼ years. So my suggestion would be to set a reminder for the year 2095 and either make a copy of your current archive and start a new one, or request this feature again.
Do you have any plans to enable high resolution icon and font support for Retina Macs please?
It's on the to-do list. Hopefully, in the next major release.
Keep up the great work!
We're on it!
Bruce Giles wrote:Anyway, I think you need to do something better with the progress bar in that situation. It's really confusing (or misleading) when it goes all the way to 100% and then just sits there for tens of minutes. If the "locating unused space" step is not something you can quantify in order to show progress with the bar, then how about switching to an indeterminate ("barber pole") type progress bar instead?
It's a bug. It's supposed to display an indeterminate progress indicator. In the version that introduced the "Keep deleted items" option, a new phase was added to the compact action. After the initial archive analysis, QRecall walks through the recent layers looking for deleted items to expunge from the archive. This is associated with a progress bar that advances as each layer is processed. What's supposed to happen next is that the "Looking for unused space" phase takes over, changes the status message, and switches the progress bar to indeterminate. What really happens is that the progress bar remains at 100%, left over from the "erasing deleted items" phase. I've fixed the code, and it will eventually show up in a future version of QRecall.
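For anyone curious what that fix looks like in AppKit terms, here's a rough sketch in Swift (purely illustrative, not QRecall's actual code; the class and property names are invented):

```swift
import AppKit

// Illustrative sketch: switching a determinate progress bar to an
// indeterminate ("barber pole") one when entering a phase whose length
// can't be quantified.
final class CompactProgress {
    let statusField: NSTextField
    let progressBar: NSProgressIndicator

    init(statusField: NSTextField, progressBar: NSProgressIndicator) {
        self.statusField = statusField
        self.progressBar = progressBar
    }

    // Determinate phase: advance the bar as each layer is processed.
    func updateErasePhase(layersDone: Int, layerCount: Int) {
        statusField.stringValue = "Erasing deleted items"
        progressBar.isIndeterminate = false
        progressBar.maxValue = Double(layerCount)
        progressBar.doubleValue = Double(layersDone)
    }

    // Unbounded phase: the bug left the bar pinned at 100%; the fix is to
    // switch to an indeterminate indicator and animate it.
    func beginFreeSpacePhase() {
        statusField.stringValue = "Looking for unused space"
        progressBar.isIndeterminate = true
        progressBar.startAnimation(nil)
    }
}
```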
David, You're doing it right. As a test, I started up a Parallels VM, created a file in the VM, shut it down, restored from an earlier version, and started it up again. The guest OS booted up just fine, and the file was gone. If you want to verify that the recall is happening correctly, open the VM package and check the dates/sizes. In the Finder, select the VM package, right/control+click on it, and choose the Show Package Contents command. Perform the Restore/Recall and check the dates and sizes of the restored items to make sure they are what you expect. I will note that there are pitfalls associated with trying to capture VM packages while the virtual machine is running/paused. Like a database, the VM software may have information cached in memory that doesn't get captured, so the recalled VM package is incomplete. I suspect, however, that this is not the case here. If everything looks correct, then my guess is that there's some other issue related to your virtualization software.
Adrian Chapman wrote:This is very interesting, but no matter what I do I still see all layers.
Try switching to icon or time view and drilling down into a folder that contains items that change infrequently. The layers displayed/highlighted in the layers pane are those that contain any items visible in the browser pane. If you are in column or outline view, this will tend to encompass a lot of items.
Gary K. Griffey wrote:My question is...why is this layer considered "unrelated to the items in the browser"?
Excellent question! When you are browsing items in an archive, QRecall identifies which layers contain data about those captured items. The other layers it either dims or hides (depending on your view settings). As an example, let's say you capture your home folder in a new archive, creating a single layer. Now let's say you make some changes in iTunes (modifying items in your Music folder) and capture your home folder again. When you open the archive and view your home folder, you'll see two layers. If you navigate into your Music folder, you'll see two layers. If, however, you navigate into your Documents folder, QRecall will only display one layer. The second layer will be dimmed or hidden. That's because the second layer only captured items in your home and Music folders; there are no items from your Documents folder captured in the second layer. By trimming the layers pane down to just the layers that are significant, it's easier and more productive to browse the layers. You only see the dates where the items you are looking at were captured, and if you shade layers to browse or recall earlier items, you'll be working with just those layers where something changed.
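To make that concrete, here's a tiny sketch in Swift (purely illustrative; the Layer type and capturedPaths property are invented stand-ins for QRecall's internals) of the kind of filtering described above:

```swift
import Foundation

// A stand-in for an archive layer: when it was captured and which items it holds.
struct Layer {
    let date: Date
    let capturedPaths: Set<String>   // paths of items captured in this layer
}

// A layer is "related" to the folder being browsed if it captured anything inside it.
func relatedLayers(in layers: [Layer], browsedFolder: String) -> [Layer] {
    layers.filter { layer in
        layer.capturedPaths.contains { $0.hasPrefix(browsedFolder + "/") }
    }
}

// In the example above, the second capture only touched ~/Music, so browsing
// ~/Documents leaves only the first layer highlighted; the second is dimmed or hidden.
```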
Gary K. Griffey wrote:Greetings James, I have been using QRecall for many months now without any issues. This morning, I ran a capture action to an existing archive. The capture ended ok...and stated that it had captured a small amount of data (185 KB). When I subsequently opened the archive, however, no layer was created for today's capture. The last layer shows 02/03/2013.
With your archive open, see if you have a View > Show All Layers command. If so, select it. This menu command toggles between two modes: Show All Layers and Hide Unrelated Layers. The normal mode (Hide Unrelated Layers) only displays those layers that contain data relevant to the captured items in the browser pane. As you browse through your items, the layers display will change. If you select View > Show All Layers, the layer pane will show all of the layers in the archive regardless of what items you are looking at. Layers unrelated to the items in the browser are dimmed.
Johannes wrote:I did some backup maintenance and deleted some items from different archives. Several times after that it happened that some layers became "incomplete". What does this mean?
An "incomplete" layer is one created by a capture action that was interrupted (canceled, stopped, ...) before it could finish.
What must I do now?
Nothing. Just be aware that when recalling folders from that layer, a folder may contain a mixture of captured and uncaptured files. This is normally not a big deal, but it can be in certain circumstances. For example, restoring a partially captured application bundle or system folder could have profoundly unpleasant results. An incomplete folder/layer will go away when it's merged with a subsequent layer that has fully captured all of the same items.
The archive size remained the same (I deleted about 50 GB).
The physical space occupied by deleted data is not recovered until it is reused (during a subsequent capture) or by compacting the archive.
The status window tells me that 52 GB are unused. But the Inspector window says free = undetermined. Shouldn't it give the same number?
The status window shows an estimated/likely value for free space in an archive. Some actions, like verify, update this estimate. The inspector window shows the actual amount, but that figure is much more difficult, and time consuming, to calculate, and it is not always practical to have it up-to-date.
Is it normal that a simple delete of an item is not enough to recover space but I have to compact too?
Reclaiming the physical space occupied by unused archive records is a time-consuming and expensive task, so the process is deferred for as long as possible, to be as efficient as possible. Here's how QRecall handles recovery of unused archive space:
- Determining which archive records are unused is an expensive task (many minutes). To avoid doing this work unnecessarily, it is only performed (by default) by the compact action following a merge or delete action. Once the unused space is identified, the unused records are erased (written with zeros), the actual amount of free space appears in the archive's inspector window, and subsequent capture actions will reuse this space to capture new data, avoiding the need to expand the archive.
- Recovering the disk space occupied by empty archive records is an extremely expensive task (several hours). To avoid doing this unnecessarily, it is only done when the compact action sees that the empty space in the archive exceeds a minimum threshold, which defaults to 4%. By using a minimum threshold, the compact action can avoid physically compacting the archive in most cases.
Under normal use, an archive rarely contains more than 4% unused space. When data is purged (through a merge or delete action), the next compact identifies and erases that space, and the next capture action will reuse it. So the empty space tends to fluctuate up and down, but QRecall can often avoid completely compacting the archive for months (even years).
The recommended procedure is to schedule a compact action to run automatically but infrequently, say once a week or once a month. The compact action will free and erase any unused data, calculate the unused space, and (possibly) physically compact the archive. Note: the free-space threshold is ignored if you perform the compact directly (from the Archive menu). Running the Archive > Compact command will always compact the archive, in full.
There are two advanced settings that affect this behavior. Setting QRCaptureFreeSpaceSweep to "true" will cause every capture action that follows a delete or merge to sweep the free space in the archive. This means that captures will immediately erase and reuse any unused space in an archive, and the free space value for the archive will be much more up-to-date, but it will radically slow down the capture actions that have to perform this calculation. The other setting you can change is kCompactFreeSpaceMinimumRatio. You can set a ratio (between 0.0 and 0.9) to indicate the fraction of free space (0% to 90%) that must be in the archive before a compact action will physically compact it. Setting it to zero causes the compact action to always compact the archive (not recommended).
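If it helps to see the threshold logic in code, here's a minimal sketch in Swift (purely illustrative, not QRecall's actual implementation; the function and parameter names are invented):

```swift
import Foundation

// Illustrative sketch only: the decision rule described above.
// A scheduled compact erases newly unused records, then physically compacts
// the archive only when the unused fraction exceeds a minimum threshold
// (4% by default, adjustable via the kCompactFreeSpaceMinimumRatio setting).
func shouldPhysicallyCompact(unusedBytes: Int64,
                             archiveBytes: Int64,
                             minimumRatio: Double = 0.04) -> Bool {
    guard archiveBytes > 0 else { return false }
    let unusedFraction = Double(unusedBytes) / Double(archiveBytes)
    return unusedFraction >= minimumRatio
}

// Note: a compact run directly from the Archive menu ignores this threshold
// and always compacts in full.
```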
Johannes wrote:
Simple test: Exclude an item in Time Machine, perform a capture, and see if that item is in the archive.
It is. So the API seems to ignore the user-defined exclusions.
Time Machine is apparently using its own mechanism for excluding specific items from the backup, just as QRecall does (the "excluded items" option of a capture action). But both will honor a "do not backup" flag set for an individual item through the backup API.
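For developers who want to set that per-item "do not back up" flag programmatically, here's a small sketch in Swift using Foundation's URL resource keys (illustrative only; as far as I know this toggles the same per-item flag, but that's my assumption, and it is separate from the exclusion list kept in Time Machine's own preferences):

```swift
import Foundation

// Mark (or unmark) an individual item with the per-item backup-exclusion flag.
func setExcludedFromBackup(_ excluded: Bool, atPath path: String) throws {
    var url = URL(fileURLWithPath: path)
    var values = URLResourceValues()
    values.isExcludedFromBackup = excluded
    try url.setResourceValues(values)
}

// Check whether an item carries the per-item backup-exclusion flag.
func isExcludedFromBackup(atPath path: String) throws -> Bool {
    let url = URL(fileURLWithPath: path)
    let values = try url.resourceValues(forKeys: [.isExcludedFromBackupKey])
    return values.isExcludedFromBackup ?? false
}
```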
That means: I have to define (and maintain) several exclusion lists
Currently, "excluded items" in QRecall are on a per-action basis. So if you have multiple actions that capture that same set of files, the items to exclude should be equivalent.
Johannes wrote:Does QRecall exclude only the items pre-defined by Apple or are all those items that I added manually in the Time Machine preferences option excluded as well?
When Apple introduced Time Machine, they also added an API that allows developers to designate specific items/paths that should not be backed up. These preferences are stored in a database in the operating system. QRecall (when "Items excluded by Time Machine" is checked) and Time Machine both use this database to determine which items should be captured/copied. Simple test: Exclude an item in Time Machine, perform a capture, and see if that item is in the archive.
Johannes wrote:Can both versions be used at the same time (not on the same archive of course)?
Only one version of QRecall can be installed at a time.
That would be crucial for testing.
For testing, I maintain a separate set of archives, actions, and preferences which allows me to switch back and forth between the two versions.
Nobody would risk a long-term archive for a beta test.
Exactly. A great amount of testing has gone into making sure the new version correctly uses the data in existing archives, but not having a regression path is flirting with disaster.
Johannes wrote:Maybe I miss something very obvious, but somehow my verify actions don't do what I want them to do. For each of my archives I have created a verify action based on an interval that fits the archive's capture frequency. For example, my short time archive that captures certain folders every hour has its verify action set to every 7 days starting from Jun 16th 2012 1:00 (the date I defined it, I guess). Now the status window shows a yellow LED complaining that it had not been verified for 13 weeks. The archive file is available all the time. Why does the scheduled action not run?
I can't tell you why your verify action isn't running. You should check for schedule conditions that might prevent it from running. If you don't find anything obvious, send a diagnostic report; there may be a clue in the log file. An interval schedule has an "anchor" date and time. The date/time is arbitrary, but all run times are then calculated starting from that date/time. So if you choose to run an action every three hours, and choose an anchor date of 1/1/2000 5:35, actions will run at 2:35, 5:35, 8:35, 11:35, and so on. If you want to run something once a week, I suggest choosing a daily schedule and unchecking every day except one (say, Sunday). Daily schedules are much more intuitive, and they adjust for local time (changes in time zone, daylight saving time, etc.).
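If it helps to see the anchor arithmetic in code, here's a minimal sketch in Swift (purely illustrative; the function and parameter names are my own, not QRecall's):

```swift
import Foundation

// Derive the next run time from an interval schedule's "anchor" date.
// All run times fall on whole multiples of the interval measured from the anchor.
func nextRunTime(after now: Date, anchor: Date, interval: TimeInterval) -> Date {
    let elapsed = now.timeIntervalSince(anchor)
    // Whole intervals that have passed since the anchor (works even when the
    // anchor lies in the future, i.e. elapsed is negative).
    let intervalsPassed = floor(elapsed / interval)
    return anchor.addingTimeInterval((intervalsPassed + 1) * interval)
}

// Example: a 3-hour interval anchored at 1/1/2000 5:35 yields runs at
// 2:35, 5:35, 8:35, 11:35, and so on, matching the example above.
```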
What would be the best way to schedule a verify after each capture? (This would be useful for my off-site backups to make sure they are healthy before I bring them out of the house.)
Give it the same schedule as your capture action, but schedule it to run one minute later. As soon as the capture is finished, QRecall will start a verify.
Any news when the command-line tool (or any other means to trigger an action by script) might come up? (That's really the only thing I miss at the moment)
Good news and bad news. The good news is that the rewrite of QRecall that uses Apple's approved APIs is now working, and has passed all capture/restore regression tests. The bad news is that the new version won't be forward compatible with previous versions of QRecall. So once you've captured something with the new version, you can no longer use the old version to access your archives. This means that the new version must be rock solid before I would even consider releasing it as a beta, and we're just not there yet. But stay tuned. I hope to get a new beta cycle started within the next couple of months.
David Ramsey wrote:Still, I never thought I'd see the day when I missed C++...
If you're missing C++, things must be pretty bad. 
David,
That's a really interesting issue.
The problem is that the QRecall scheduler calculates, and remembers, a number of times associated with each action. Because some of your actions ran in the future, it's now waiting for a time beyond that date to occur, which is a couple of months away.
The first thing to try is to get the scheduler to recalculate the run time for the action. Open the action in QRecall, make a minor change (e.g., click a checkbox), set it back, and save the action. Any change to an action will cause the scheduler to reevaluate its schedule, which should reset its next run time.
If that doesn't do the trick, here's the hammer:
- Hold down the Option+Shift keys and choose QRecall > Quit and Uninstall.
- Locate and trash the ~/Library/Preferences/com.qrecall.scheduler.plist file(s).
- Launch QRecall and reauthorize it.
This will clear the scheduler's state and cause it to forget everything it knows about when actions were run and when they should run next.
Adrian, Your archive is fine. The reindex fixed the issue and it hasn't recurred. Looking through your logs, the "Negative hash map does not agree with constructed map" message occurred once on 2013-01-01 00:02. At 2013-01-01 20:02 you reindexed the archive, and the capture and verify actions performed immediately after that show that the inconsistency was repaired (no message). The dozen or so actions that followed all ran flawlessly and didn't indicate any problems. I've always suspected that this issue is due to a rounding error when dealing with deferred hash table entries. The consequences of having a negative hash map that is slightly out of sync with the hash table are so trivial, however, that I've never dedicated a lot of time to tracking this down. Should this issue reoccur, you can choose to ignore it or reindex the archive, whichever you're feeling like that day.
Adrian Chapman wrote:It's just a warning I think - a little blue exclamation mark rather than a slightly more scary triangular yellow exclamation mark.
Tip: Hover your cursor over the icon and a tool tip will explain the type/severity of the message.