Robby Phlig wrote:An internal program error occurred. Please report this problem to the developer.
Robby, please send a diagnostic report: Help > Send Report... We'll review the details and get back to you.
|
|
|
I think the confusion is due to how the "Show deleted items" view option works, which is different from the "Keep deleted items" archive setting. I'll try to clarify using the following example:

(file F created) Layer 1: captured a folder containing file F.
(file F deleted) Layer 2: recaptured the same folder; this time there is no file F.

The archive now has two layers. Layer 1 contains file F, layer 2 doesn't. File F is now a "deleted item". Opening and viewing this archive (with all layers shown) will not display file F, because in layer 2 the file had been deleted. If you then turn on the "Show deleted items" view option, file F will appear as a deleted item (hashed background) because it previously existed in an earlier layer (1) but doesn't exist in the latest layer (2). At this point the file still exists as a regular file in layer 1.

Now, merge the two layers with the "Keep deleted items" setting turned off. When layers are merged, only the most recent version of each item is retained, and files that have been deleted are removed from the archive. After the merge there is only one layer, and there is no file F. If you turn on the "Show deleted items" view option, you still won't see file F because it isn't there.

If the "Keep deleted items" setting had, instead, been set prior to merging the two layers, then the file wouldn't have been removed from the archive. Assuming the latest version was within the "Keep" time period, instead of expunging the file from the archive QRecall would preserve the latest version in a special file record marked as having been deleted.

In the second scenario, if you once again open the archive you don't see file F. Even though the file is stored in the layer, it isn't shown because it's a "deleted" file. If you then turn on the "Show deleted items" view option, the file once again appears as a deleted item in the browser.

So the "Show deleted items" view feature will show items captured in earlier layers that have subsequently been deleted. It will also show the special "deleted" items preserved in a layer via the "Keep deleted items" setting.

The deleted items preserved in a merged layer are finally deleted for good under two circumstances. If the layer is merged again, and this time the deleted items it contains are older than their keep-until date, the items are deleted just as they would have been if the "Keep deleted items until" setting were off. The compact action also sweeps merged layers looking for preserved items that are now too old to keep.

Now, back to your questions:
pirem71 wrote:That is a "deleted file" in my understanding, and it is not created by any merging actions but simply derived from a change in the filesystem.
Correct.
According to my understanding, such a file (after a grace period set in the archive preferences, in my case 200 months) will be completely erased in any case by a compacting action/command, even if I don't merge any of the archive layers. Based on your reply it seems that, if I don't merge layers N and N+1, the files that I deleted in the Finder between the captures remain forever in the archive (as they belong to layer N, which is still there).
That's correct. A "deleted item" means "doesn't exist in a recent layer, but may still exist in a previous layer."
The confusion arises from the term "deleted" being used with two different meanings: what will be erased by a compacting action are the "unallocated" items (the layer they belong to no longer exists), while items "deleted" from the filesystem are preserved.
It shouldn't cause any confusion if you remember that the "Keep deleted items for at least" setting is not a "keep deleted items exactly XXX days" setting; it's a "keep deleted items for at least XXX days" setting. It doesn't set a fixed window in which all deleted items are removed from the archive. It simply creates a grace period in which items that would have normally been deleted by a merge action are, instead, not deleted. Once an item is older than that grace period, the normal item removal rules apply.
May I suggest using two different terms for the two file states? The Help is not so clear about "deleted items" either.
I don't think there needs to be two terms, because it shouldn't matter. From the user's perspective there are only "deleted" items. How long they exist in the archive is a combination of how the layers are merged and the "keep deleted items ..." setting. If you want to see the difference in the browser, there's an advanced setting (QRBrowserDistinguishDeletedItems) that will reveal the difference between a deleted item captured in an earlier layer and one being preserved in a merged layer. See the 1.2.0b46 release notes for an explanation and a screenshot showing the difference. You can also disable the compact action's erasure of expired "deleted" items by changing the QRCompactErasesExpiredDeletedItems advanced setting to false.
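Something like the following in Terminal should flip those two settings, assuming QRecall reads them from the standard user defaults system (I'm writing the defaults domain from memory, so double-check it against the advanced settings documentation before using it):
# NOTE: the defaults domain "com.qrecall.client" is an assumption; verify it first.
# Distinguish earlier-layer deleted items from merge-preserved ones in the browser:
defaults write com.qrecall.client QRBrowserDistinguishDeletedItems -bool true
# Stop the compact action from erasing expired deleted items:
defaults write com.qrecall.client QRCompactErasesExpiredDeletedItems -bool false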
|
|
|
pirem71 wrote:As for the recovered space, I think I never used the Merge command/action on those archives (in which I want to preserve everything that has been captured), but I'm not 110% sure.
If you never use the merge command, then it doesn't matter what the "keep deleted items" setting is set to. Deleted items are created during a merge action when a later layer does not contain an item that previously existed in an earlier layer. In this situation, the merge would normally purge the older items, leaving no trace of the item in the archive. The "Keep deleted items" option overrides this and preserves the last captured item as a special "deleted" item in the merged layer. But if you never merge layers, then the archive retains every item ever captured and can't have any deleted items.
On the other hand, I'm sure I used the Archive > Delete Items... command; maybe the files are deleted but the space is reclaimed only during subsequent compacting.
Space from any deleted data (items or layers) may be reused by subsequent capture actions and is always recovered by the compact command. Reusing empty space during a capture is never a perfect fit, so there are always scores of small empty "holes" in the archive. The compact action moves records around so there are no empty spaces, then truncates the archive to release the unused space back to the filesystem.
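If you're curious how much space a compact actually hands back to the filesystem, you can measure the archive package before and after; something like this, with your own archive's path substituted:
# Size of the archive package before compacting (path is just an example)
du -sh "/Volumes/Backups/Teacup.quanta"
# ... run the compact action in QRecall ...
du -sh "/Volumes/Backups/Teacup.quanta"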
Could that be the reason for the "Erased deleted items from layer ..." messages?
No, but if you've only deleted items and have never merged, then that's kind of a special situation that I'll have to look into.
|
|
|
pirem71 wrote:On a few of my archives I set the preference "Keep deleted items for at least" to a very high value (200 months) so as to force QRecall to keep all deleted files and not erase them during a compact activity.
Hmm, maybe I should add a "forever" to that option.
I launched a Compact command on a couple of those archives and found in the log that: 1. there are a lot of entries "Erased deleted items from layer xx"; 2. QRecall was able to free and recover some space (a few hundred MB out of 20-30 GB). How is it possible to free and recover space if no files are to be deleted?
The "Erased deleted items from layer ..." message is surprising. This should only get logged with the calculated save-until date (midnight of today - keep deleted items duration setting) is later than the capture date of the layer. I would suggest double checking the settings for that archive. If the setting really is 200 months, please send a diagnostic report and I'll look into it. Free space in the archive is also caused by merge actions.
By the way, besides Compacting archives, do you suggest any other maintenance activity on a disk storing Qrecall archives?
I consider the occasional verify to be essential. Compacting is optional (unless you run out of disk space), but periodically desirable for performance reasons.
Will defrag be useless or even dangerous?
Defrag won't help much. It will theoretically help the verify action (which reads the archive from beginning to end), but unless the archive is badly fragmented (which is unlikely) I doubt the difference would be enough to measure. A QRecall archive is its own kind of filesystem, which the compact action optimizes and defragments. Having that filesystem hosted by another (possibly fragmented) filesystem won't make much difference. As for the danger, older defragmentation applications could cause irreparable harm if they were interrupted while defragmenting the volume. Modern ones, however, go to great lengths to protect the volume from being corrupted if the defragmentation process is interrupted. So as long as you're using a recent version of something like iDefrag, I think it's perfectly safe to defragment your archive's volume. I admit to having done it several times myself.
|
|
|
Johannes wrote:Would it be possible to tell a QRecall Action to capture to two Archives?
This has been suggested before, and is on the list of features to consider. The issue, for me, is the utility of such a feature vs. the amount of damage it would do to the interface. There are a number of different ways of doing this. In it's simplest form, an action would act on multiple archives. But the only benefit would be to reduce the number of actions you have to maintain. That's not a horrific burden, and the way it works now doesn't actually prevent you from accomplishing anything, which makes the merits of such a feature rather low. On the other hand, there are more sophisticated ways in which this could be implemented (logical archive groups, fail-over lists, etc.) but those add a lot of complexity and ambiguity to the interface. So much so, that it's hard to imagine that the benefits would outweigh the potential confusion that would arise or the amount of code that would be required to implement it. So as of now, I'm throughly ambivalent about adding such a feature.
|
|
|
Footnote: Notice that I don't use rsync's --compress option. That's because both of these archives have compression turned on, and trying to compress the data twice would just slow things down. If the archives didn't use compression, I'd probably add the --compress switch to the rsync command.
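If you did need it, it's a one-switch change; the upload command from the script below would become something like:
# Same upload, but compressing data in transit (only worthwhile for uncompressed archives)
rsync --recursive --delete --times --verbose --compress \
    "/Volumes/Local Backups/Important Stuff.quanta" "username@my.server.com:/Volumes/Backups"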
|
|
|
Gary K. Griffey wrote:I will take another look at rsync. I have tested with it before...but never had much luck getting it to perform block changes only...possibly, my options settings were incorrect.
Here's the setup I use. My server is reachable via the Internet and has ssh (Remote Login) enabled. This allows rsync to securely connect to the server and transparently start the rsync server on the remote system. My two systems have also been configured with a pair of public/private security keys so that my local system can connect to the server via ssh without requiring a password. The server approves the connection by matching the locally stored key with its signature, which has been saved in the ~/.ssh/authorized_keys file on the server. You'll either need to do this with your systems, or explore the various options for opening an ssh connection with a password. If you're security conscious, the latter isn't recommended, as it usually means your password will be exposed as plain text somewhere.
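If you haven't set up key-based ssh before, the one-time setup is roughly this (substitute your own account and host; on older systems ssh-copy-id may be missing, in which case append the public key to the server's ~/.ssh/authorized_keys by hand):
# Create a key pair on the local system (accept the defaults when prompted)
ssh-keygen -t rsa
# Install the public key in the server's ~/.ssh/authorized_keys
ssh-copy-id username@my.server.com
# Verify that ssh now connects without prompting for a password
ssh username@my.server.com true
With that setup out of the way, I run the following shell script every morning on my local system.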
#!/bin/bash
# Mountain Lion: run caffeinate in the background so the system doesn't sleep
caffeinate &
# Download updates to the daily backup on Red King to the local drive
server='username@my.server.com'
backups='/Volumes/Backups'
archive='Teacup.quanta'
echo "$(date): Downloading ${archive} from ${server}"
rsync --recursive --delete --times --verbose "${server}:${backups}/${archive}" '/Volumes/Local Backups/Server'
echo ""
archive='Important Stuff.quanta'
echo "$(date): Uploading ${archive} to ${server}"
rsync --recursive --delete --times --verbose "/Volumes/Local Backups/${archive}" "${server}:${backups}"
echo ""
echo "$(date): Synchronization complete"
# kill the caffeinate process; we're done now
kill %1; sleep 1
echo "==========================================================="
This script first downloads any changes from the server's backup archive (Teacup) to a mirrored copy on my local system. This allows me to have a local copy of my server's daily backups on hand for recovery. Next, it uploads an archive named "Important Stuff" to the server. I routinely archive important projects I'm working on to this archive. This maintains an off-site copy of all of my important documents. It's this second use of rsync that you'd be interested in. The script is launched with a crontab entry:
0 3 * * * /Users/james/bin/dailyarchivesync.sh >> '/Volumes/Local Backups/Server/rsync.log'
The redirection lets me maintain a log of all upload/download activity for later review.
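(If you haven't used cron before, the entry is installed and verified with the standard crontab tool:)
# Open the user's crontab in $EDITOR and paste in the entry above
crontab -e
# List the installed entries to verify
crontab -l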
|
|
|
Gary K. Griffey wrote:Greetings James... 1) A new QRecall archive is created at site "A" that includes one or more of these virtual disks. Even with the best compression and highest shifted quanta options...this archive could easily reach 100 GB in size.
Aside: Shifted quanta detection rarely helps with virtual machine files (which are essentially disk images) because disk images are organized into blocks, so data can only "shift" to another block boundary. Shifted quanta detection looks for shifts of data at the byte level. I'm not saying cranking up shifted quanta detection won't make any difference, but it will add a lot of overhead for very little gain. Now, back to our regularly scheduled program...
2) This archive is then copied to an external drive...that is physically relocated to site "B". Now, the problem statement. When the archive at site "A" is subsequently updated with a recapture operation of the virtual disks...I need a way to "refresh" site B's copy of the archive...preferably via a network connection....just the delta data would be transmitted, of course...then the archive at site "B" would somehow be "patched", for lack of a better term, and thus be a mirror of site "A"'s archive. I have used many diff/patch utilities in the past to mimic this functionality...but they were all geared toward single binary files...not a package file/database, as QRecall uses.
A package is just a collection of files. Synchronize all of the files in the archive's package, and you've sync'd the archive.

Gary, I do this using rsync. I have a couple of archives that I maintain off-site mirrors of. I do this by running rsync once a day/week to mirror the changes made to one archive with another. Since QRecall adds the blocks of data that changed, and rsync only transmits the blocks of data that have changed, the two are almost a perfect match. The end result is that rsync will transmit pretty much just the new data captured by QRecall and not much else.

To do this over a network requires (a) one system with rsync and a second system running an rsync server or ssh, (b) a fairly fast network connection, (c) a generous period of time in which neither system is updating its archive, and (d) more free disk space than the size of the archive.

I schedule rsync (via cron) to run at 3AM every morning. It uploads an archive of my important projects (30GB) to my server and then downloads the running backup of my server (175GB) to a local drive. This process takes a little over an hour each day and typically ends up transferring about 1GB-1.5GB of data.

One of the drawbacks to this scheme is in how rsync synchronizes files. rsync first makes a copy of a file, patches the copy with the changes, and finally replaces the original with the updated version. For small files this isn't any big deal, but for the main repository.data file (which is 99% of your archive) it means the remote system will first duplicate the entire (100GB) data file. This requires a lot of time, I/O, and disk space, but is really the only downside to this method.

My tip for making this work efficiently is to minimize other changes to the active archive. Schedule your merge actions so they run only occasionally (weekly at most), and compact the archive only rarely. Merging creates lots of small changes throughout the archive, and compacting makes massive changes. The next rsync will be compelled to mirror these changes, which will require a lot of data to be transmitted.

I keep giving this problem a lot of thought, as there are more than a few individuals who want to do this. I have a "cascading" archive feature on the to-do list, which would accomplish exactly what rsync does (although a little more efficiently). But I still don't like the design restrictions, so I keep putting it off.
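If you want a feel for how much data a sync will move before committing to it, rsync can rehearse the transfer without writing anything; something like this, using the same flags as my script (with placeholder paths):
# Preview the transfer: --dry-run changes nothing, --stats reports what would be sent
rsync --dry-run --stats --recursive --delete --times \
    "/Volumes/Local Backups/Important Stuff.quanta" "username@my.server.com:/Volumes/Backups"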
|
|
|
Gavin Macfarlane wrote:It also raises the question of the Show Invisible Items menu item: does that behave in the same way? I would hope not.
QRecall tries to be as WYSIWYG as possible. When you perform a recall you'll recall exactly what you see in the browser window. The exception is invisible files. When recalling a folder, all of the files that folder contains (both visible and invisible) are recalled, irrespective of the view settings.
|
|
|
Gavin Macfarlane wrote:I discovered that items and applications that had previously been deleted had also been restored
Gavin, It sounds like you had the Show Deleted Items option enabled in the View menu. With this option set, any recall action will recall not just the most recent items, but all previously deleted items from earlier layers. When you merged all of your layers, those previously deleted items were permanently removed from the archive, so the Show Deleted Items option now has no effect (since there are no deleted items to show/recall anymore). There's a stern warning in the help about recalling packages and volumes with this option turned on, but I'm considering adding a warning dialog to the recall command. Showing deleted items makes it really easy to find and recall lost documents, but it's not an option you want turned on when recalling packages, applications, or system files.
I also discovered that the restore process was much faster if I erased the target drive
When you recall/restore over existing items, QRecall takes the time to compare what is being recalled with what's already on the volume. QRecall then only modifies what's changed. By erasing the volume first, you saved QRecall that work.
|
|
|
Adam Horne wrote:Any idea when the beta will be ready?
No firm date yet. I was hoping by the end of September, but that ship has already sailed. It turns out that there's a lot of low level file access code to replace.
|
|
|
I agree that the menubar status item icon is a little overloaded. There's only so much information you can impart in a 20x20 icon. If you prefer to see the activity indicator at all times, you can disable the warning/problem indicator in the menubar icon by unchecking the "Show status warnings in icon" option in the QRecall monitor preferences. Then you'll only see the warning/problem summary icon when you drop down the status menu (it appears next to the Status Window item).
|
|
|
Johannes, All good suggestions.
Johannes wrote:I would suggest an additional entry in the context menu of the Status Window: "That's okay". This would set the state of that archive as if a verify/capture had been performed. Alternatively, a submenu with a few entries like 1 day, 1 week would do a similar job.
On the to-do list is a new menu item to ignore a capture/verify status forever or until the next capture/verify. I think that would solve most of the issues people are encountering with the current status indicators.
Another thought: The main issue with the red/yellow indicator in the menu item is that I no longer see the other indicators, like running and paused.
That information is (still) in the QRecall activity window. The activity and status windows serve different purposes. The activity window shows you what is happening right now, while the status window is a (mostly static) overview of the health of your archives. If you're not seeing the activity window, check your monitor preferences in the QRecall application.
As we are on the Status Window context menu, I would find a few more items handy:
- Open Archive in QRecall
- Capture now
- Verify now
Menu commands to capture/verify now were already on the to-do list. They were actually intended for version 1.2, but an architectural conundrum in the code base prevented an easy implementation. In version 1.3, you'll be able to directly run any of your actions from the QRecall status item menu.
Johannes (looking forward to the 1.3 beta and the scripting options)
So am I.
|
|
|
Adam, The error you're getting is, I believe, a bug in Mountain Lion's implementation of FSCopyObj, the core library function that copies files. It's not "technically" a bug, because Apple has deprecated the core library file services API in Mountain Lion, which means that they are no longer maintaining or supporting those functions.

The problem is that after FSCopyObj has duplicated a file, it's supposed to return a reference to the new (duplicate) file. On your new volume, that reference is invalid for some reason and QRecall can't open the file it just duplicated. (This is not a new bug; both Leopard and AppleShare file servers, which include AirPort base stations, have a similar bug.)

Many moons ago I cobbled together a workaround that duplicates the file using other means. If you're interested in trying that code, let me know and I'll build a special version of QRecall 1.2.1 that doesn't use FSCopyObj.

I'm in the process of rewriting all of the low-level file functions to use the (ancient, but now official) BSD API for all filesystem services. If all goes well, a beta that uses the new filesystem APIs will be available for testing soon.
|
|
|
Ralph, You can probably just ignore it. The repair marks a layer as "incomplete" when it can't be absolutely sure that it contains all of the items it originally had. If the damage occurred during a capture that was interrupted, the result is a bunch of duplicate records in the archive. The repair ignores these duplicate records, but finds them suspicious enough to mark the layers containing those records as "incomplete." There shouldn't be any data loss. There's a discussion of this very issue in the help. See Help > Advanced > Compact, and look for the sidebar "If a compact is interrupted..."
|
|
|