Ralph Strauch wrote: After backing up with v1.1.0.12, the archive needed reindexing, so I did that.
The message you got was a little misleading. The long version of that message would read: "The names index has gotten out of synchronization with the archive, because the archive was updated by an older version of the application that doesn't know anything about the names index. The names index has been erased. If you want to take advantage of the performance improvements provided by the names index, the archive will need to be reindexed."
When the process finished I got an error message telling me it had failed. Looking at the log, it appears that the failure occurred early in the process, but it didn't stop the process or notify me till the end.
The problem that stopped the reindex occurred at the very last instant. The warning message at the beginning was because you started a reindex and then stopped it, leaving the index files completely scrambled.
My compressed log file is attached.
Next time try the new Help > Send Report... command.
I guess the thing to do now is to repair the archive, but I'll wait on that till I hear for sure from you.
Hold on for a bit, if you can. The error that you got is extremely puzzling and I'd like to do a little investigation first.
|
Ralph Strauch wrote: I downloaded the beta tonight and opened it from the dmg to take a look at it and see what was different. I thought I'd wait for the release version, though, to make the switch. I ejected the dmg and tried to back up using v1.0.1, and it told me I had the wrong scheduler and needed to quit and restart to reinstall the scheduler.
The problem is that the new version installed its own scheduler as soon as you launched the application. The older version won't uninstall the newer components because that could disrupt the installation for other users.
So then I decided just to install the beta and run from that. Now I'm getting the message that QRecall can't contact the scheduler, and restarting QRecall doesn't help.
I'm not sure what's going on now, but the scheduler is probably only half installed. Try uninstalling QRecall (hold down the Shift and Option keys and choose QRecall > Quit and Uninstall). Then relaunch QRecall and reauthorize it (QRecall > Preferences... > Authorization). If that doesn't solve the problem, toggle the "Start and run actions while logged out" option twice. This will uninstall and reinstall the scheduler. (A restart afterwards won't hurt either.) If you're still having problems using 1.1b12, send a diagnostic report and I'll look into it (Help > Send Report...).
What do I have to do to reinstall the scheduler so I can use the app again?
As a general rule, if you want to bounce around between versions of QRecall, first uninstall the currently active version (QRecall > Quit and Uninstall), then launch the new one.
|
Got it. Thanks.
|
Ralph Strauch wrote: I currently use Apple's Backup program to maintain an offsite backup of important files on .Mac (soon to be MobileMe), and I'm wondering if QRecall might do a better job.
Alas, .Mac/MobileMe and QRecall are not a good mix. There's a long-winded discussion about why in the Online storage thread, but the short version is this: MobileMe is based on WebDAV, which does not play well with QRecall.
Would the processing reduction gained by using smaller archives make it worthwhile to have separate archives for different groups of files, particularly for groups of files that I might want to update less frequently?
Probably not. There's not much overhead associated with the data in an archive that isn't being updated. The overhead of managing multiple archives probably outweighs the overhead of a lot of data that doesn't get updated that often.
Capture compression should reduce the amount of backup data to be transmitted, but would it also impose additional interchanges between QRecall and the archive that would negate these savings?
Compression doesn't involve any additional communication. Overall, less data is transferred to a compressed archive than to an uncompressed one. The penalty for compressing an archive is computational: every time data is accessed from the archive it has to be decompressed, and all new data has to be compressed. That just takes a lot of additional CPU cycles.
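Here's a minimal sketch of that trade-off, using Python's zlib as a stand-in codec (this isn't QRecall's actual compression scheme, and the sample data is made up):

```python
import time
import zlib

# ~10 MB of compressible sample data, standing in for captured file data
data = b"QRecall stores blocks of file data called quanta. " * 200_000

start = time.perf_counter()
compressed = zlib.compress(data, 6)    # the computational penalty happens here
elapsed = time.perf_counter() - start

print(f"original:   {len(data):,} bytes")
print(f"compressed: {len(compressed):,} bytes (fewer bytes to transfer)")
print(f"CPU time:   {elapsed:.2f} s spent compressing")
```

The same trade appears on the read side: every recall pays a decompression cost in exchange for the smaller transfer.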
Would merging and other management processes be inordinately time-consuming over the internet?
Yes. The best strategy is to avoid them, or run them very infrequently (like once a month), if your only connection to the archive is over a slow communications link.
Do you have any other thoughts, pro or con, about using QRecall over the internet?
If the file server you are storing the archive on uses a "real" remote file system protocol (like an AppleShare server), then you'll probably be OK capturing a modest amount of data and scheduling infrequent merge and capture actions. However, if you are using any WebDAV-based service (MobileMe, Amazon S3, AOL, ...), then it's impractical to use QRecall on the remote archive directly. Instead, consider getting a USB thumb drive or small external drive and performing your captures to that. Once a week, or once a month, upload the archive to your off-site data store.
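As a rough sketch of that workflow, assuming the off-site store is an SSH-reachable server (the host and paths below are hypothetical):

```python
import subprocess

# Local QRecall archive on the external drive, and a hypothetical off-site host
ARCHIVE = "/Volumes/Backups/Home.quanta"
REMOTE = "user@offsite.example.com:backups/"

# rsync -a preserves the archive package; --partial lets an interrupted
# upload resume instead of starting over on a slow link
subprocess.run(["rsync", "-a", "--partial", ARCHIVE, REMOTE], check=True)
```

Run something like that once a week (or month) from a scheduled job, after the local captures have finished.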
|
Marc Farnum Rendino wrote: I hadn't heard this before; where can I find more info?
That's a good question. To some degree, I'm guilty of repeating the conventional wisdom of the backup industry. It's still a widely held belief that hard drives have a shelf-life of about 5 years; this has been considered a "fact" for more than a decade, but finding a reliable source to corroborate it is difficult. There's really very little information on the effects of long-term storage of HDDs. Even Google's massive study (Failure Trends in a Large Disk Drive Population) essentially ignores the issue of shelf-life, because no one uses hard drives that way. I'm very confident that hard drives have a much shorter shelf-life than optical or tape media, but knowing exactly how much shorter is problematic. A big confounding issue is that hard drive technologies continue to evolve: drives made 5 years ago might have a shelf-life of 5 years, but drives made today might have a shelf-life of 20 years, or only 2.
|
Alexandra Morgan wrote: Alternatively, could you have QRecall run on the machine to which the backup drive is attached, instead of the machine to be backed up? In other words, the archive is local, and the item to be captured is a network volume.
Yes, but with limits. The operating system doesn't have unfettered access to files on a remote volume. Everything accessed via a networked volume is constrained by the permission and security limits of the file server. Thus, it would be impossible to capture a user's operating system remotely, at least not in a form that would let you restore the volume and have it boot. If you just want to capture regular user documents, it shouldn't be a problem.
Can QRecall automatically mount a network drive to back it up?
QRecall won't automatically mount the volumes that contain items to be captured. However, you could probably hack something up without too much trouble: create any script that causes the remote user's volume to mount (just opening an alias to a folder on that volume would do the trick), then schedule the capture to run after that, or even schedule the capture to run when that volume mounts (see event schedules).
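A minimal sketch of such a pre-capture script, assuming an AFP share (the server and share names are made up); opening the URL asks the Finder to mount it:

```python
import os
import subprocess
import time

SHARE = "afp://server.local/Users"    # hypothetical remote volume to capture
MOUNT_POINT = "/Volumes/Users"

subprocess.run(["open", SHARE], check=True)   # Finder mounts the share

for _ in range(30):                   # wait up to ~30 seconds for the mount
    if os.path.ismount(MOUNT_POINT):
        break
    time.sleep(1)
```

With the volume mounted, a capture action scheduled to run afterwards (or on the volume-mount event) will find its items.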
Context: I'm dealing with a small workgroup of about 10 to 13 desktops, and I have 3 large hard drives to back up onto, each between 500 and 750 GB (because you get the most storage per dollar in this size range).
I deal with this by having each client capture to its own archive. There's not as much space savings, but all clients can capture simultaneously. The clients only have a single capture action; all merge, compact, and verify actions are set up to occur on the computer hosting the archive volumes.
... it appears that a network hiccup on a client machine resulted in a very corrupted archive.
Try the beta version. It has a new automatic repair and recovery feature that should instantly, and transparently, recover from 99% of the problems that would otherwise result in a corrupted archive. This feature was added principally to deal with intermittent network communication failures.
Irk, I so want to get away from Retrospect already!!!
I'm right there with you.
|
W. Darson wrote: ow. So my questions are: what is "closing" an archive
Closing an archive encompasses a number of tasks. Mostly it involves updating the archive's various indexes.
and how long should I expect it to take?
That's a hard one to answer. The short answer is "it takes as long as it takes." In your instance, I suspect that QRecall is preoccupied updating the quanta index. This is the largest index and, since you just captured 47GB of new data, it has a lot of new data to index. Unlike the actual capture phase, updating the quanta index involves millions (in your case, hundreds of millions) of tiny reads and writes.

When capturing to a local hard disk, I'd expect a capture of 47GB to take another 10 minutes or so to index all the new data. I don't know anything about your network or your NAS drive, but how fast it can process these small read and write requests will substantially impact how long it takes to finish the index. For example, if your network+NAS combination were working at 1/20 the speed of a local hard drive, it would take 3-4 hours to finish indexing. If it works at 1/100 the speed (which is not unheard of), it could take 15-20 hours.

Start up Activity Monitor and see if QRecall is working. You should see substantial network activity (lots and lots of small packets). If you do, I'd suggest letting QRecall continue working. If not, let me know and we can look into the situation in more detail. The good news is that this doesn't have to be done again: on the next capture, only the newly added data will be added to the quanta index, which will be much faster.
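For the curious, those estimates are just the local baseline scaled by the assumed slowdown:

```python
# 10 minutes is the local-disk baseline for indexing ~47 GB (from above);
# the slowdown factors for the network+NAS combination are assumptions
local_minutes = 10
for slowdown in (20, 100):
    hours = local_minutes * slowdown / 60
    print(f"at 1/{slowdown} of local speed: ~{hours:.1f} hours of indexing")
```

which prints roughly 3.3 and 16.7 hours, matching the ranges above.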
|
Rodd Zurcher wrote: The only question I have (and I don't think it's beta-related) is how long it takes a capture to "locate free space". I'm backing up two machines to the same archive. At first I thought the locating was fast when the same machine captured to the archive without the other in between (Capture A, Capture A; not Capture A, Capture B, Capture A), but now I think an intervening Merge/Compact has more impact. I'm not sure, though.
It has everything to do with the merge. A merge removes two or more layers and replaces them with a composite of those layers. When it's done, the items it removed may, or may not, have shared common data with other files still in the archive. The "locating free space" phase will occur on the next capture or compact following a merge. What QRecall is doing is something called "mark and sweep" in computer jargon: it first "marks" all of the quanta that are used by at least one file, and then "sweeps" all unmarked quanta into the trash bin.
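In pseudocode terms, a minimal sketch of that pass might look like this (the record and quanta structures are illustrative, not QRecall's actual format):

```python
def locate_free_space(file_records, all_quanta):
    """Return the quanta no longer used by any file in the archive."""
    marked = set()
    for quanta_ids in file_records:    # mark: every quantum some file still uses
        marked.update(quanta_ids)
    return all_quanta - marked         # sweep: unmarked quanta become free space

# After a merge removed the only file that used quantum 3:
print(locate_free_space([{1, 2}, {2, 4}], all_quanta={1, 2, 3, 4}))  # {3}
```

The expensive part in practice is the "mark" step, which has to read every file record in the archive.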
I'm backing up to an archive over AFS, using both Ethernet and WiFi. Of course the WiFi is slower, but it works.
How long this takes depends primarily on the total number of item records in the archive and how fast QRecall can read them. Reading millions of file records over a WiFi connection will take a while. You can mitigate this by merging less often, and only on the machine with the fastest connection: schedule a merge followed by a compact on the one computer with the fastest access to the archive, and remove the merge and compact actions from all other computers using that archive. If the AFS server is a computer (not a Time Capsule or other NAS), install QRecall and schedule the merge and compact on that computer. You do not need an identity key to schedule and run maintenance actions. If you have the disk space to spare, consider merging only once a week, or add a condition to the merge action so that it only runs when the free disk space falls below some reasonable margin.
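That free-space condition amounts to something like this sketch (QRecall expresses it in the action's condition settings; the path and margin here are invented):

```python
import shutil

def merge_needed(volume="/Volumes/ArchiveDisk", margin_gb=50):
    """Run the merge only when the archive volume's free space is low."""
    free_bytes = shutil.disk_usage(volume).free
    return free_bytes < margin_gb * 1024**3
```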
Is there any way you could index the free space instead of "locating" it?
That's exactly what it is doing. It's "locating" the unused data so it can add it to the index of free space.
Also, could you add to the log the amount of time spent creating the scribbles and locating the free space (in the normal logs, not #debug)?
That's a good idea. I'll add a message indicating how long it took to locate the free space; I might even be able to log the amount of free space it found. The timing of scribble files, however, is somewhat meaningless, as this happens in parallel with the capture. It's only an issue if the capture finishes before the initial copy does.
|
Thomas Traeufer wrote: The program, despite not working, also created thousands of small, invisible .PBsyncDB files. I didn't like that, and now my question would be whether QRecall does something similar.
No. QRecall doesn't make any changes to the documents or folders it is capturing and it doesn't add any files as it goes. It will happily capture from a read-only device. Everything captured by QRecall is stored in an archive. An archive package consists of a single data file containing all of the archive information, accompanied by a small number of index files. The only other files it creates are a handful of support files (mostly in ~/Library/Preferences/QRecall and ~/Library/Application Support/QRecall), some log files, and some support components that get installed into the operating system.
|
Thanks for the progress report. I suspect the problem is a file pre-allocation bug that existed in Tiger. The bug was reported, and Apple fixed it in Leopard. However, the AirPort Extreme is probably using a code base that predates Leopard and thus still has the bug. QRecall used a workaround when running under Tiger, a workaround that was recently disabled when running under Leopard. But when the file pre-allocation is being handled by the file server (the AirPort Extreme) rather than the client OS, the problem reappears. In 1.1(9), the solution was to go back to using the Tiger workaround even when running under Leopard. Unless you re-encounter the problem, I'll roll this change into the next release of QRecall.
|
This indicates a mixed installation of old and new components. Start QRecall and uninstall it (hold down the Option and Shift keys and choose QRecall > Quit and Uninstall). Start it again, open QRecall > Preferences..., and reauthorize. If that still doesn't fix the problem, download a fresh copy of QRecall, uninstall the old one (using Quit and Uninstall), replace the old QRecall with the fresh copy, then launch the new one and reauthorize. Let me know if that fixes the problem.
|
Glenn, Joe, I have a theory about what the problem is. Try this version and let me know what the results are: QRecall 1.1(9) beta. Download the disk image and open it. Locate your existing QRecall application and drag it to the Trash. Copy the new version to your Applications folder and launch it.
|
Alexandra Morgan wrote: How long can this take, after a merge action in which no layers were found to merge?
How long it will take to locate the unused space in an archive depends on a host of factors too complex to give any simple answer. The fact that QRecall went looking for free space after a merge action that didn't actually merge anything sounds like a bug that I'll look into; it won't cause any problems, but it's clearly unnecessary. If your network is rather slow and your archive contains a lot of files (the primary factor in how long the free space collection takes), you may want to schedule your merge actions so they happen only occasionally (like once a week). This will minimize the amount of time spent reclaiming free space.
|
Send me your log files (you can find them in ~/Library/Logs/QRecall). Also tell me a little bit more about the remote volume and the archive: how big is the volume, how much free space remains on it, and how large is the archive?
|
Joe, thanks for the log file and crash reports. The problem was caused by a bug that incorrectly handled the situation where a volume does not support Leopard's directory change notifications. Try this version: QRecall 1.1(8) beta. Download the disk image and open it. Locate your existing QRecall application and drag it to the Trash. Copy the new version to your Applications folder and launch it.