Gary K. Griffey wrote:I observed a system log message being repeated literally thousands of times during the capture operation.
Welcome to beta testing. The beta version of QRecall spits out far more console and log messages than the release version, mostly so I can diagnose problems reported by beta testers. In this case, it's the QRecall Spotlight plug-in, which is normally quite laconic. But you make an interesting observation (and one you couldn't have made if I'd left those messages out!). Normally, Spotlight shouldn't reindex the archive until the capture (or whatever) is finished. I'd be surprised if it were repeatedly reindexing the archive while a single capture was in progress, but it's hard to tell from the fragment you've sent; all of that activity could be a single reindex. You'd need to look for multiple occurrences of the message "mdworker[xxxx] _RepositoryNamesImporter /Users/Gary/Downloads/melsLaptop.quanta/filename.index" during the course of a single capture. If that's happening, then it's something I need to look into.
Sheldon Furst wrote: Might I suggest that the program defaults be set somewhere in the mid range for the Archive settings so that new users don't feel as I did?
That's a tough one. It used to be set to low-to-middle by default, but people with some really large archives were getting abysmal performance, so I changed the defaults to off. On the other hand, duplicate-data performance for large archives has improved radically since those days, so maybe the time has come to turn it back on for new archives.
Sheldon Furst wrote:How can I make QRecall "recall" a file back to its original location? When I do a recall it always asks me where I want to recall it to. Most of the time I want to put it back where it came from.
Use the "Restore" command. A Restore is just a recall with the destination set to the item's original location. See Help > QRecall Help > Capture Basics > Restore for the details.
Cody Frisch wrote:I just find it interesting that here is a program that seems to be written by one guy, and its far more capable than applications written by companies with several employees. (Not to mention much less expensive.)
I'm glad to hear that. I would certainly like to have several programmers working on QRecall; I know that I'd like to add features faster.
One thing I was thinking could be useful, is compression rules on an archive. Exclude certain file types (mp3, mov, mp4) from being compressed at all since they aren't that efficient to compress again. Trade off by focusing the compression on those things which can really benefit.
I seriously considered this when developing the compression feature. I know that Retrospect (and others) have compression filters. However, I ultimately rejected the idea for two reasons: (a) I think it's too complicated, and (b) QRecall doesn't work the way other backup programs do. Hopefully (a) is obvious, but (b) requires a little explanation.

Instead of turning compression on or off on a file-by-file basis, QRecall measures the cost/benefit of compression on a block-by-block basis. When the compression level is set low, QRecall uses a fast-but-not-necessarily-efficient compression algorithm on every block of data, then examines the results. If the compression really didn't save much, it throws the compressed version away and stores the uncompressed copy of the data record. The theory is that the little bit of space saved doesn't make up for the amount of CPU required to decompress it later. In this way, QRecall is self-throttling, automatically compressing data that is highly compressible while leaving alone data that isn't. This is smarter than file-by-file filters because it can even decide that some parts of a file are compressible while others aren't. At higher compression settings, the compression algorithm used is slower and more CPU intensive, and the threshold for accepting compressed data is lowered.

One of the big differences between QRecall and other file-based backup software is that the data in files is compared to the data already in the archive. Using Retrospect (or Time Machine, or whatever), if you edit the metadata of an MP3 file the software will copy the entire file again; if compression is enabled, it will compress the entire file again. QRecall, on the other hand, reads the MP3 file and compares it to what's already in the archive. Most of the file hasn't changed, so almost no new data is compressed.
If the duplicate data in the file wasn't compressible, QRecall has already determined that (when it captured the data the first time), so no compression or decompression is performed. If the data was compressible, it only needs to be decompressed (much faster than compressing) in order to compare it with the source data before deciding not to add it to the archive. So in the end, I feel that QRecall's compression feature is easier to configure, smarter, and ultimately more effective than hand-tweaked per-file compression filters.
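To make the block-by-block cost/benefit idea concrete, here is a minimal sketch in Python. This is purely illustrative (it is not QRecall's code, and the 10% threshold is an assumption): compress each block with a fast, low-effort pass, and keep the compressed form only if it saves enough to justify decompressing it later.

```python
import os
import zlib

def store_block(block: bytes, min_savings: float = 0.10) -> tuple[str, bytes]:
    """Compress a block; keep the result only if it saves at least
    min_savings of the original size, otherwise store it raw."""
    compressed = zlib.compress(block, 1)  # fast, low-effort pass
    if len(compressed) <= len(block) * (1.0 - min_savings):
        return ("z", compressed)
    # Savings too small: not worth the CPU cost to decompress later.
    return ("raw", block)

print(store_block(b"la la la " * 500)[0])   # repetitive text compresses well
print(store_block(os.urandom(4096))[0])     # random data does not
```

Because the decision is made per block, some blocks of a file can end up compressed while others in the same file are stored raw, which is the behavior described above.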
Cody Frisch wrote:This is what I have wanted my backup software to always do.
Awesome.
I'd definitely love to see more filter capabilities, especially finder labels.
So would a number of other people; it's a common feature request. I have advanced filtering features tentatively scheduled for the 1.3 release (currently working on 1.2). A bit off in the future, but I'm sure I'll get there.
Gary K. Griffey wrote:Yesterday, after adding the 4th layer...I ran a QRecall Verify...as I always do...and the verify failed...
That's correct. The verify detected corrupted data in the archive and/or on your hard disk.
I tried to Repair the archive...but this also failed.
Actually, the repair was successful. QRecall did log warnings about the problems it found and the items that were affected, but the repair finished successfully, as was confirmed by the verify action you performed afterwards. Note that many (many!) archive corruption errors are the side effect of a corrupted volume structure. I encourage everyone to use Disk Utility to repair the volume containing the archive before repairing the archive itself. If your volume has cross-linked file allocations (for example), repairing the archive will just set it up for future failure.
In looking at the Repair log entries...it appears that the rather large (about 30 GB) virtual machine disk file caused the error.
Also correct. If you did not select the option to recover damaged files, then the damaged version of that file has been deleted from your archive.
I know from other threads that you have written...that a virtual machine must be suspended or shutdown to make a valid backup of it.
That's very true. There is no backup system that can correctly copy a file that is being actively modified.
In this case, however, the laptop that contained the virtual machine was operating in target disk mode...and was mounted to my second laptop where QRecall is installed...thus...not only was the virtual machine shutdown...but OSX was quiesced on the source drive as well.
The problem wasn't that the file was being modified, but that QRecall detected that data previously stored in the archive failed its validity check(s). This can happen for a score of different reasons (data corrupted during transfer to the drive, random data loss on the drive, intermittent RAM errors, ...), but it has nothing to do with the source file or what condition it was in.

I applaud the rigor of your backup methodology, but I personally think it's a little overkill. While it's true that you can't make a "perfect" copy of your boot volume while OS X is running, QRecall works very hard to successfully perform live captures and recalls from/to your startup volume. A live capture is trickier, and you definitely want to quit as many applications as possible (certainly any VMs and disk images you might be writing to), but it's not absolutely necessary to shut down the entire OS to make a decent backup. QRecall can capture while you're logged out, and you can schedule captures to run only while you're logged out (or to hold while you're logged in). Just food for thought. I only mention this because I firmly believe that the backup strategy that works best is the one that gets used, and the one that gets used is usually the one that runs automatically, independent of the user. I'd be much happier with an imperfect backup that occurs every day than a perfect one that I get twice a week, assuming I remembered to do it.

You might also consider a two-tiered backup strategy: make regular (even hourly) captures of your working documents, excluding things like your movie library and virtual machine images, and then continue with your complete backup strategy on a weekly or bi-weekly basis.
Now, I did run a Compact operation on the archive during the week with the 3 existing layers...and I also changed both the Compression and Shifted Quanta detection on the archive...but I never had issues doing this before to an existing archive.
That shouldn't have had any bearing on the problem you described.
Adrian Chapman wrote:... there is an unknown damaged layer that I don't seem to be able to remove. although it doesn't seem to be doing any harm I'd like to be rid of it if possible. Any ideas?
A "-Damaged-" layer means that one or more of the items originally captured in that layer has been lost or damaged. To determine exactly what was damaged, you'll need to refer to the log or examine the layer. The repair command logs everything it finds wrong with the archive and what it did about it. More than likely, your archive contained an invalid block of data (disturbingly common when capturing via WiFi) belonging to a file. The repair will mark that file as "damaged" or delete it altogether, depending on the amount of data lost. As an alternative to the log, shade (hide) every layer except the "-Damaged-" one, and then explore its contents. The items that were damaged will also be marked as "damaged" or "incomplete."

It might be possible to get rid of your damaged layer by merging it with subsequent layers. If the damaged item, or items, was successfully captured in a later layer, merging with that layer will replace the damaged (or missing) item with the successfully captured one, and the resulting merged layer will be complete again. On the other hand, if the later layer did not recapture that item (because it hadn't changed), the merged layer will also be marked as damaged. So it's possible to expunge damaged layers from your archive by merging, but you'll also lose all of the other intermediate changes that occurred between those layers.

My advice is to simply be aware of the problem and be wary of damaged items when recalling from that layer. The damaged layer should eventually be erased through the normally scheduled merge actions. Also note that a file or folder marked as "damaged" in the repository forces QRecall to recapture that item during the next capture. Thus, a capture following a repair will recapture any items that were lost from the archive.
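The merge behavior described above can be modeled with a tiny sketch. This is an illustrative model only, not QRecall's internals: each layer is treated as a map from item to captured state, a merge keeps the newest capture of each item, and the merged layer is damaged only if a damaged capture survives the merge.

```python
# Illustrative model of merging archive layers (not QRecall internals).
# Each layer maps an item path to its captured state: "ok" or "damaged".

def merge_layers(older: dict, newer: dict) -> dict:
    """Merge two layers; captures in the newer layer replace older ones."""
    merged = dict(older)
    merged.update(newer)
    return merged

def is_damaged(layer: dict) -> bool:
    """A layer is damaged if any surviving item is damaged."""
    return any(state == "damaged" for state in layer.values())

damaged_layer = {"vm.img": "damaged", "notes.txt": "ok"}
recaptured = {"vm.img": "ok"}    # later layer recaptured the damaged file
unchanged = {"notes.txt": "ok"}  # later layer never touched vm.img

print(is_damaged(merge_layers(damaged_layer, recaptured)))  # healed: False
print(is_damaged(merge_layers(damaged_layer, unchanged)))   # still damaged: True
```

This mirrors the two cases in the post: merging with a layer that recaptured the item heals the result, while merging with a layer that didn't leaves the merged layer damaged.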
Adrian Chapman wrote:To keep the backup processes fairly slick on my Mac Pro, I perform a full backup of the boot volume once a day and 2 hourly backups of my user directory, but I have excluded three large folders from my user directory which only change infrequently, namely Music, Pictures and TV Programmes. These last three folders are backed up just once per day to their own archives. The effect of this is that my full volume archive is now more manageable and so connecting to it is relatively quick, important for the 2 hourly backups, it was taking about 5 minutes to connect when everything was in it.
Adrian, Your approach is sound. I do have one question: Are you running the release version (1.1.4) or the current beta? I ask because the beta has significant performance improvements, particularly in the amount of time it takes to add relatively small captures to an archive. If you're not using the beta, you might give it a try and see if you can include your music and video files with acceptable performance. (Remember that you can always delete those folders from your archive if you want to go back to your two-archive arrangement.) Regardless, what you have set up seems perfect. I might also suggest scheduling a verify action (again, on the Mac mini) that runs about once a week. It's good to periodically check the integrity of the entire archive.
Gary K. Griffey wrote:Possibly, in a future release...QRecall could provide the ability to override the use of the file system events via preferences...
I will add that to the wish list.
Gary K. Griffey wrote:This does, however, raise some rather disturbing questions in my mind...most notably...is this happening with other QRecall captures that I rely upon...and am I simply not aware of it?
It's entirely possible. If you are regularly moving source volumes from one system to another, then any software that relies on file system events to detect changes on those volumes should be treated with suspicion. This issue will also affect users who maintain multiple instances of the operating system (often for testing) and regularly reboot their systems from alternate volumes. These are, however, uncommon scenarios, so it isn't a problem for most users. Most of the volumes and devices used to store documents (the kind of volumes you would capture to an archive with QRecall) do not get passed around from one system to another. For volumes that are occasionally shuffled between systems, the 7-day setting of QRAuditFileSystemHistoryDays ensures that they will get captured correctly sometime in the next few days. Volumes that are regularly shared between systems are more likely to be used to back up to (rather than from), and the volume containing the archive is immune to this issue. The best I can recommend is to be aware of the limitations of OS X's file system events log; if it's a problem for QRecall, use the QRAuditFileSystemHistoryDays setting to limit its impact or ignore the log altogether.
(This should probably be a new topic, but JForum won't let me split an existing thread....)
Gary K. Griffey wrote:This morning...I performed the exact same procedure...I restarted the laptop in target disk mode...attached it to my second laptop using a FireWire cable...and attempted a QRecall recapture of the entire drive...the recapture finished in about 10 seconds...
I suspect that you've been bitten by the file system events (a.k.a. FSEvents) service. I'll quote from the Advanced QRecall Settings page:
Leopard's folder change detection is not foolproof. There are a number of obscure situations where the file system will not accurately report the changes on a volume.
One situation where it isn't foolproof is (drumroll) moving a drive between different systems, especially if those systems aren't running the exact same version of the OS. What happens is that the volume's file system change log gets reset; when QRecall queries the volume for changes, it comes back with nothing (or very little), and QRecall skips capturing items that have, in fact, changed since they were last captured. You can verify this by looking in your QRecall log: open the QRecall log window and slide the details control all the way to the right. In the log messages for the capture you'll find something like this:
Locating changes since Tuesday, August 3, 2010 7:30 PM
Collected 117,752 folder changes

If the number of changed folders was zero or very small, then the operating system has lost the history of changes for that volume. QRecall knows that the file system change log isn't reliable and will periodically ignore it. Unfortunately, the default period is about a week:

To guard against this, QRecall only trusts the operating system for a limited amount of time. After that (approximately 7 days) the capture will ignore the system and perform a deep, exhaustive scan of the entire directory structure looking for changes. Once the deep scan is complete, QRecall will again trust the operating system's change detection for another 7 days.

You can work around this problem by reducing the amount of time QRecall trusts a volume, or by disabling the feature altogether, using the QRAuditFileSystemHistoryDays advanced setting. Put your MBPro into target disk mode, plug it into your other system, open a Terminal window, and issue the command:

defaults write com.qrecall.client QRAuditFileSystemHistoryDays -float 0.0

This will completely disable QRecall's use of the file system change events log. It will, instead, check every file and folder for changes on every capture. Note that this can significantly increase the amount of work QRecall does on each capture. Capture your MBPro; QRecall should find and capture all changes on the volume. When you're done, restore the default by deleting your custom setting. Again in Terminal, issue the command:

defaults delete com.qrecall.client QRAuditFileSystemHistoryDays

In the future, you have the option of repeating these steps each time you capture your MBPro, or you could leave QRecall's "trust" period set to something much smaller than its default value of 6.9 days. For example, setting it to 0.9 would mean that a second capture more than 22 hours after the previous one would trigger QRecall to perform an exhaustive scan for changes.
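The trust-window policy described above can be summarized in a few lines. This is a hypothetical sketch, not QRecall's code; the 6.9-day default and the setting name come from the post, while the function and its behavior for a zero value are my illustration of "disabling the feature altogether".

```python
# Hypothetical sketch of the FSEvents trust-window policy described above.
def should_deep_scan(days_since_deep_scan: float,
                     trust_days: float = 6.9) -> bool:
    """Trust the file system's change log only while the last exhaustive
    scan is recent enough; otherwise fall back to a deep scan."""
    if trust_days <= 0.0:
        return True  # zero disables trust entirely: always deep scan
    return days_since_deep_scan >= trust_days

print(should_deep_scan(2.0))        # within the trust window: use FSEvents
print(should_deep_scan(8.5))        # trust expired: exhaustive scan
print(should_deep_scan(1.0, 0.0))   # feature disabled: always scan
```

With `trust_days` set to 0.9, a second capture more than about 22 hours after the last deep scan would trigger another exhaustive scan, matching the example in the post.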
Gary, Thanks for the feedback. It's always good to see real-world numbers—especially when those numbers show improvement. 
Wayne Fruhwald wrote:After updating my production copy of Qrecall to the latest beta (1.2.0 Beta 6) one of my archives got corrupted. I'm not sure if it is directly related or just a coincidence.
Your guess is as good as mine.
During the required "repair" operation I have received multiple "Transient data read error; re-reading the same data was successful" errors.
When QRecall reads a block of data and finds it to be corrupted, it requests a second (unbuffered) read from the drive into a different RAM location. If the second read returns uncorrupted data, it logs the message you are seeing and continues. This message means that the data on the drive is, more than likely, OK and that the data is being corrupted somewhere "in the pipe." The data could have been mis-read from the media, scrambled during transfer through the interface (USB/FireWire/eSATA), or could have been dinged by bad RAM. If this message persists, I would suggest a thorough RAM diagnostic; this wouldn't be the first time that bad data was caused by failing RAM. I would also try to test the drive using a different interface (e.g. switch from FireWire to USB), if that's an option, and see if the errors go away. It's also instructive to notice whether the file location associated with the error changes. If the problem persists at the same location, that points to a media problem; if it changes, that indicates transient corruption elsewhere.
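The re-read pattern described above looks roughly like this sketch. It is illustrative only (not QRecall's implementation): a simulated read queue stands in for a flaky bus, a CRC stands in for whatever validity check the archive uses, and a failure on both reads is treated as a probable media problem.

```python
# Illustrative sketch of "re-read on corruption" (not QRecall's code).
import zlib

def read_block(source: bytes, expected_crc: int, reads: list) -> bytes:
    """reads is a queue of simulated raw reads (models a flaky bus).
    Verify each read against the expected CRC; retry once on failure."""
    for attempt in (1, 2):
        data = reads.pop(0) if reads else source
        if zlib.crc32(data) == expected_crc:
            if attempt == 2:
                print("Transient data read error; re-read succeeded")
            return data
    raise IOError("data corrupted on both reads: likely a media problem")

good = b"archive record"
crc = zlib.crc32(good)
# First read comes back corrupted, second read is clean:
print(read_block(good, crc, [b"archive recorD", good]) == good)
```

If the second read also fails the check, the error is persistent rather than transient, which is the case that points at the media itself.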
Do you automatically re-write the data to a new location on disk to prevent future transient data read errors? If not, is there an option to force you to as those areas on disk are not most likely to go bad so I would like to have the data relocated to a safer place.
Sector sparing is now the purview of the drive controller. It happens automatically in the hardware, and it's no longer possible to do from software (which is a good thing, if you ask me). It should be noted that the error you are seeing occurs while reading, not writing, the data. The fact that the data can be re-read means that it's probably stored correctly on the magnetic media and that the problem is in the transportation of the data, not its storage. Detecting bad sectors on write, sparing those sectors, and writing the data to a more reliable region of the drive is the job of the hard disk controller. If it's not doing that, then you need a new drive.
Iain Farquhar wrote:Recalled my desktop (happens to be quite a large amount of data as I've got an email archive there) Dismayed to find that all the data is copied back to my drive in a location as follows. MBP HD:private:tmp:InstantRecall.501:QRecall-30160298357  esktop
That's QRecall's "instant recall" feature. It reassembles a file from the archive into the system's temporary directory (/private/tmp) and opens it, allowing you quick access to items in your archive. It isn't intended as a way of browsing a large number of items.
I've quit QRecall but the file stays there.
Restart your computer or wait 3 days. The system periodically deletes items in the temporary folder that haven't been accessed in a while, and clears it whenever the system restarts.
I would like to look at the backups without copying them back to my drive, does QRecall do this?
That's what the archive window is for.
I would really like a more sophisticated way to navigate my backup without copying all the files back to my boot drive.
A completely rewritten archive browser is currently under development and will be debuted shortly as a beta. If you'd like to try out the new browser and provide feedback and suggestions during development, I encourage you to download and install the beta at www.qrecall.com/download.
Christian Roth wrote:is there a way to (globally, at least on one machine) put any scheduled operations on-hold?
Not at this time, but it's on the to-do list for 1.2.