Gary K. Griffey wrote:You mentioned that the v19 beta release had some debugging code removed...were there other changes as well?
Nothing significant, beyond the changes to how concurrent operations are handled. I would not expect that to significantly affect capture performance.
The performance I am seeing, at least thus far, seems to be about what I was seeing in the latest production release. When I started using the beta I observed huge capture performance gains...this appears to be reduced in V19.
That's unexpected. It could be QRecall or it could be OS X 10.6.5. One thing I've noticed with 10.6.5 is processes spawning an inordinate number of threads. Even after the "fix" in b18 to address this, processes still run with considerably more threads than they did in 10.6.4. Yet while I find this perplexing, I still wouldn't expect additional threads to seriously degrade capture performance. I'll investigate this here. One thing you can do is gather some samples. You, and anyone else reading this thread, are welcome to send a diagnostic report after running b19 for a week or so. Say something like "b19 performance" in the comments. I can then write a program to extract the performance statistics from the logs for the past couple of months and see if there's a correlation between the upgrade and a drop in capture speeds. I'd be particularly interested in getting a report from anyone who's upgraded to QRecall 1.2.0b19 but is still running OS X 10.6.4.
|
 |
|
Glenn, Thanks for the post, and for the sample. It looks like QRecall is spending all of its time reading, formatting, and sorting the items in the log window. If you close the log window, does your CPU usage go back to normal? The log window running one CPU at 100% (and also making the GUI sluggish) is a known issue, one that's only made worse by the beta because it records a lot more detail in the log. (Even the stuff you can't see in the log window still has to be read, decoded, and sorted.)
|
 |
|
James G. wrote:OK, thank you for looking into it.
No problem. Please let me know what the outcome is, and you can put the Macfusion developers in touch with me if they have any questions.
|
 |
|
James, Your log file indicates that the file exchange request failed with an I/O error (-36).
2010-11-07 16:47:07.494 -0800 Failure Failed
2010-11-07 16:47:07.494 -0800 Details cannot swap files
2010-11-07 16:47:07.494 -0800 #debug# IO exception
2010-11-07 16:47:07.494 -0800 Details Path: /Volumes/x/Backup.quanta/layer_scribble.index
2010-11-07 16:47:07.494 -0800 Details Other: /Volumes/x/Backup.quanta/layer.index
2010-11-07 16:47:07.495 -0800 #debug# API: FSExchangeObjects
2010-11-07 16:47:07.495 -0800 #debug# OSErr: -36
Since there's actually not much I/O involved in exchanging files, and it happens every time QRecall attempts to exchange files on that volume, I'm going to assume that it's a bug in the filesystem implementation. I'll add it to the list of anomalies to investigate, and I might even be able to develop a workaround, but I would suggest that you begin by contacting the developers of Macfusion and passing along this information.
|
 |
|
James G. wrote:I'm getting the following error after a capture to an archive residing on an sshfs mount from Macfusion: Problem closing archive: Cannot swap files
Wild guess: QRecall occasionally uses a filesystem call that exchanges two files. It's possible that your Macfusion sshfs mount either reports that it supports this feature when it doesn't, or implements it incorrectly. Send a diagnostic report (Help > Send Report) and I'll look into the specifics of the failure.
|
 |
|
Gary K. Griffey wrote:One other clarification...is there any way that hidden files/folders can be viewed in an archive? When you open an archive...these folders/files are obviously not visible.
Play with the menu commands View > Show Invisible Items and View > Show Package Contents.
|
 |
|
Gary K. Griffey wrote:In the past, you have indicated that simply re-capturing the file/folder flagged as corrupted should basically return the archive to a healthy condition.
Specifically, recapturing items ensures that you have a recent copy of all of your current files.
The issue here is...how can one direct QRecall to overtly re-capture a file residing in an OSX hidden folder or file?
You don't have to. During the next capture, QRecall automatically detects when the most recently captured copy of a file (regardless of which layer it resides in) has become damaged, and will seek it out and recapture it whether it has changed or not. (Note that this logic has been improved recently, and I consider it foolproof only in the current beta.)
In the normal archive Capture dialog...hidden folders/files are not enumerated....even if you change Finder preferences to reveal hidden files.
If you want to do this by hand, here's the trick (and it works in almost any OS X application). The OS X open file dialog has a quasi-hidden navigation feature that will let you open any hidden folder; you just have to know its BSD path. Choose the Capture command. In the open dialog, press Shift+Command+G (the same as the Finder's Go To Folder... command). A dialog sheet will appear where you can enter the BSD path of the folder you want to see. In your case, type in "/private", click Go, and then select the 'var' folder to capture. This navigation shortcut also supports path completion (using the Tab key), for those used to this feature in the Terminal.
|
 |
|
Damian Huxtable wrote:Can you recommend a good file splitter for Snow Leopard?
I haven't used the latest version, but I was always impressed with StuffIt Deluxe's segmented archive feature. Check out StuffIt Deluxe 2011 for a "Mac" solution.
Or a better workaround?
The ever awkward 'split' command-line tool still works its magic. If you don't expect to read the split archive regularly (if ever), you don't need to make a copy of the entire archive, just the 'repository.data' file inside the archive package. So something like
split -b 49m /Path/to/archive/MyArchive.quanta/repository.data /Volumes/OffSiteVolume/OffsiteArchive.data. This command will write the important portion of MyArchive to a series of 49MB files starting with OffsiteArchive.data.aa, followed by .ab, .ac, and so on. To recover the offsite copy, reassemble the OffsiteArchive.data.* files back into a single repository.data file (cat .../OffsiteArchive.data.* > .../MyArchive.quanta/repository.data) inside any .quanta directory. Then launch QRecall and tell it to reindex the archive. QRecall will reconstruct all of the auxiliary index files from the master repository.data file.
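Putting the two halves together, a minimal sketch of the round trip might look like this (the paths and the 49MB segment size are just the examples from above; adjust them for your own archive and offsite volume):

# write the archive's repository.data out as a series of 49MB segments
split -b 49m /Path/to/archive/MyArchive.quanta/repository.data /Volumes/OffSiteVolume/OffsiteArchive.data.
# later, reassemble the segments into a repository.data file inside a .quanta package
cat /Volumes/OffSiteVolume/OffsiteArchive.data.* > /Path/to/restored/MyArchive.quanta/repository.data

After reassembling, open the restored archive in QRecall and reindex it so the auxiliary index files get rebuilt.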
|
 |
|
jay gamel wrote:Not a single one in a single place indicated. Why, then does the problem persist?
Well, there must be one somewhere, or at least there was one somewhere. First, let me remind you to restart.

The message that you're getting is from launchd, the primary launch services daemon that controls the running and maintenance of most background processes in Mac OS X. launchd is configured entirely by the presence of .plist documents installed in /Library/LaunchDaemons, /Library/LaunchAgents, and/or ~/Library/LaunchAgents (at least for user-installable processes; there are more of these folders in /System).

Removing the .plist for a service isn't sufficient to stop it. After removing the .plist document, launchd either needs to be restarted (which basically means you need to restart your OS), or you can use a command like this in the Terminal:

launchctl stop com.qrecall.monitor

If neither the stop command nor restarting your computer solves the problem, then you haven't eliminated all of the QRecall-related .plist documents. launchd runs on these configuration documents, and it doesn't start services that don't have a .plist configuration document somewhere. You'll probably find the errant file in the ~/Library/LaunchAgents folder of some other user. If you find any in another user account, trash them and restart.
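If you're not sure where a leftover .plist is hiding, a quick way to check all of the folders mentioned above is something like this (the monitor label is the one from the stop command; the second line looks in the other user accounts' home folders):

# list any QRecall launchd configuration files for this user and the system
ls -l /Library/LaunchDaemons/com.qrecall* /Library/LaunchAgents/com.qrecall* ~/Library/LaunchAgents/com.qrecall* 2>/dev/null
# check the LaunchAgents folders of the other user accounts too
ls -l /Users/*/Library/LaunchAgents/com.qrecall* 2>/dev/null
# stop the monitor without restarting
launchctl stop com.qrecall.monitor

Anything those commands turn up is a candidate for the trash, followed by a restart.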
|
 |
|
Jay, It would appear that QRecall was installed at some point, but never completely uninstalled. You can manually uninstall QRecall. The steps for doing so are in the QRecall help, but since you probably don't have the QRecall application handy, here they are again.

To Manually Uninstall QRecall:
1) Stop all running actions and quit the QRecall application.
2) Delete the QRecallMonitor Login Item from your account preferences (Mac OS X 10.4 only).
3) Delete any files beginning with com.qrecall from the /Library/LaunchDaemons, /Library/LaunchAgents, and/or ~/Library/LaunchAgents folders.
4) Restart your computer.

This should eliminate the problem that you're encountering. You can continue, if you want to be thorough:
5) Delete the /Library/Application Support/QRecall and/or the ~/Library/Application Support/QRecall folders.
6) Delete all files in ~/Library/Preferences that have names beginning with com.qrecall.
7) Delete the ~/Library/Preferences/QRecall folder.
8) Delete the ~/Library/Contextual Menu Items/QRecall CM plugin item.
9) Delete the QRecall application.
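For those comfortable in the Terminal, steps 3 and 5 through 8 translate roughly to the following sketch. Double-check each path before deleting anything, since the exact file names on your system may differ:

# step 3: remove the launchd configuration files
sudo rm /Library/LaunchDaemons/com.qrecall* /Library/LaunchAgents/com.qrecall*
rm ~/Library/LaunchAgents/com.qrecall*
# steps 5-7: remove support folders and preferences
sudo rm -r "/Library/Application Support/QRecall"
rm -r ~/Library/Application\ Support/QRecall ~/Library/Preferences/QRecall
rm ~/Library/Preferences/com.qrecall*
# step 8: remove the contextual menu plug-in
rm -r ~/Library/Contextual\ Menu\ Items/QRecall\ CM\ plugin

Then restart (step 4) and drag the QRecall application to the trash (step 9).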
|
 |
|
Damian Huxtable wrote:Is it possible to combine two archives?
Not at this time.
|
 |
|
Prion wrote:but I assume this is harmless. Correct?
Probably. The details of that error message will tell you the name of the extended attribute that couldn't be read and the BSD error code reported by the OS explaining why. If you want, send a diagnostic report (Help > Send Report) and I'll take a look at it.

Extended attributes are usually small, non-essential bits of extra data attached to a file or folder. They may be important, but often are not. The reason you can't read one could be a restriction (via an access control list) on reading that attribute, the attribute could be malformed (I'm not sure how that would happen), it could have been a race condition (the attribute was deleted before QRecall had a chance to read it), or the directory structure of the volume could be damaged.

If it continues to happen on the same file, I'd start by repairing the volume using Disk Utility or something like DiskWarrior. If that doesn't fix it, you might consider deleting that file or stripping it of its extended attributes (cp -X). If you're interested in investigating the problem, you could try using the xattr command-line tool to list the extended attributes of the file (xattr -l file) and see if you get a similar problem reading them. If you do, try to delete the offending one (xattr -d attr-name file). There's no man page for xattr; use xattr -h to get its meager help info.

If it doesn't happen again, then it might simply have been some other process changing the attributes at the moment QRecall was trying to capture them. That can happen, and there's no way to protect against it other than to take another capture.
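If you want to poke at it yourself, the sequence described above looks something like this in the Terminal. The file path and attribute name are placeholders; substitute the ones from the error message:

# list the extended attributes and their values
xattr -l /path/to/problem/file
# delete a single attribute by name
xattr -d com.example.troublesome-attribute /path/to/problem/file
# or make a copy of the file without any of its extended attributes
cp -X /path/to/problem/file /path/to/clean-copy

If xattr -l fails with the same error QRecall reported, that points at the file or the volume rather than at QRecall.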
|
 |
|
Richard Kontra wrote:I am attempting to capture my home folder using the Capture Assistant. I have noticed that if Capture Assistant is not the active application, disk activity (I think it is scanning my home folder for files) ceases. I was wondering if this behavior is by design or is there some setting that will allow background processing?
Oddly, it is by design. I say oddly, because I went and looked at the code and (sure enough) it deliberately stops scanning when the assistant window isn't active. Which is strange, because I can't think of any good reason why, although I clearly had one when I wrote it...
My home folder contains a lot of small files, so the scanning operation is taking a long time (currently 45 minutes for 16.3 Gb and still counting...). I assume this is normal behavior.
I would guess not. I have some folders that I use for testing which contain over a million tiny files, and the assistant can sum them up in less than 10 minutes. So either the scanner is stuck or you have a frightening number of files. If you really do have a few million files, you're just going to have to wait. If not, I'd quit QRecall, relaunch it, and try again, this time leaving the assistant window active. Looking over the code, I realized just how dated it was (written circa OS X 10.4), so I took some time to rewrite the scanner using modern APIs and thread management. The new code is significantly more efficient and won't stop if you switch away from the assistant window. Sadly, you'll have to put up with the way it works now until the new code migrates into the next beta release. James
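If you want a rough idea of whether you're in "frightening number of files" territory, a quick count from the Terminal will tell you (this simply counts every item under your home folder and may itself take a few minutes):

# count every file and folder in your home directory
find ~ | wc -l

A count in the hundreds of thousands should scan fairly quickly; several million would explain the long scan time.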
|
 |
|
Jochen, Thanks for the bug report. It's a problem with the new scheduler code that tries to determine if two archives share the same physical device (i.e. a single hard drive or a RAID), instead of simply testing whether two archives are stored on the same logical volume. The scheduler didn't check for the possibility of a logical volume that doesn't have a BSD/IOKit device name associated with it. I don't know under what circumstances that would be true, but it obviously is for you. And it was great timing, too! I was just hours away from releasing a new beta. I've made the fix to the scheduler and rolled it into the new beta. Expect to see an automatic update available later today. For future reference, you can also send a diagnostic report (Help > Send Report...). The report will include any recent QRecall crash logs along with other OS and hardware configuration information that can make debugging these kinds of problems simpler.
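If you're curious which of your mounted volumes is the one without a BSD device name, diskutil will show you (the volume name here is just an example):

# look for the Device Node / Device Identifier entries in the output
diskutil info /Volumes/SomeVolume

A volume that doesn't list a device node there is the case the old scheduler code tripped over.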
|
 |
|
Chris, The solution will depend on what your goals are.

If you just want to create a second, coarser-grained duplicate of your data, the most efficient (and convenient) solution is to create a second capture strategy. Let's say you capture "My HD" to the "Upstairs" archive every few hours. You could schedule a second capture of "My HD" to the "Downstairs" archive on the Time Capsule to run once a week. Schedule it to merge, compact, and verify about once a month. This would preserve all of your critical data daily, and give you a weekly backup should your primary backup system fail.

If you're trying to maintain a complete backup of your regular backup, then simply copying the backup on a regular basis is probably the simplest and most efficient solution. There are a number of file copy/sync/cloning utilities that will schedule a copy to occur on a regular basis (UNIX geeks can do it with cron and cp). You'll want to avoid starting a copy while a QRecall action is in progress so that you don't make a copy of an archive that's in flux. (It's not a disaster if it happens; you'd still preserve most of the data, but you'd probably have to repair the archive before you could use it.)

You could use QRecall to capture your primary archive to another archive, but it wouldn't be as fast as a straight copy. For one thing, it would mean that you'd have to recall the entire archive from the secondary archive before you could do anything with it. But the other problem is efficiency. That method, and utilities like rsync, encounter the same overhead: both the source and destination archives have to be read in their entirety, and then any detected changes are written to the new archive. A straight copy reads the source file once and writes the destination file once, which is actually quicker.

Rsync does work well between two computer systems, where an instance of the rsync program can be started on the remote computer. This is how I keep my off-site backups. I have a couple of modest archives, one on my development system and one on my server (located in a data center across town), where I regularly capture my most critical files, like the source code to QRecall. Once a day I rsync all of them. The archive files are read locally by their respective rsync processes (which is quite fast) and any differences (usually fairly small) are transferred over the Internet. Keeping 100GB of data synced between the two usually takes about an hour each morning. By comparison, it would take almost an entire day to transfer that data through my cable-modem connection. But this only works because rsync can run on both machines; if your destination is a Time Capsule, rsync falls back to running a single process on your local computer and reading all of the remote (Time Capsule) data over the network, which is no faster than a copy.

Another good solution is to create rotating backups. Set up two identical capture strategies to different archives; let's call them VaultA and VaultB. Create VaultA on one removable drive and VaultB on a second removable drive. On all of the actions, set the "Ignore if no archive" condition. Now, plug the drive with VaultA into your computer and let it capture your files on a regular basis for a week or a month. Unplug the drive and take it to an off-site location, like a safety deposit box, then return home and plug in the drive with VaultB. Repeat the process each week or month.
By retrieving the second drive, you'll have access to all of your captured data going back for years, and if a meteor strikes your home or bank one night (cross your fingers that it's the latter), you'll still have a fairly recent backup of everything. About mid-way down the list of "things I'd like to add to QRecall" are fall-over archives: basically, the ability to schedule an action that would transfer just what's changed in one archive directly to another archive (or in some other off-site/Internet-friendly format). I think that's exactly what you want, but that's going to take some engineering.
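For anyone who wants to try the rsync approach described above, here's a minimal sketch; the archive path, user name, and host are made-up examples, and it assumes rsync is available on both machines:

# read both copies locally and send only the differences over the network
rsync -a --partial /Archives/Projects.quanta/ user@server.example.com:/Backups/Projects.quanta/

Schedule it (cron works fine) for a time when no capture, merge, or compact action is writing to the archive, so you don't sync an archive that's in flux.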
|
 |
|