Gary K. Griffey wrote:In the past, you have indicated that simply re-capturing the file/folder flagged as corrupted should basically return the archive to a healthy condition.
Specifically, recapturing items ensures that you have a recent copy of all of your current files.
The issue here is...how can one direct QRecall to overtly re-capture a file residing in an OSX hidden folder or file?
You don't have to. During the next capture, QRecall automatically detects the situation where the most recently captured copy of a file (regardless of which layer it resides in) has become damaged, and will seek it out and recapture it, whether it has changed or not. (Note that this logic has been improved recently, and I consider it foolproof only in the current beta.)
In the normal archive Capture dialog...hidden folders/files are not enumerated....even if you change Finder preferences to reveal hidden files.
If you want to do this by hand, here's the trick (and it works in almost any OS X application). The OS X open file dialog has a quasi-hidden navigation feature that will let you open any hidden folder; you just have to know its BSD path. Choose the Capture command. In the open dialog, press Shift+Command+G (same as the Finder's Go To Folder... command). A dialog sheet will appear where you can enter the BSD path to the folder you want to see. In your case, type in "/private", click Go, and then select the 'var' folder to capture. This navigation shortcut also responds to path completion (using the Tab key), for those used to using this feature in the Terminal.
---
Damian Huxtable wrote:Can you recommend a good file splitter for Snow Leopard?
I haven't used the latest version, but I was always impressed with StuffIt Deluxe's segmented archive feature. Check out StuffIt Deluxe 2011 for a Mac solution.
Or a better workaround?
The ever-awkward 'split' command-line tool still works its magic. If you don't expect to read the split archive regularly (if ever), you don't need to copy the entire archive, just the 'repository.data' file inside the archive package. So something like:

split -b 49m /Path/to/archive/MyArchive.quanta/repository.data /Volumes/OffSiteVolume/OffsiteArchive.data.

This command writes the important portion of MyArchive to a series of 49 MB files starting with OffsiteArchive.data.aa, followed by .ab, .ac, and so on. To recover the offsite copy, reassemble the OffsiteArchive.data.XX files back into a single repository.data file (cat .../OffsiteArchive.data.* > .../MyArchive.quanta/repository.data) inside any .quanta directory. Then launch QRecall and tell it to reindex the archive. QRecall will reconstruct all of the auxiliary index files from the master repository.data file.
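The whole round trip can be sketched like this. The paths and sizes here are stand-ins for the demo; a 1 MB scratch file takes the place of your real repository.data:

```shell
# Scratch location standing in for the archive and offsite volume.
SRC=/tmp/split-demo
mkdir -p "$SRC"

# Stand-in for repository.data: 1 MB of random bytes.
dd if=/dev/urandom of="$SRC/repository.data" bs=1024 count=1024 2>/dev/null

# Split into 300 KB segments named OffsiteArchive.data.aa, .ab, .ac, ...
split -b 300k "$SRC/repository.data" "$SRC/OffsiteArchive.data."

# Reassemble: pathname expansion lists the .?? segments in order.
cat "$SRC"/OffsiteArchive.data.?? > "$SRC/reassembled.data"

# Verify the round trip preserved every byte.
cmp "$SRC/repository.data" "$SRC/reassembled.data" && echo "round trip OK"
```

The trailing dot on the split destination is deliberate; it's the prefix that the generated .aa, .ab suffixes are appended to.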
---
jay gamel wrote:Not a single one in a single place indicated. Why, then does the problem persist?
Well, there must be one somewhere, or at least there was one somewhere. First, let me remind you to restart.

The message that you're getting is from launchd, the primary launch services daemon that controls the running and maintenance of most background processes in Mac OS X. launchd is configured entirely by the presence of .plist documents installed in /Library/LaunchDaemons, /Library/LaunchAgents, and/or ~/Library/LaunchAgents (at least for user-installable processes; there are more of these folders in /System). Removing the .plist for a service isn't sufficient to stop it. After removing the .plist document, launchd either needs to be restarted (which basically means restarting your OS), or you can use a command like this in the Terminal:

launchctl stop com.qrecall.monitor

If neither the stop command nor restarting your computer solves the problem, then you haven't eliminated all of the QRecall-related .plist documents. launchd runs on these configuration documents, and it doesn't start services that don't have a .plist configuration document somewhere. You'll probably find the errant file in the ~/Library/LaunchAgents folder of some other user. If you find them in another user's account, trash them and restart.
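A quick way to see what launchd is still picking up is to search the configuration folders by name. This sketch uses a mock Library layout under /tmp so it's safe to run anywhere; on your own machine, point the search at the real folders listed above:

```shell
# Mock layout standing in for /Library and ~/Library (safe to run anywhere).
MOCK=/tmp/launchd-demo
mkdir -p "$MOCK/LaunchDaemons" "$MOCK/LaunchAgents"
touch "$MOCK/LaunchAgents/com.qrecall.monitor.plist"

# Any file listed here is a configuration document that will cause
# launchd to keep starting the associated service.
find "$MOCK/LaunchDaemons" "$MOCK/LaunchAgents" -name 'com.qrecall*'

# On a real system (sudo needed to read other users' home folders):
# find /Library/LaunchDaemons /Library/LaunchAgents \
#      /Users/*/Library/LaunchAgents -name 'com.qrecall*' 2>/dev/null
```

An empty result means launchd has nothing left to launch.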
---
Jay,

It would appear that QRecall was installed at some point, but never completely uninstalled. You can manually uninstall QRecall. The steps for doing so are in the QRecall help, but since you probably don't have the QRecall application handy, here they are again.

To manually uninstall QRecall:

1) Stop all running actions and quit the QRecall application.
2) Delete the QRecallMonitor Login Item from your account preferences (Mac OS X 10.4 only).
3) Delete any files beginning with com.qrecall from the /Library/LaunchDaemons, /Library/LaunchAgents, and/or ~/Library/LaunchAgents folders.
4) Restart your computer.

This should eliminate the problem that you're encountering. You can continue, if you want to be thorough:

5) Delete the /Library/Application Support/QRecall and/or ~/Library/Application Support/QRecall folders.
6) Delete all files in ~/Library/Preferences that have names beginning with com.qrecall.
7) Delete the ~/Library/Preferences/QRecall folder.
8) Delete the ~/Library/Contextual Menu Items/QRecall CM plugin item.
9) Delete the QRecall application.
---
Damian Huxtable wrote:Is it possible to combine two archives?
Not at this time.
---
Prion wrote:but I assume this is harmless. Correct?
Probably. The details of that error message will tell you the name of the extended attribute that couldn't be read and the BSD error code reported by the OS as to why. If you want, send a diagnostic report (Help > Send Report) and I'll take a look at it.

Extended attributes are usually small, non-essential bits of extra data attached to a file or folder. They may be important, but often are not. The reason you can't read them could be some restriction (via an access control list) on reading that attribute, the attribute could be malformed (I'm not sure how that would happen), it could have been a race condition (the attribute was deleted before QRecall had a chance to read it), or the directory structure of the volume could be damaged.

If it continues to happen on the same file, I'd start by repairing the volume using Disk Utility or something like DiskWarrior. If that doesn't fix it, you might consider deleting that file or stripping it of its extended attributes (cp -X). If you're interested in investigating the problem, you could try using the xattr command-line tool to list the extended attributes of the file (xattr -l file) and see if you get a similar problem reading them. If you do, try to delete the offending one (xattr -d attr-name file). There's no man page for xattr; use xattr -h to get its meager help info.

If it doesn't happen again, then it might simply have been some other process changing the attributes at the moment QRecall was trying to capture them. That can happen, and there's no way to protect against it other than to take another capture.
---
Richard Kontra wrote:I am attempting to capture my home folder using the Capture Assistant. I have noticed that if Capture Assistant is not the active application, disk activity (I think it is scanning my home folder for files) ceases. I was wondering if this behavior is by design or is there some setting that will allow background processing?
Oddly, it is by design. I say oddly, because I went and looked at the code and (sure enough) it deliberately stops scanning when the assistant window isn't active. Which is strange, because I can't think of any good reason why, although I clearly had one when I wrote it...
My home folder contains a lot of small files, so the scanning operation is taking a long time (currently 45 minutes for 16.3 GB and still counting...). I assume this is normal behavior.
I would guess not. I have some folders that I use for testing which contain over a million tiny files, and the assistant can sum them up in less than 10 minutes. So either the scanner is stuck or you have a frightening number of files. If you really do have a few million files, you're just going to have to wait. If not, I'd quit QRecall, relaunch it, and try again, this time leaving the assistant window active.

Looking over the code, I realized just how dated it was (written circa OS X 10.4), so I took some time to rewrite the scanner using modern APIs and thread management. The new code is significantly more efficient and won't stop if you switch away from the assistant window. Sadly, you'll have to put up with the way it works now until the new code migrates into the next beta release.

James
---
Jochen,

Thanks for the bug report. It's a problem with the new scheduler code that tries to determine whether two archives share the same physical device (i.e. a single hard drive or a RAID), instead of simply testing whether two archives are stored on the same logical volume. The scheduler didn't check for the possibility of a logical volume that doesn't have a BSD/IOKit device name associated with it. I don't know under what circumstances that would be true, but it obviously is for you.

And it was great timing, too! I was just hours away from releasing a new beta. I've made the fix to the scheduler and rolled it into the new beta. Expect to see an automatic update available later today.

For future reference, you can also send a diagnostic report (Help > Send Report...). The report will include any recent QRecall crash logs along with other OS and hardware configuration information that can make debugging these kinds of problems simpler.
---
Chris,

The solution will depend on what your goals are.

If you just want to create a second, coarser-grained duplicate of your data, the most efficient (and convenient) solution is to create a second capture strategy. Let's say you capture "My HD" to the "Upstairs" archive every few hours. You could schedule a second capture of "My HD" to the "Downstairs" archive on the Time Capsule to run once a week, and schedule it to merge, compact, and verify about once a month. This would preserve all of your critical data daily, and give you a weekly backup should your primary backup system fail.

If you're trying to maintain a complete backup of your regular backup, then simply copying the backup on a regular basis is probably the simplest and most efficient solution. There are a number of file copy/sync/cloning utilities that will schedule a copy to occur on a regular basis (UNIX geeks can do it with cron and cp). You'll want to avoid starting a copy while a QRecall action is in progress so that you don't make a copy of an archive that's in flux. (It's not a disaster if it happens; you'd still preserve most of the data, but you'd probably have to repair the archive before you could use it.)

You could use QRecall to capture your primary archive to another archive, but it wouldn't be as fast as a straight copy. For one thing, it would mean that you'd have to recall the entire archive from the secondary archive before you could do anything with it. But the other problem is efficiency. That method, and utilities like rsync, encounter the same overhead: the entire source and destination archives have to be read in their entirety, and then any changes detected are written to the new archive. A straight copy reads the source file once and writes the destination file once, which is actually quicker.

Rsync does work well between two computer systems, where an instance of the rsync program can be started on the remote computer. This is how I keep my off-site backups.
I have a couple of modest archives, one on my development system and one on my server (located in a data center across town), where I regularly capture my most critical files, like the source code to QRecall. Once a day I rsync all of them. The archive files are read locally by their respective rsync processes (which is quite fast) and any differences (usually fairly small) are transferred over the Internet. Keeping 100 GB of data synced between the two usually takes about an hour each morning. By comparison, it would take almost an entire day to transfer that data through my cable-modem connection. But this only works because rsync can run on both machines; with a Time Capsule, rsync will fall back to running one process on your local computer and read all of the remote (Time Capsule) data over the network, which is no faster than a copy.

Another good solution is to create rotating backups. Set up two identical capture strategies to different archives, let's call them VaultA and VaultB. Create VaultA on one removable drive and VaultB on a second removable drive. On all of the actions, set the "Ignore if no archive" condition. Now, plug the drive with VaultA into your computer and let it capture your files on a regular basis for a week or a month. Unplug the drive and take it to an off-site location, like a safety deposit box, return home, and plug in the drive with VaultB. Repeat the process each week or month. By retrieving the second drive, you'll have access to all of your captured data going back for years, and if a meteor strikes your home or bank one night (cross your fingers that it's the latter), you still have a fairly recent backup of everything.

About mid-way down the list of "things I'd like to add to QRecall" are fall-over archives: basically, the ability to schedule an action that would transfer just what's changed in one archive directly to another archive (or into some other off-site/Internet-friendly format). I think that's exactly what you want, but that's going to take some engineering.
---
Currently, each action is saved as an individual .qraction file in <your home folder>/Library/Preferences/QRecall/Actions. Back up that folder and you've saved all of your actions. You can restore actions by placing .qraction files in this folder, but you have to get the scheduler to rescan the folder before they will appear in QRecall. The simplest way is to restart the computer or the QRecallScheduler process. If you don't want to do that, open QRecall and do anything that adds or removes an action, such as duplicating an existing action and then deleting it. Whenever actions are added or removed, the scheduler re-reads all of the action documents it finds there.
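Backing up that folder can be as simple as one tar command. A mock Actions folder under /tmp stands in for the real one here, and the file names are made up for the demo:

```shell
# Mock stand-in for ~/Library/Preferences/QRecall/Actions.
ACTIONS=/tmp/qrecall-demo/Actions
mkdir -p "$ACTIONS"
touch "$ACTIONS/Capture Home.qraction" "$ACTIONS/Merge Weekly.qraction"

# One tarball captures every .qraction document in the folder.
tar -czf /tmp/qrecall-demo/actions-backup.tgz -C "$ACTIONS" .

# Restoring is the reverse; remember the scheduler must rescan the
# folder before restored actions appear in QRecall:
# tar -xzf /tmp/qrecall-demo/actions-backup.tgz -C "$ACTIONS"
```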
---
Cody, I think everything you're interested in will be in 1.3, which will focus heavily on filtering and automation.
---
Cody Frisch wrote:I think it would be nice to see the "actions" list available through a menubar item, easily suspend, stop, reschedule, and pause.
That's an excellent suggestion. I'm adding it to the wish list now.
Also I'd love to see "grouping" in the actions list. Mostly for organization purposes, since it shouldn't matter beyond that really. Just group by archive; though obviously one can sort now, it would be nice to have everything for one archive then sorted by capture, merge, compact, etc.
I've had similar requests to this in the past, but I'm loath to add complexity unless it also adds functionality. As for grouping actions, most installations of QRecall have, at most, six to eight actions. I'd consider some kind of grouping if it solved some specific problem, but right now my thinking is "how lost can you get with eight items?" Would something like the ability to sub-sort on other columns be sufficient? That is, sort first by archive and then by action (or next run time, or schedule, ...).
I'd also love to see "end actions". When one action completes it automatically calls another action. Rather than relying on the scheduler having a different time.
Similarly, if you could describe what you're trying to accomplish that the current interface can't do, I'll be more than eager to investigate a new interface or feature.
---
Paul,

Thanks for the suggestion and the feedback. I'm not entirely sure that what you're asking for is technically feasible, but I'll add it to the wish list. I realize that there need to be other means of accessing the data in an archive, and this is a priority for future versions.
---
Gary K. Griffey wrote:It seems that the directory that contains the VM package files is not being updated when a virtual machine inside of it has been updated...
I might have explained this earlier, but if not: modifying a file in a folder (directory) does not change the modification date of the directory. The modification date of a directory indicates that the contents of that directory "file" have changed. (In UNIX, a directory is conceptually a file containing metadata about other files.) If you create, delete, or rename an item in a directory, this changes the logical contents of the directory "file", which in turn updates its modification date. Merely changing the contents of a file within that directory does not change the directory, and thus does not change the directory's modification date. It only changes the modification date of the file that changed. There are a variety of reasons it works this way, some abstract (UNIX's "everything's a file" philosophy) and some practical (the less that has to change, the faster changes can be made).

Now, I'm sure you're wondering why sometimes when you save a file the modification date of the enclosing folder changes. The reason is that many applications use a technique called a "safe save" to write documents. The contents of the document are written to some temporary, often invisible, location on the drive. When the document has been completely saved, the original document is deleted and the new document is moved into its place. This avoids the possibility of destroying the only good copy you had of the document if the save operation failed to complete successfully. The acts of deleting and renaming the document files are directory changes, which naturally update the modification date of the enclosing folder. Really huge files, like VM images and QRecall archives, can't be updated in this manner; it's just not practical. Updates to those kinds of files only change the data within the file and don't touch the enclosing folder.
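A safe save boils down to two shell commands. The file names here are made up for the demo:

```shell
# Minimal sketch of a "safe save": write to a temporary file, then
# move it into place so the original is never left half-written.
DIR=/tmp/safesave-demo
mkdir -p "$DIR"
echo "version 1" > "$DIR/Document.txt"

# Write the new contents somewhere temporary first...
echo "version 2" > "$DIR/.Document.txt.tmp"

# ...then rename over the original. The delete-and-rename is a
# directory change, so the enclosing folder's modification date updates.
mv "$DIR/.Document.txt.tmp" "$DIR/Document.txt"

cat "$DIR/Document.txt"   # prints "version 2"
```

If the save fails partway through, the original Document.txt is still intact; only the temporary file is lost.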
Now you may notice that the last-modified date of a QRecall archive changes when it's updated, and that's because QRecall goes out of its way to update (touch) the enclosing package folder when it's done. This is so that both you and Spotlight see that the contents of the archive have changed. However, software is not obligated to do this, and a folder's modification date can remain unchanged for months, even though its contents are changing constantly. If you're curious, you can see this in the Finder too. Create a folder and save a small text file (e.g. TestFile.txt) inside that folder. In the Terminal, use a command like "echo 'hi' >> TestFile.txt" to append some new data to that file. The modification date of the file will change, but the modification date of the folder will not.
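The same experiment, scripted. This uses GNU stat syntax; on OS X, substitute `stat -f %m` for `stat -c %Y`:

```shell
# Show that appending to a file leaves the folder's mtime untouched.
DIR=/tmp/mtime-demo
mkdir -p "$DIR"
echo "hello" > "$DIR/TestFile.txt"

before=$(stat -c %Y "$DIR")
sleep 1

# Appending only rewrites the file's data, not the directory "file".
echo "hi" >> "$DIR/TestFile.txt"
after=$(stat -c %Y "$DIR")

[ "$before" = "$after" ] && echo "folder mtime unchanged"
```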
---
Gary K. Griffey wrote:I guess as far as running incremental captures with QRAuditFileSystemHistoryDays = 0.0 is concerned...my concern is that there could be other folders/files that are also being inadvertently "missed" by FSEventsd...it just seems odd to me that a virtual machine folder is the only one that could experience this issue...although I have no proof, of course, that other folders are not being backed-up...
As I've mentioned, virtual machines do not interact with the file system the way other software does. It's been my experience that file system events are reliable, which is why I was initially very reluctant to implicate them as the source of your problems. If they weren't, I'd have a lot of complaints from other QRecall users, and thousands of Time Machine users would be tearing up Apple's support forums; neither of which has happened.
Certainly, the overhead for performing the full deep system scan each time can be huge...no doubt...possibly, if you believe this to be isolated to virtual machine folders only...
It's not horrendous, but it will add 10-20 minutes to each capture.
I will take your advice and split this into 2 backup actions...one for the entire volume that specifically excludes the single VM directory...and another that only includes the VM directory...since you stated that the named target directory itself is always "deep scanned"...and only sub-directories employ the FSEventsd logic.
Don't exclude the VM directory from the full capture. Excluding an item treats the item as if it didn't exist, which will create a layer where your VM folder doesn't exist at all. Instead, keep your current action that captures the entire volume, then create a second action that captures just the VM folder. Schedule the new action to run immediately after the first. And please keep me posted on any future developments.