Mark Gerber wrote:Some of the programs I use recommend their databases not be backed up while the program is running. Specifically, I'm thinking of DEVONthink Pro, PowerMail, and SOHO Organizer (which uses OpenBase).
That's correct.
... And, of course, I'd like to capture these files a few times during the day. It's my impression the potential for damage is due to the fact that these databases need to be closed before copying; otherwise an incomplete file will be written.
Also correct. This was discussed some time back in the "QRecall and CoreData" thread.
For this purpose, does QRecall do anything different in capturing quanta so I wouldn't have to be concerned about quitting the program to ensure a complete, undamaged capture?
Not at the moment, but I have plans to address this (and similar problems) in an upcoming release. To specifically address the issue of databases, I'm planning a new filter option that will ignore a folder full of files if any of those files are currently open for modification (write access). The capture would always be "safe" in that it would only occur if all of the files are closed.
By the way, I just found the screencasts on your home page. They are well done and present the information very clearly. I look forward to seeing more, in particular, one that clarifies the rolling merge options.
I wanted to do one for rolling merges too, but it needs some wickedly complicated animation and my Final Cut Express skills weren't up to it. If I get some time to extend the series, that will be the first one I attack.
|
|
|
Bruce Giles wrote:Note that this system is still running Tiger Server, not Leopard Server, if that makes a difference.
Whoops, that makes a huge difference. I inadvertently linked the QRTouchXAttrItems tool against the 10.5 SDK instead of the 10.4 SDK. Download QRTouchXAttrsXItems and try it again. This version should work on 10.4 and 10.5.
|
|
|
Bernard LECLAIR wrote:Do you plan to support other languages (French, Spanish, German, ...)?
Bonjour, I would love to localize QRecall to other languages, but the resources and time required to translate it aren't currently available. I've put localization on the to-do list for version 1.3 (which is tentatively scheduled for the summer of 2009). I'll seriously look into it again then. If anyone else would like to see QRecall translated into another language, please let me know and cast a vote for what language, or languages, you'd like to have.
|
|
|
Bruce Giles wrote:First of all, congratulations on the release of version 1.1! Today, I upgraded our XServe running Tiger Server to QRecall 1.1. Everything seems to have worked perfectly.
That's good news.
... After it completed, the archive window reported that the size of the captured layer was about the same as typical recapture runs under 1.0.1. But the number of items captured was over 7000, where it's typically no more than around 300. Is this because it picked up (captured) extended attributes that weren't captured in 1.0.1?
That's very likely. The rules that determine when an item is recaptured changed subtly between 1.0 and 1.1. Most likely you have items with directory information that triggered 1.1 to recapture them. One significant change is that 1.1 will now recapture an item if its attribute modification date changes, even if none of its attributes actually changed. In these cases, QRecall will recapture the item and store a new metadata record for that item. Since the contents of the files were unlikely to have changed, no new file data is added to the archive, just a new metadata record. Note that QRecall 1.1 won't recapture an item just because it has extended attributes and the previously captured version doesn't. See the "Utility to recapture items with extended attributes" thread for details.
Does my upgraded archive now contain everything that it would have had if I had started a new archive instead of upgrading the old one?
I don't know how many files you have on that volume, but I'll guess it's more than 7,000. I'm sure there are lots of files which weren't recaptured. If true, then the latest layer is certainly busy, but probably doesn't contain every single item on the volume.
|
|
|
Warren Michelsen wrote:I have a QR archive which QR says is bad.
First of all, please send a diagnostic report (QRecall version 1.1 or later, choose Help > Send Report...). I'm always interested in damaged archives that don't get automatically repaired by the next action or that are damaged for obvious reasons, like a corrupted volume or a failing drive.
I selected the option to recover to a new archive. When recovering, does QR move data from the old to new archive or does it copy those data?
With the copy option unchecked, QRecall will repair the archive in situ. Any corrupted data will be erased and the recoverable data is reassembled into a usable archive. With the copy option checked, the recoverable data is transferred into a new archive and the original is untouched. There must be enough space on the new archive's volume to contain a copy of all of the recoverable data from the damaged archive.
The reason I ask is: There is only 71 GB of free space remaining on the QR archive volume but the archive itself is 398 GB. Clearly there is not enough room on the archive volume to recover much if data are copied instead of moved to the new archive.
You won't be able to repair using the copy option unless you find another volume with at least 398 GB of free space. Try to repair the archive with the copy option off. The copy option is really for special circumstances (such as when the damaged archive is on a failing drive or a read-only volume) or for when you might want to repair the archive several times with different options. If you choose to repair but not copy, any damaged data and any unrecovered data is erased. The possible downside is if you choose not to recover orphaned or partial files. These files will also be erased during the repair, and once erased you don't have the option of running the repair again to get them back. But recovering orphaned and partial files is for extreme cases where you absolutely must recover every possible scrap of salvageable data. If you just want to get the archive back into shape so you can start capturing again, leave them off.
|
|
|
An update: The latest version of QRecall will force the Activity window into all spaces. This can now be turned off using the QRMonitorSpacesJoinAll expert setting. See the Advanced QRecall Settings thread for the details.
|
|
|
ubrgeek wrote:Seems to be working now. Odd
Since Delete Item is an interactive command, there will be a pause following the delete action while QRecall re-reads the archive and updates the window. So it could take several seconds before the item actually disappears from the display.
|
|
|
ubrgeek wrote:Where is this functionality?
Select one or more items in the archive browser window, then choose Archive > Delete Item... You must be running 1.1.0(33) beta or later.
|
|
|
Christian Roth wrote:Is there a way to optimize that in some way to read and write larger chunks? I fear not in that the access offsets will probably be random in nature, and caching the whole file in memory will not be a solution (though technically possible in my case since I have enough internal RAM to hold the complete file).
Until I update QRecall to run in 64-bit mode, caching the hash.index isn't an option (it's an address space issue, more than a physical RAM issue). I've looked at several techniques for speeding up hash.index file access over the years, as it's one of the biggest performance bottlenecks in the system. The problem is trying to second guess the OS, which is already doing its own optimization. Local disk systems and network volumes all implement their own caching and read-ahead optimization. Some work extremely well with QRecall while others drag it into the mud. Implementing my own caching and read-ahead optimization may speed up the worst cases, but would probably slow down the best ones.
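The trade-off being weighed here, many tiny scattered reads versus one bulk read of the whole index, can be sketched in a few lines. Everything in this example (file name, record count, number of lookups) is illustrative; only the idea of fixed-size records accessed at random offsets reflects the discussion above:

```python
import os, random, tempfile

RECORD = 12                      # fixed-size records, as in an index file
COUNT = 100_000                  # toy index of 100,000 records

# Build a toy index file of COUNT fixed-size records.
path = os.path.join(tempfile.mkdtemp(), "toy.index")
with open(path, "wb") as f:
    f.write(os.urandom(RECORD * COUNT))

# Scattered access: seek + read one record at a time, in random order.
offsets = random.sample(range(COUNT), 1000)
scattered = []
with open(path, "rb") as f:
    for i in offsets:
        f.seek(i * RECORD)
        scattered.append(f.read(RECORD))

# Bulk access: read the whole file once, then slice records in memory.
with open(path, "rb") as f:
    data = f.read()
bulk = [data[i * RECORD:(i + 1) * RECORD] for i in offsets]

assert scattered == bulk         # same records either way; only the I/O pattern differs
```

Whether the scattered or bulk pattern wins depends entirely on what the OS and the volume's own caching do underneath, which is exactly the second-guessing problem described above.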
Do you know in advance what percentage of the file needs to be rewritten, so one could estimate if reading into memory, modifying, writing back as a whole may be faster than scattered individual file accesses?
That's a good question, and it's one technique I plan to revisit in the future. Speeding up the quanta and names indexes is high on my list of optimizations.
The archive probably got corrupt either because a user in the family shut down their Mac while a capture was in progress or another user in the family (now, that's me...) fiddled with the network settings of the NAS the archive lives on while a capture was in progress.
99% of the time, shutting down a system before it can complete a capture should not cause any problems. The next action should auto-repair the archive and continue normally. On the other hand, I can't predict what effect "fiddling" with the network settings will have.
I'll see if I can wait long enough for the hash.index update to finish or if it will be faster to fetch the archive from the networked volume to local disk, indexing there, then moving it back to the NAS.
I suspect that just letting the reindex run its course will be pretty close to the optimal speed. If you feel adventurous and have enough local disk space, you could copy the archive from the NAS to a local drive, reindex it, then copy just the repaired index files back into the original repository package. That works because the Reindex command does not alter the primary repository.data file, although you'll have to be careful that nothing tries to update the original archive while you're doing this. That might be faster -- I can't say for sure because it involves a lot of additional copying.
|
|
|
Christian Roth wrote:I am seeing the issue that after what looks like a mostly finished Reindex of my archive, the process stalls in the "Reading layers" stage. The QRecallHelper application is still actively doing something, and I checked using Instruments that it is alternately reading and writing chunks of 12 bytes in size to a single file.
QRecall is updating the quanta hash. The "Reading layers" message is an artifact. It just happened to be the last progress message that got put up before the reindex finishes. Just before the reindex command begins the process of closing the archive it cleans up the quanta index (mostly to get back memory), which is where it appears to be stuck. I'll make a note to insert an additional status message in there so it's more obvious what's really happening. If QRecallHelper is reading and writing 12 byte records, then it's probably doing what it's supposed to be doing, which is writing all cached hash records to the hash.index file. The hash.index file never changes size. It's the largest data structure in a QRecall archive and consists mostly of a huge array (i.e. tens of millions) of 12 byte records. It allows QRecall to quickly find any quanta in the database. Normally, flushing the records to the hash doesn't take more than a few minutes. However, it can be influenced greatly by the CPU, amount of RAM, archive access speed, competing processes, etc. I have a MacMini that captures to a 1TB archive that can get stuck updating its quanta index for hours (in fact, it's upstairs doing that right now).
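For illustration, a fixed-size table of 12-byte records like the one described above can be sketched as follows. The record layout (a 4-byte hash tag plus an 8-byte archive offset) and the open-addressing scheme are assumptions for the example, not QRecall's actual format:

```python
import struct

SLOTS = 1 << 16                   # fixed table size; real archives use tens of millions
EMPTY = b"\x00" * 12
table = bytearray(EMPTY * SLOTS)  # the table never changes size, like hash.index

def put(quanta_hash, offset):
    """Store a 12-byte record: 4-byte hash tag + 8-byte archive offset (assumed layout)."""
    slot = quanta_hash % SLOTS
    while table[slot * 12:slot * 12 + 12] != EMPTY:
        slot = (slot + 1) % SLOTS            # linear probe on collision
    table[slot * 12:slot * 12 + 12] = struct.pack(">IQ", quanta_hash & 0xFFFFFFFF, offset)

def get(quanta_hash):
    """Return the archive offset for a quantum, or None if it isn't stored."""
    slot = quanta_hash % SLOTS
    while table[slot * 12:slot * 12 + 12] != EMPTY:
        tag, offset = struct.unpack(">IQ", table[slot * 12:slot * 12 + 12])
        if tag == quanta_hash & 0xFFFFFFFF:
            return offset
        slot = (slot + 1) % SLOTS
    return None

put(0xCAFE, 4096)                 # quantum 0xCAFE lives at archive offset 4096
assert get(0xCAFE) == 4096
assert get(0x1234) is None
```

A structure like this makes lookups cheap, but flushing millions of cached records back to random slots in the file is exactly the scattered-write pattern that can take a long time on slow volumes.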
Is there any actual progress even being made? What is getting read (and written back?) to hash.index at this stage,
Given that you're seeing the QRecallHelper process continue to read and write 12 byte records, I'm pretty confident it isn't stuck. However, I've been wrong before.
and why is it taking so long?
That's a complex question with a lot of variables. In my experience, one of the biggest factors is speed of access to the archive. If the archive is on a networked volume or USB connection, the overhead of reading and writing lots of tiny records can be high, which can dramatically slow the process of updating the hash. Hopefully by the time you read this QRecall has finished and moved on. In the unlikely case that it really is "stuck," take a sample of the QRecallHelper process and send it to me along with a diagnostic report (Help > Send Report...). You can obtain the process sample in Activity Monitor by locating the running QRecallHelper process and clicking on Sample Process, then saving the results to a text file. If you're a command-line fan, you can use the 'sample' tool to do the same thing. One last question: what prompted you to reindex the archive in the first place?
|
|
|
Mark Gerber wrote:Do the claims of space-saving efficiency apply to graphics files, too?
In the interest of full disclosure, I have to say "it depends," but in your case the answer is "yes."
For instance, if I add a layer or two to a 300 MB Photoshop or Painter file, will QRecall only back up those layers so that I don't end up with two 300 MB files? Or if I make changes to an existing layer, are only those changes added to the back up file?
This is exactly why I wrote QRecall. In fact, Photoshop documents were used as the first test files for QRecall. Photoshop and similar graphics applications tend to write the layer data sequentially in the document file. Inserting a layer tends to just "push" the data in the other layers to a new position in the document. QRecall can detect this "shifted data," but by default it doesn't look for it. If you try QRecall, make sure you bump the "Shifted Quanta Detection" in the Archive Settings up a notch or two.
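Detecting shifted data is commonly done with an rsync-style rolling checksum: hash each block of the previously captured file, then slide a window over the new file one byte at a time, using a cheap checksum to spot candidate matches. This is a generic sketch of the idea, not QRecall's actual algorithm; the block size and checksum are toy choices:

```python
import hashlib

BLOCK = 8                        # toy block size; real quanta are much larger

def weak(data):
    """Cheap checksum that can be 'rolled' forward one byte at a time."""
    return sum(data) & 0xFFFF

def signatures(old):
    """Map weak checksum -> strong hash for each aligned block of the old file."""
    sigs = {}
    for i in range(0, len(old) - BLOCK + 1, BLOCK):
        block = old[i:i + BLOCK]
        sigs[weak(block)] = hashlib.sha256(block).digest()
    return sigs

def find_shifted(new, sigs):
    """Slide a window over the new file; report offsets of already-stored blocks."""
    hits, s = [], weak(new[:BLOCK])
    for i in range(len(new) - BLOCK + 1):
        block = new[i:i + BLOCK]
        if s in sigs and hashlib.sha256(block).digest() == sigs[s]:
            hits.append(i)                      # this block exists in the archive
        if i + BLOCK < len(new):                # roll: drop one byte, add the next
            s = (s - new[i] + new[i + BLOCK]) & 0xFFFF
    return hits

old = b"AAAAAAAABBBBBBBB"        # two 8-byte "layers"
new = b"XXX" + old               # inserting data shifts everything by 3 bytes
assert find_shifted(new, signatures(old)) == [3, 11]
```

Even though the inserted bytes pushed both blocks to unaligned offsets, the sliding window still finds them, so only the new 3 bytes would need to be stored. Scanning at every byte offset costs more CPU than aligned comparison, which is consistent with shifted-data detection being off by default.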
Is it practical to have QRecall back up these as files as I'm working?
Yes. Photoshop and similar applications write documents in their entirety when you save them, so the contents of the documents aren't in flux while you work. This means that saved documents are captured completely. The only potential problem would be if QRecall was capturing an item at the same instant that you were saving it. Regardless, the next capture would absolutely capture the current version.
When I work with Painter and, because I have had reason in the past not to trust their native RIFF format, I typically save iterative versions of a working file. At the end of the day, I might have an additional five or ten or fifteen saved files. I'll usually delete all but the last three or four and keep those until the next day when I start again, repeating this until the project is finished (which can be anywhere from one to six weeks after it's started), at which time the final version is archived to DVD and the iterative versions deleted. So it might look like this:
(clip) QRecall won't have any problem with that at all. It will see most of these files as semi-duplicates of each other and store only one copy of the data.
I'm trying to figure out a back up strategy that will give me the protection I need without filling up a drive too quickly.
Given your workflow, I'd say that QRecall is perfectly suited. But don't take my word for it. Try QRecall and see how it works for you. After using it for a while, check the log and look for the capture action details. The log will record how efficiently QRecall is detecting duplicate data. For the best performance, set up two archives: one archive to back up your whole system, capturing the entire startup volume every night, and a second archive just for your working project files. The second one can be set up to capture repeatedly during the day. The small size of the archive and capture target ensures maximum performance and minimum interference with your work. And remember to set the Shifted Quanta Detection on the project archive. This is exactly how I have QRecall configured. One archive backs up my entire development system once a day, and a second archive is set up to capture my QRecall project files every 20 minutes between 6 AM and 9 PM.
|
|
|
To Manually Uninstall QRecall:
- Stop all running actions and quit the QRecall application.
- Delete the QRecallMonitor Login Item from your account preferences (Mac OS X 10.4 only).
- Delete any files beginning with com.qrecall from the /Library/LaunchDaemons, /Library/LaunchAgents, and/or ~/Library/LaunchAgents folders.
- Restart your computer.
- Delete the /Library/Application Support/QRecall and/or the ~/Library/Application Support/QRecall folders.
- Delete all files in ~/Library/Preferences that have names beginning with com.qrecall.
- Delete the ~/Library/Preferences/QRecall folder.
- Delete the ~/Library/Contextual Menu Items/QRecall CM plugin item.
- Delete the QRecall application.
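The file-deletion steps above can be collected into a small script. This is a dry-run sketch: it prints each command instead of executing it (clear the RUN variable to actually delete), the paths are taken directly from the list, and the application is assumed to live in /Applications. Removing the 10.4 Login Item and restarting remain manual steps:

```shell
#!/bin/sh
# Dry-run sketch of the manual QRecall uninstall steps.
# RUN=echo prints each command; set RUN= (empty) to really delete.
RUN=echo
$RUN rm -f /Library/LaunchDaemons/com.qrecall.*
$RUN rm -f /Library/LaunchAgents/com.qrecall.*
$RUN rm -f "$HOME"/Library/LaunchAgents/com.qrecall.*
# (restart the computer before removing the remaining items)
$RUN rm -rf "/Library/Application Support/QRecall"
$RUN rm -rf "$HOME/Library/Application Support/QRecall"
$RUN rm -f "$HOME"/Library/Preferences/com.qrecall.*
$RUN rm -rf "$HOME/Library/Preferences/QRecall"
$RUN rm -rf "$HOME/Library/Contextual Menu Items/QRecall CM plugin"
$RUN rm -rf /Applications/QRecall.app    # assumed install location
```

Review the printed commands before removing the echo; rm -rf is unforgiving.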
|
|
|
Mark Gerber wrote:Will QRecall play nice with itself or do I need to make sure the schedules for each of the computers don't overlap? I'd guess that's less a concern when it's writing to three different archives, but what about if I choose to add licenses so they are writing to one?
Using separate archives simultaneously is no problem. You may encounter some slowdown as multiple systems vie for drive and network bandwidth. QRecall will arbitrate between multiple actions or systems trying to use a single archive at once. The other scheduled actions will simply wait until the first one is done.
|
|
|
Mark Gerber wrote:I plan to get an external drive that will be used for our back up files--I guess this can be something like Apple's Time Capsule (is this an NAS drive?) or one that can be attached to one of the computers and accessed by all through the network.
Yes, the Time Capsule is a type of NAS drive. A NAS drive or a drive on one system that you share would be equivalent.
Would I need QRecall running on each computer and then on each user's account (as many as six copies) to back up everything to this external drive?
That's not necessary. If you install QRecall in an administrative account on each computer and then pre-authorize it to use administrative privileges, it will be able to capture all of the files for all of the users on that system.
If I understand correctly, I can use one license for all our back up needs, but that creates one database for each user (or more in this case?).
Reuse the same identity key on each system, but create a separate archive for each system. The one installed copy of QRecall on each system would capture everything on that machine to its own archive.
To make the most efficient use of the space on the hard drive, how many licenses would you recommend?
To take full advantage of QRecall's ability to merge duplicate data (multiple copies of the operating system, applications, music, ...) you need to capture all three systems to a single archive. That would require three identity keys, so that QRecall can keep the files from all three systems separate in the archive. I'd suggest you just start with a single identity key, capture the three systems to three different archives, and see how it goes. If you start to run out of disk space you can always turn on compression and/or buy additional licenses to share a single archive later.
|
|
|
Steve Mayer wrote:I've got OSX configured to have double arrows at the top and bottom of the scroll bar.
That's probably it. You are correct: if the scroll bar region gets too small to fit, the scroll bar control simply disappears. I thought I had set the minimum size of the layer pane large enough to prevent this, but I didn't consider double arrows at the top and bottom of the scroll bar.
|
|
|
|
|