|
Tellan, if I understand your question correctly, QRecall does not skip items during a recall. It will skip over items that it is reasonably sure have not been modified during a capture action, but the same rules can't be applied during a recall action. When restoring items, QRecall replaces items in their entirety with the version captured in the archive. To improve performance, it will read existing items and compare their data with the version in the archive. An existing item is overwritten only if it is different, but the ultimate result is that every item on the destination volume is either rewritten or compared in its entirety. During a recall, QRecall does not skip items that merely appear to be the same; an existing item's metadata is insufficient to guarantee that the item is identical to the version stored in the archive. The only way to be sure that the restored items are identical to their captured versions is to write, overwrite, or compare all of them.
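To make that concrete, here is a minimal sketch of that compare-then-overwrite decision. This is not QRecall's actual code; the restoreItem function and its whole-file comparison are simplifications for illustration only.

    import Foundation

    // Illustration only (not QRecall's implementation): restore one item by
    // comparing the existing file against the archived copy and rewriting it
    // only when the two differ. Either way, every byte gets read or written.
    func restoreItem(archivedData: Data, to destination: URL) throws {
        if let existing = try? Data(contentsOf: destination), existing == archivedData {
            return // identical to the captured version; nothing to rewrite
        }
        // different, missing, or unreadable: overwrite with the captured version
        try archivedData.write(to: destination, options: .atomic)
    }

A real implementation would stream and compare the data in chunks rather than loading whole files into memory, but the per-item decision is the same.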
|
|
|
Adrian Chapman wrote:Unfortunately the Drobo-FS has its own internal file system and from what little I know, the method of checking can be destructive of data! Great!
From the sounds of this Drobo support article, you might be able to fix it with OS X.
|
|
|
Just a follow up: While reviewing the rest of your diagnostic report, I noticed this error during your last verify action:
2012-06-30 14:07:41.546 Failed
2012-06-30 14:07:41.546 cannot read file
2012-06-30 14:07:41.546 Error: I/O error (bummers)
2012-06-30 14:07:41.546 Length: 524288
2012-06-30 14:07:41.547 Pos: 242321696144
2012-06-30 14:07:41.547 File: repository.data
An I/O error means the system was physically unable to read something from the drive. It could mean hardware problems, or it could be caused by a corrupted volume structure (a file position gets translated into a track/sector that doesn't exist, and the drive responds with an I/O error). Because of the ambiguity of I/O errors, this doesn't really narrow down what the exact problem is, but it is one more piece of evidence that points to a problem with the NAS.
|
|
|
Adrian, Thanks for posting your problem and for the diagnostic report. My analysis is that you're having data integrity problems with your NAS drive. My first guess would be cross-linked files, which can cause writes into one file to pollute a different file with unrelated data. The first problem was encountered on 2012-06-28 14:08 when the verify failed:
2012-06-28 15:44:17.686 Archive contains unused space that has not been erased
2012-06-28 15:49:13.313 Pos: 245397938824
2012-06-28 15:49:13.313 Length: 1555152
Under normal circumstances, QRecall fills each unused area of the archive with zeros. It then wraps a small record header around each area to mark where it begins and how long it is. During the verify, QRecall confirms that all unused areas of the archive are, indeed, still filled with zeros. In your case, there was something other than zeros in that region of the archive. This indicates that either some other process has written data into that region of your archive, or the sectors that store that data have become damaged or corrupted. (A rough sketch of this free-space check appears at the end of this post.) Subsequent attempts to repair the archive show similar problems, but this time the damage was in regions of the archive that contained real data:
2012-06-28 17:53:37.933 2464 bytes at 248688532320
2012-06-28 17:54:20.271 13601632 bytes at 250084036664
2012-06-28 17:55:19.349 32744 bytes at 252668562552
2012-06-28 17:55:19.349 16 bytes at 252668595320
2012-06-28 17:58:06.164 4104 bytes at 259718257104
The next repair reported more lost data:
2012-06-28 21:00:20.971 16400 bytes at 245687567768
2012-06-28 21:00:53.052 20280 bytes at 246963311616
2012-06-28 21:00:53.395 2664 bytes at 246963332080
The really important thing to note here is that the file positions in the second report come before the ones in the first report. If the data on the drive were stable, the first repair would have reported these errors too. Instead, the first repair read this area of the archive and found it to be sound, but the next time you ran the repair, the previously sound area had data errors. There are two explanations for this. The drive system could simply be dying a slow death, randomly scrambling or corrupting data. More likely, the volume structure of the NAS is damaged. If the allocation map is invalid, it could cause newly written data (i.e. the index files that get recreated during the repair) to inadvertently overwrite perfectly good data in the primary archive data file. In effect, the repair is destroying the archive. The solution to the latter problem is to use a disk repair utility to ensure the volume structure of the NAS is OK.
As you continued to run more repair actions, QRecall continued to find new areas of your archive that were damaged. Damage to a few records can indirectly impact the contents of multiple layers, so it won't take long before most of the layers in your archive are affected by at least one of these problems. You can review the repair log to discover exactly which files/folders were affected (in most cases, it was just a few files that were lost during each repair).
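For anyone curious what the "unused space that has not been erased" check amounts to, here is a rough sketch using a hypothetical FreeSpaceRecord type and the modern FileHandle API. It is not QRecall's implementation, just the shape of the idea described above.

    import Foundation

    // Hypothetical sketch: verify that an unused region of repository.data is
    // still entirely zero-filled. Any non-zero byte means some other process
    // wrote into the region, or the underlying sectors have been corrupted.
    struct FreeSpaceRecord {
        let position: UInt64   // offset of the unused region in the archive file
        let length: Int        // number of bytes that should all be zero
    }

    func verifyFreeSpace(_ record: FreeSpaceRecord, in handle: FileHandle) throws -> Bool {
        try handle.seek(toOffset: record.position)
        let bytes = try handle.read(upToCount: record.length) ?? Data()
        return bytes.count == record.length && bytes.allSatisfy { $0 == 0 }
    }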
|
|
|
David Ramsey wrote:Something tells me it's not an SQL-type relational with BLOBs...
No, it's not. Conceptually, it could be. QRecall's archive is organized very much like a database. The structure, however, is a custom one, designed for speed and efficiency in performing the billions of lookups needed to recapture files. It's also designed to detect and survive random data loss caused by inadvertent writes, data corruption, or media failure. The latter requirement is why QRecall isn't based on an existing database engine.
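As a purely hypothetical sketch of that design goal (not QRecall's actual record format), a self-describing record with its own checksum is the sort of structure that lets a reader detect a damaged record and step over it instead of losing the whole archive:

    import Foundation

    // Hypothetical record layout, for illustration only: every record carries a
    // header with a magic number, a length, and a checksum of its payload.
    struct RecordHeader {
        let magic: UInt32      // marks the start of a record
        let length: UInt32     // size of the payload that follows
        let checksum: UInt32   // checksum (e.g. CRC-32) of the payload
    }

    // Each record can be tested independently; damaged ones can be skipped.
    func isRecordIntact(header: RecordHeader, payload: Data, crc32: (Data) -> UInt32) -> Bool {
        return payload.count == Int(header.length) && crc32(payload) == header.checksum
    }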
|
|
|
David Ramsey wrote:I suppose I can simply delete the lower archive dated 5/14...
Select both owners and choose Combine Items....
|
|
|
David, Here are some thoughts: Did you change your identity key or replace/reformat a drive around May 19? Go to the very top level of your archive and make sure you only have one owner and one volume. An "owner" is defined by an identity key; everything you capture belongs to that owner. If you change your identity key, a new owner will appear at the top level of your archive and all items captured after that will belong to the new owner. The same thing can happen to volumes if you replace or repartition a hard drive. If that's the case, you can "fix" this situation with the Combine Items... command, which stitches together the history of two owners or two volumes that actually represent the same item.
Also double-check your capture actions to make sure they are capturing the right things. Actions keep track of the items they capture using OS X aliases, which are very smart but can occasionally be fooled into referencing the wrong item. If that's the case, simply remove the incorrect items and re-add the items you want to capture. If you still think that items aren't getting captured, or are getting captured but not showing up, post again and we can dig a little deeper.
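As an aside, if you're curious how a reference like that can silently drift, here is a small sketch that resolves a bookmark (the programmatic form of an alias) and reports where it currently points. The function name is made up for illustration; it is not part of QRecall.

    import Foundation

    // Illustration only: resolve a saved bookmark and report the path it now
    // points to, which may differ from the item originally added if that item
    // was moved, replaced, or lives on a reformatted volume.
    func resolvedPath(of bookmark: Data) -> String? {
        var isStale = false
        let url = try? URL(resolvingBookmarkData: bookmark,
                           options: [],
                           relativeTo: nil,
                           bookmarkDataIsStale: &isStale)
        return url?.path
    }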
|
|
|
David Ramsey wrote:I can recall items from either of these layers, but can't find a way to expose later layers. Pulling the shade control lets me exclude the latest layer, but that's it. Obviously I am missing something terribly obvious.
It's not obvious at all, David, or I'm sure you would have found it. Excellent question. The answer is that QRecall is "helping" you by automatically hiding layers unrelated to the items you're currently browsing. There are two ways to see the other layers:
Go browse a different folder, or better yet, a volume.
Go to the View menu and check Show All Layers. You should now see all of the layers in your archive, and you'll notice that all but two are grey.
When you browse a particular folder, QRecall finds all of the layers that contain copies of items in that folder. The folder that you happen to be browsing contains items that were captured in two layers; none of those items have been recaptured in any other layer. The remaining layers are dimmed to indicate that they don't contain anything related to the items in that folder, or, if you have Show All Layers unchecked, they are hidden altogether. The idea is that when you browse a particular folder, the layers list and the graphic view on the left are trimmed down to just those layers that are relevant to the items you're looking at.
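If it helps to picture the filtering, here is a toy sketch of the idea (hypothetical types, not QRecall's code): only the layers that captured something inside the folder you're browsing stay active, and the rest are dimmed or hidden.

    import Foundation

    // Toy model: a layer knows the paths it captured; a folder is "relevant" to
    // a layer only if that layer captured at least one item inside it.
    struct Layer {
        let number: Int
        let capturedPaths: Set<String>
    }

    func relevantLayers(for folder: String, in layers: [Layer]) -> [Layer] {
        layers.filter { layer in
            layer.capturedPaths.contains { $0.hasPrefix(folder + "/") }
        }
    }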
|
|
|
Neil Lee wrote:I've got a really large archive (500G+) that I'm trying to open to restore a file from, and when I open it in QRecall the app just hangs. I assume it's crunching away at something but there's no UI, no window, and no feedback telling me as such.
Also, I don't know how long it should take for QRecall to open an archive this size, but it's been sitting there for 20 minutes now with no sign of activity.
Lee, Something is wrong. You should, at the very least, see an empty window while the archive is opening. If the application is locked up, then it probably locked up before it started to open the archive. Please do this for me: Open the Terminal application, and issue the following command.
sudo /Applications/QRecall.app/Contents/Resources/sample.sh
(If your QRecall application is somewhere other than the /Applications folder, adjust the path accordingly.) Enter your admin password and wait for the script to finish. Then Force Quit the QRecall application; you can do this from the Dock, via Command+Option+Esc, or using Activity Monitor. Finally, launch QRecall again and see if it starts up normally. If so, send a diagnostic report.
Could there be some kind of dialogue window added so it's obvious what QRecall is actually doing?
Absolutely, and that's on the to-do list. Having said that, however, this isn't your problem. When opening an archive, you should immediately see the archive window, and the details will fill out as the initial data is loaded. Even for a very large archive on a relatively slow medium (i.e. wireless), it still shouldn't take more than 30 seconds before you can start browsing. I just opened a 5 TB archive on my system. The window appeared in approximately 2 seconds, and the browser had loaded and was ready to go in 10. This archive stores data for 4 computers, 10 volumes, and has over 30 million items in over a hundred layers. Admittedly it's on a very fast eSATA RAID-5, but you get the picture.
|
|
|
In the latest Mountain Lion developer preview (DP4), Apple appears to have fixed the preference value retrieval bug that was causing QRecall to crash. QRecall now seems to be running without incident. At this point, I'd encourage anyone who is using the Mountain Lion preview to install and test QRecall.
|
|
|
With the release of QRecall 1.2, the QRecall 1.2 beta program is now closed. A new beta program for QRecall 1.3 will begin soon.
|
|
|
Dawn to Dusk Software is very pleased to announce the release of QRecall 1.2. With a whole new interface and dozens of new features and improvements, we're sure you'll enjoy using QRecall even more. QRecall 1.2 is a free upgrade, available to users running OS X 10.6 and later. Choose QRecall > Check for updates… from the QRecall application menu, or download it directly.
|
|
|
Steve, You win this month's obscure bug contest. You are encountering a problem recovering a file with an unusually large extended attribute. The repair logic is getting tripped up by errors in processing the list of data blocks associated with that extended attribute. I think I've addressed the problem and have built 1.2.0(82) alpha for you to try. Install 1.2.0a82 and repair the archive again, and then try another capture. The problem with the extended attribute data appears to be the result of an earlier data corruption. So if you haven't done so already, I'd suggest repairing the volume that contains this archive before you proceed. Please send another diagnostic report afterwards.
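If you'd like to see how big that extended attribute actually is, the standard listxattr/getxattr calls will report attribute sizes without reading the data. Here is a small sketch (the function name is just for illustration, not part of QRecall):

    import Darwin
    import Foundation

    // Sketch: list a file's extended attributes and their sizes, which makes an
    // unusually large attribute easy to spot. getxattr with a nil buffer returns
    // only the attribute's size in bytes.
    func extendedAttributeSizes(at path: String) -> [String: Int] {
        var sizes: [String: Int] = [:]
        let needed = listxattr(path, nil, 0, 0)
        guard needed > 0 else { return sizes }
        var buffer = [CChar](repeating: 0, count: needed)
        guard listxattr(path, &buffer, buffer.count, 0) > 0 else { return sizes }
        // the buffer holds a run of NUL-terminated attribute names
        let names = String(decoding: buffer.map { UInt8(bitPattern: $0) }, as: UTF8.self)
            .split(separator: "\0")
            .map(String.init)
        for name in names {
            sizes[name] = getxattr(path, name, nil, 0, 0, 0)
        }
        return sizes
    }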
|
|
|
Steve Mayer wrote:So, after deleting the .index files, I tried reindexing the archive. I got an error that while the index had completed, the auto-repair had failed. I tried a capture and that failed after about 2%.
Please send another diagnostic report. I really want to look at the results of that reindex and capture. Thanks.
|
|
|
Steve Mayer wrote:Using the latest beta(1.2.0(80)rc), I've gotten into a situation where I'm unable to capture anything to my archive any longer as I seem to be stuck in a cycle of Index is bad, repair index, repair, error Index is bad, repair incomplete actions.
Steve, Your archive has a broken package index file, but I don't know why repair isn't fixing it. First, please open up a Terminal window, issue the following command, copy the output, and send me the results.
ls -l /Volumes/SpawnBackup/SpawnSmayerHome.quanta
Then you might try opening up the SpawnSmayerHome.quanta package, trashing all of the .index files, and then attempting to repair the archive. In the meantime, I'll look at your log file records in detail.
|
|
|
|
|