Adrian,
Thanks for posting your problem and for the diagnostic report.
My analysis is that you're having data integrity problems with your NAS drive. My first guess would be cross-linked files, which can cause writes into one file to pollute a different file with unrelated data.
The first problem was encountered on 2012-06-28 14:08, when the verify failed:
2012-06-28 15:44:17.686 Archive contains unused space that has not been erased
2012-06-28 15:49:13.313 Pos: 245397938824
2012-06-28 15:49:13.313 Length: 1555152
Under normal circumstances, QRecall fills each unused area of the archive with zeros. It then wraps a small record header around each area to mark where it begins and how long it is.
During the verify, QRecall confirms that all unused areas of the archive are, indeed, still filled with zeros. In your case, there was something other than zeros in that region of the archive. This indicates that either some other process has written data into that region of your archive, or the sectors that store that data have become damaged or corrupted.
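To picture what the verify is doing, here's a minimal sketch. QRecall's actual record format isn't public, so the 8-byte length header and the check_free_area helper below are assumptions made purely for illustration; the point is only that each free area carries a length and its body must be all zeros:

```python
import struct

HEADER = struct.Struct(">Q")  # hypothetical: 8-byte big-endian length field

def check_free_area(f, pos):
    """Verify that the free-space area at `pos` is still zero-filled.

    Reads the assumed record header to learn the area's length, then
    confirms every byte in the body is zero. Returns (ok, length).
    """
    f.seek(pos)
    (length,) = HEADER.unpack(f.read(HEADER.size))
    body = f.read(length)
    return body == b"\x00" * length, length
```

If any process scribbles even one non-zero byte into such an area, the check fails, which is exactly the "unused space that has not been erased" error in your log.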
Subsequent attempts to repair the archive show similar problems, but this time the damage was in regions of the archive that contained real data:
2012-06-28 17:53:37.933 2464 bytes at 248688532320
2012-06-28 17:54:20.271 13601632 bytes at 250084036664
2012-06-28 17:55:19.349 32744 bytes at 252668562552
2012-06-28 17:55:19.349 16 bytes at 252668595320
2012-06-28 17:58:06.164 4104 bytes at 259718257104
The next repair reported more lost data:
2012-06-28 21:00:20.971 16400 bytes at 245687567768
2012-06-28 21:00:53.052 20280 bytes at 246963311616
2012-06-28 21:00:53.395 2664 bytes at 246963332080
The really important thing to note here is that the file positions in the second report come before the positions in the first report. If the data on the drive were stable, the first repair would have reported these errors too. Instead, the first repair read this area of the archive and found it to be sound, but the next time you ran the repair, the previously sound area had developed data errors.
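You can check this directly from the offsets in the two logs above:

```python
# Byte offsets copied from the two repair reports.
first_repair = [248688532320, 250084036664, 252668562552,
                252668595320, 259718257104]
second_repair = [245687567768, 246963311616, 246963332080]

# Every error found by the second repair lies BEFORE the earliest error
# found by the first repair -- i.e. in regions the first repair had just
# read and found to be sound.
print(max(second_repair) < min(first_repair))  # True
```

That ordering is what rules out a one-time corruption event: the damage is appearing in areas that were verified good only hours earlier.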
There are two explanations for this.
The drive system could simply be dying a slow death, and is randomly scrambling or corrupting data.
More likely, the volume structure of the NAS is damaged. If the allocation map is invalid, newly written data (i.e. the index files that get recreated during the repair) could inadvertently overwrite perfectly good data in the primary archive data file. In effect, the repair is destroying the archive.
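A toy model makes the failure mode concrete. This is not how any real NAS filesystem is implemented; it just simulates an allocation map that wrongly marks an in-use block as free:

```python
BLOCK = 4  # toy block size in bytes

# Blocks 0-2 all hold live archive data.
volume = bytearray(b"AAAA" + b"BBBB" + b"CCCC")

# Corrupt allocation map: block 1 is actually in use by the archive,
# but the map says it's free.
free_map = {1}

def write_new_file(data):
    """Allocate a 'free' block and write to it, as a repair would
    when recreating index files."""
    block = free_map.pop()
    volume[block * BLOCK:(block + 1) * BLOCK] = data

write_new_file(b"IDX!")
print(bytes(volume))  # b'AAAAIDX!CCCC' -- archive data in block 1 is gone
```

Every new write the repair performs through the bad map has a chance of landing on top of good archive records, which is why each successive repair "discovers" fresh damage.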
The solution to the latter problem is to use a disk repair utility to ensure the volume structure of the NAS is OK.
As you continued to run more repair actions, QRecall continued to find new areas of your archive that were damaged. Damage to a few records can indirectly impact the contents of multiple layers, so it doesn't take long before most of the layers in your archive are affected by at least one of these problems. You can review the repair log to discover exactly which files/folders were affected (in most cases, it was just a few files that were lost during each repair).