Richard,
First, a confession. I'm fully aware that QRecall has performance problems once the size of the archive begins to exceed 1TB. The basic problem is that QRecall must compare every block of data being added to the archive against the data that's already stored there. The various indexes and tables used to do this are very efficient while the archive is modest (500GB or less), but begin to degrade once the archive exceeds 1TB. Addressing this performance issue is one of the main goals of QRecall 1.2.
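To give you a rough picture of what's going on, here's a sketch of block-level deduplication in the abstract. It's illustrative only (the names, block size, and checksum are made up, and it isn't QRecall's actual code), but it shows why every incoming block costs a lookup, and why things bog down once the lookup tables no longer fit comfortably in memory and each probe starts hitting the disk.

/*
 * Illustrative sketch only; not QRecall's actual code or archive format.
 * Every incoming block is checksummed and looked up in an index of blocks
 * already stored. Once that index outgrows memory, each lookup starts
 * touching the disk, and that's where the slowdown comes from.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE  4096u           /* hypothetical block size */
#define INDEX_SLOTS (1u << 20)      /* hypothetical in-memory index capacity */

typedef struct {
    uint64_t checksum;              /* checksum of a block already in the archive */
    uint64_t offset;                /* where that block lives in the archive */
    int      used;
} IndexEntry;

static IndexEntry index_table[INDEX_SLOTS];

/* Toy 64-bit FNV-1a checksum; a real tool would use something stronger. */
static uint64_t block_checksum(const uint8_t *block, size_t len)
{
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= block[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/*
 * Returns 1 (and the stored offset) if an identical block is already in the
 * archive; otherwise records the new block's offset and returns 0.
 */
static int lookup_or_add(uint64_t checksum, uint64_t new_offset,
                         uint64_t *found_offset)
{
    uint64_t slot = checksum % INDEX_SLOTS;
    while (index_table[slot].used) {
        if (index_table[slot].checksum == checksum) {
            *found_offset = index_table[slot].offset;
            return 1;                          /* duplicate: reference it, don't re-store it */
        }
        slot = (slot + 1) % INDEX_SLOTS;       /* linear probing */
    }
    index_table[slot].checksum = checksum;     /* new block: remember where it went */
    index_table[slot].offset   = new_offset;
    index_table[slot].used     = 1;
    return 0;
}

int main(void)
{
    uint8_t block[BLOCK_SIZE];
    memset(block, 'A', sizeof block);

    uint64_t sum = block_checksum(block, sizeof block);
    uint64_t where = 0;

    lookup_or_add(sum, 0, &where);             /* first copy: stored in the archive */
    if (lookup_or_add(sum, BLOCK_SIZE, &where))
        printf("duplicate of block at offset %llu\n", (unsigned long long)where);
    return 0;
}

The lookup itself is cheap; the trouble is scale. At the hypothetical 4KB block size above, a 1TB archive means roughly 250 million index entries, which is why an in-memory table eventually gives way to disk-based lookups.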
So the fact that things are slow isn't surprising. By the way, you never mentioned how big your archive is.
However ...
Richard Morris wrote: I continually get "A storage or disk error occurred... the archive is probably damaged" error messages, particularly from the iMac. The Mini seems much less prone to problems, and after several attempts I succeeded in backing it up.
Slow is one thing, but you shouldn't be getting corrupted archives. Please send a diagnostic report (Help > Send Report) from both systems so I can take a look at the cause of these errors in more detail. There may be something else going on.
In desperation I have broken the backup job into 4 actions and the second one just failed.
My suggestion would be not to try to subdivide the capture (although there are good reasons to do that too), but to limit the amount of time the capture works by setting a stop condition. QRecall captures are always incremental, so I recommend setting up a single capture that starts late at night and automatically stops in the morning. You can do the same for other long-running actions, like a compact.
The idea is that the capture might not finish, but it will save its work and finalize the archive. This is important because the auto-repair feature works by reverting changes back to the last completed action. By successfully adding data in smaller increments, any future failure has less to recapture. The next capture will always pick up where the last one left off.
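If it helps to picture the mechanism, here's a minimal sketch of the checkpoint idea, with made-up names (this isn't QRecall's implementation): the archive remembers how long it was when the last action completed cleanly, and the next action simply discards anything written past that mark.

/*
 * Sketch only (hypothetical names, not QRecall's code): why finishing in
 * smaller increments limits what a failure can cost. The archive records
 * its length after each completed action; anything past that mark belongs
 * to an interrupted action and is rolled back on the next open.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

typedef struct {
    uint64_t committed_length;   /* archive length after the last completed action */
} ArchiveHeader;

/* Called when a capture (even a partial, time-limited one) finalizes. */
static void commit_checkpoint(ArchiveHeader *hdr, uint64_t new_length)
{
    hdr->committed_length = new_length;   /* everything before this point is safe */
}

/* Called at the start of the next action: discard whatever an interrupted
 * action left behind, reverting the archive to its last good state. */
static int auto_repair(int fd, const ArchiveHeader *hdr)
{
    struct stat st;
    if (fstat(fd, &st) != 0)
        return -1;
    if ((uint64_t)st.st_size > hdr->committed_length)
        return ftruncate(fd, (off_t)hdr->committed_length);
    return 0;                             /* nothing to roll back */
}

int main(void)
{
    const char *path = "/tmp/archive-demo.bin";      /* throwaway demo file */
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) return 1;

    ArchiveHeader hdr = { 0 };
    struct stat st;

    write(fd, "completed action data", 21);          /* action 1 finishes cleanly */
    commit_checkpoint(&hdr, 21);

    write(fd, "interrupted action data", 23);        /* action 2 dies partway through */

    auto_repair(fd, &hdr);                           /* the next action rolls it back */

    fstat(fd, &st);
    printf("archive is %lld bytes (back to the checkpoint)\n", (long long)st.st_size);
    close(fd);
    return 0;
}

A time-limited capture just means the checkpoint gets recorded sooner and more often, so a rollback (and the recapture that follows it) covers hours of work instead of days.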
Running a rebuild/reindex after a failure takes 24 hours, so it's not something that is practical to do after every second backup attempt.
When you get an "archive is probably damaged" message, are you letting another capture or verify action run first before trying to repair the archive? Depending on what kinds of problems you're encountering, QRecall is able to auto-recover from most kinds of capture failures. It does this automatically at the beginning of the next action.
The verify action/command is particularly useful in this respect. A verify will attempt to auto-repair the archive before the verification itself begins. If a damaged archive can be auto-repaired, the verify will do it. Just watch the verify's progress: once it starts verifying the contents of the archive, it has successfully auto-repaired the archive (assuming it needed it) and you can stop the verify. If the archive can't be auto-repaired, it will report a problem immediately.
Is anyone else successfully backing up to multi-TB archives, which, after all, are not that uncommon these days?
[Raises hand] I keep a 1.6TB archive here for testing. I use it mostly to stress-test QRecall.
It seems very slow once the archive is large.
I freely admit that my 1.6TB archive is as slow as molasses on a cold day.
Do these speeds sound right for other users? The LAN can easily handle the top speed of a USB drive (30+MB/s).
My speeds are better than yours, but you're never going to get stellar performance from this arrangement. (That's not to say it couldn't be better.) A 30MB/s transfer rate is never going to feel fast with a large archive, for a number of technical reasons. Add in network and file server latency and it gets slower still. So given these circumstances, that sounds about right.
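To put rough numbers on the raw transfer alone, ignoring the dedup lookups and latency entirely, here's the back-of-the-envelope arithmetic at the 30MB/s you measured:

/* Raw transfer time alone at 30 MB/s, before any index lookups or latency. */
#include <stdio.h>

int main(void)
{
    const double rate_mb_per_s = 30.0;                /* the LAN/USB ceiling you measured */
    const double sizes_gb[]    = { 100, 500, 1000 };  /* sample amounts of data to move */

    for (int i = 0; i < 3; i++) {
        double hours = (sizes_gb[i] * 1024.0 / rate_mb_per_s) / 3600.0;
        printf("%5.0f GB at %2.0f MB/s -> %4.1f hours\n",
               sizes_gb[i], rate_mb_per_s, hours);
    }
    return 0;
}

That works out to roughly an hour per 100GB, so even before QRecall does any dedup work, pushing a few hundred gigabytes of data over that link takes most of a night.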
Any suggestions would be welcome.
My suggestions would be (1) create time-limited captures, (2) run verify after a failure (or just let the next action run as scheduled) to see if the archive can be auto-repaired, (3) send in a diagnostic report, (4) schedule a verify to run once a week, and (5) try capturing to separate archives.
The last one is really one of desperation, but it will probably avoid most of the problems you've encountered. You could, for example, set up four archives: one for each internal drive and one for each external drive. The real question is how much duplicate data you have between the four drives. If you have hundreds of gigabytes of data duplicated on the two Macs, then I can see how you'd like to have a single archive. However, if most of the data on each system is unique, then most of the duplicate data is going to be from one capture to the next, not between systems. In the latter case, multiple archives will be nearly as efficient and much faster.
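To make that concrete with invented numbers (simplified to two archives, one per Mac): suppose each Mac holds about 800GB of data unique to it and the two share roughly 200GB.

/* Made-up numbers, just to illustrate the tradeoff between one shared
 * archive and one archive per machine. */
#include <stdio.h>

int main(void)
{
    const double unique_per_mac_gb = 800.0;   /* data found on only one Mac (assumed) */
    const double shared_gb         = 200.0;   /* data duplicated on both Macs (assumed) */

    double single_archive = 2 * unique_per_mac_gb + shared_gb;    /* dedup across Macs */
    double two_archives   = 2 * (unique_per_mac_gb + shared_gb);  /* shared data stored twice */

    printf("one shared archive: %.0f GB\n", single_archive);      /* 1800 GB */
    printf("two archives:       %.0f GB (%.0f%% more)\n",
           two_archives,
           100.0 * (two_archives - single_archive) / single_archive);
    return 0;
}

In that scenario each per-Mac archive is roughly half the size of the combined one, at the cost of about 11% more total disk.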
I suppose it is possible there is some interaction with the WHS going on, but it is patched up to date and has performed flawlessly for the last 18 months.
That's possible, but I need to see more evidence. I'm naturally skeptical of claims about "flawless" performance because most drive and network errors go unnoticed.
For anyone who's interested, QRecall currently imposes a 2TB limit on the size of archives. The limit comes from the size of the data structures needed to manage an archive that big. I plan to raise that limit considerably in the future, but it will require a 64-bit version of QRecall.
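Purely as an illustration of where a ceiling like that comes from (this is just the arithmetic, not the actual archive layout), a 32-bit field counting 512-byte units maxes out at exactly 2TiB:

/* Simplified arithmetic only; not QRecall's actual on-disk structures.
 * It shows how a 32-bit field combined with a fixed unit size produces a
 * hard 2TiB ceiling. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t block_size = 512;                         /* hypothetical unit */
    const uint64_t max_units  = (uint64_t)UINT32_MAX + 1;    /* 2^32 addressable units */

    uint64_t limit_bytes = max_units * block_size;           /* 2^41 bytes */
    printf("32-bit addresses x %llu-byte units = %llu TiB maximum archive\n",
           (unsigned long long)block_size,
           (unsigned long long)(limit_bytes >> 40));
    return 0;
}

Widening that kind of field to 64 bits is what lifts the ceiling, which is why the fix is tied to a 64-bit QRecall.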