Messages posted by: Christian Roth
Hi,

I had to repair my archive, which lives on a NAS. Initially everything ran at the performance I'd expect for my big archive, but I notice that the final "Reindexing Layers" step seems to slow down progressively the closer it gets to the end of the archive. The attached screenshot shows the state after about 43 hours of running, and progress seems to have almost come to a halt compared to about 6 hours ago.

Is this expected, given the data structures that need to be read and updated in this stage, or could I be thrashing? My Mac has 18 GB of RAM, so I think QRecall should definitely be using the maximum of 8 GB, judging by the description in the Advanced preferences window for this setting, which is set to "Actual physical memory". Is a 1.6 TB archive simply too big?

Thanks,
Christian
Thanks so much for the info. I knew this function was too good to be true.

Well, my accidental "shutdown while still backing up" often happened because the monitor window (though set to show on all spaces) was obscured by other open windows. Now I've noticed (I don't know how long this has been there already) that the "Q" menu bar item gets a grey center dot while an activity is in progress. Since the menu bar is visible all the time, that should help keep me from accidentally shutting down my machine in that situation, unless I forget to peek there.

A final question: does a QRecall activity in progress get a chance from the OS to clean itself up before it is killed by the shutdown process? I'm asking whether it is possible to cleanly stop any activity in time (and close the open archive correctly) before the shutdown actually happens. I seem to remember you actually implemented it that way, but that either there is a hard time limit (10 seconds? - which with a >1 TB archive I will always miss), or the SMB network underpinnings go away earlier, so the process cannot finish writing to the NAS. But I'm not sure about the details on this one.
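For reference, the usual Unix/macOS pattern is that the OS delivers SIGTERM at logout/shutdown and allows a short grace period before escalating to SIGKILL (often cited as roughly 10 seconds for launchd jobs). A minimal sketch of bounded cleanup, with hypothetical names that are not QRecall's actual code:

```python
import os
import signal
import time

# Assumption: the OS sends SIGTERM first and SIGKILLs the process
# only after a short grace period, so cleanup must fit in a budget.
CLEANUP_BUDGET = 8.0  # seconds we allow ourselves before giving up

cleanup_done = False

def close_archive_cleanly():
    """Hypothetical stand-in for flushing and closing an open archive."""
    global cleanup_done
    cleanup_done = True

def on_sigterm(signum, frame):
    deadline = time.monotonic() + CLEANUP_BUDGET
    # Attempt only the cleanup steps that still fit inside the grace
    # period; anything longer must be deferred to a later repair/verify.
    if time.monotonic() < deadline:
        close_archive_cleanly()

signal.signal(signal.SIGTERM, on_sigterm)

# Simulate the shutdown manager delivering SIGTERM to this process:
os.kill(os.getpid(), signal.SIGTERM)
print(cleanup_done)  # prints: True
```

The key point of the pattern is that a handler cannot stop the clock; it can only choose which cleanup steps are cheap enough to run before the deadline.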

As for my original question, I'll simply try to forget this function was ever intended to exist. Being on 10.11.x with no further upgrade path (Mac Pro 2009), I won't receive any OS fix (if that is what it needs) anyway.

Thanks
Christian
Hi,

I was indescribably delighted when I read about this new feature in the 2.1 release notes:

If you attempt to log out, shutdown, or restart while (monitored) actions are running, the activity monitor will let you stop the actions, reschedule them to run later, or hold the shutdown until they finish.


I am backing up to an archive on a NAS, and when I shut down my Mac or put it to sleep while an archive operation is still in progress, the archive usually becomes corrupt and I need to repair it (which takes about a whole day) before I can work with it again after the next boot. That option, I figured, would prevent this from happening in the future.

Alas, shortly after I installed the new version, I inadvertently restarted my Mac while a backup operation was still in progress. What happened then was that a sheet opened from the monitor window that should have let me make my choice before the restart/shutdown continued. However, at that point (a) my mouse was inoperable, (b) as was my keyboard, so I couldn't operate the dialog sheet; Mac OS continued its shutdown procedure, closing all apps and blanking the desktop to black (with the sheet still showing from the monitor window, but inoperable), and (c) finally the Mac shut down completely, taking the QRecall monitor window with it. The result, of course, was that the archive needed repairing afterwards.

I am running on Mac OS 10.11.6.

Are there any specific circumstances in which the prevention of shutdown would not work? Does the Advanced preference "Actions Run as a Background Process" have any influence on this feature? I think I already had it set to "false" at that time.

Maybe the best thing for me is to create a tiny test archive (where repairing is cheap) and try a shutdown again. James, in case what I described is not the way it should work, is there any special debugging I should turn on before further testing? Any specific Advanced Preference values to set? Any specific procedure to generate the best possible logging info? I'll gladly test and investigate this further, as it is such a welcome and vital feature for me: preventing archive corruption due to an accidental Mac shutdown/sleep.

Thanks for any guidance,
Christian
Hello,

After upgrading to beta 58 (though I suspect the beta may have nothing to do with this), my scheduled capture yesterday reported that my archive needed a repair. However, trying to repair it failed with a "disk or network error". Repairing the volume the archive is on showed no problems; its structures seemed to be fine. I then examined the archive package and found that all files appeared intact, with reasonable sizes, except for the file "package.index", which had a size of 0 bytes.

I assume that this file is vital (and causes the above error because it is empty), and that without the information in that file, the archive contents cannot be recovered by any means. Am I correct?

Thanks, Christian
Hello,

I am running QRecall 1.2.0(55) on a headless server (Mac mini, onboard graphics), which I manage using the Screen Sharing app. Unfortunately, QRecall's new visuals seem not to be compatible with Screen Sharing: while the layers list displays correctly on the controlling remote Mac, the lower section of the window (showing the files in the layer/archive) appears blank except for the grey disclosure triangles.

I assume this is a consequence of QRecall using the fastest graphics routines available, thereby bypassing some hooks in the OS that Screen Sharing uses to detect changes to send to the remote side? Is there anything I could do about it (some hidden preference in QRecall) to disable fast drawing and restore compatibility with Screen Sharing?

Thanks for any hints,
Christian
Thanks, James. I just sent the Diagnostic Report your way and deleted the offending pref value. As soon as I had done that, the scheduler resumed operation of the pending actions.

If the issue shows up again, I hope I will spot it in time, so that maybe we can track everything that happened between now (the pref reset) and its reoccurrence.

- Christian
Hello,

With QRecall 1.2.0(55) beta, I see the Console log filling with messages of the form

29.12.11 15:45:26 QRecallScheduler[19600] *** Assertion failure in -[QuantumScheduler reiterateLoginState], /Users/james/Development/Projects/Quantum Recall/Scheduler/Source/QuantumScheduler.m:2401
29.12.11 15:45:26 com.apple.launchd.peruser.503[80] (com.qrecall.scheduler) Throttling respawn: Will start in 9 seconds
29.12.11 15:45:36 QRecallScheduler[19602] *** Assertion failure in -[QuantumScheduler reiterateLoginState], /Users/james/Development/Projects/Quantum Recall/Scheduler/Source/QuantumScheduler.m:2401
29.12.11 15:45:36 com.apple.launchd.peruser.503[80] (com.qrecall.scheduler) Throttling respawn: Will start in 9 seconds
29.12.11 15:45:46 QRecallScheduler[19608] *** Assertion failure in -[QuantumScheduler reiterateLoginState], /Users/james/Development/Projects/Quantum Recall/Scheduler/Source/QuantumScheduler.m:2401
29.12.11 15:45:46 com.apple.launchd.peruser.503[80] (com.qrecall.scheduler) Throttling respawn: Will start in 9 seconds

In QRecall's own log, I find the following (I don't know if it's related):

Schedule 2011-12-29 15:49:46 Failure Unexpected problem; scheduler stopping immediately
Schedule 2011-12-29 15:49:46 sKnownLoginState not kLoginConditionLoggedIn or kLoginConditionLoggedOut
Schedule 2011-12-29 15:49:46 (debug) NSInternalInconsistencyException exception
Schedule 2011-12-29 15:49:46 (debug) backtrace
Schedule 2011-12-29 15:49:46 (debug) 0x00007fff85a80766: 0x00007fff85a806d0 __exceptionPreprocess (/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation@0x00007fff859cf000)
Schedule 2011-12-29 15:49:46 (debug) 0x00007fff8216df03: 0x00007fff8216ded6 objc_exception_throw (/usr/lib/libobjc.A.dylib@0x00007fff82164000)
Schedule 2011-12-29 15:49:46 (debug) 0x00007fff85a805a7: 0x00007fff85a80540 +[NSException raise:format:arguments:] (/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation@0x00007fff859cf000)
Schedule 2011-12-29 15:49:46 (debug) 0x00007fff8842d97a: 0x00007fff8842d8b4 -[NSAssertionHandler handleFailureInMethod:object:file:lineNumber:description:] (/System/Library/Frameworks/Foundation.framework/Versions/C/Foundation@0x00007fff8835f000)
Schedule 2011-12-29 15:49:46 (debug) 0x00000001000084b7: 0x0000000000000000 unknown (/Library/Application Support/QRecall/QRecallScheduler@0x0000000100000000)
Schedule 2011-12-29 15:49:46 (debug) 0x00000001000054a0: 0x0000000000000000 unknown (/Library/Application Support/QRecall/QRecallScheduler@0x0000000100000000)
Schedule 2011-12-29 15:49:46 (debug) 0x00007fff8838724c: 0x00007fff883870b8 __NSFireDelayedPerform (/System/Library/Frameworks/Foundation.framework/Versions/C/Foundation@0x00007fff8835f000)
Schedule 2011-12-29 15:49:46 (debug) 0x00007fff85a1cbb8: 0x00007fff85a1b260 __CFRunLoopRun (/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation@0x00007fff859cf000)
Schedule 2011-12-29 15:49:46 (debug) 0x00007fff85a1ad8f: 0x00007fff85a1ab50 CFRunLoopRunSpecific (/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation@0x00007fff859cf000)
Schedule 2011-12-29 15:49:46 (debug) 0x00007fff883aab74: 0x00007fff883aaa66 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] (/System/Library/Frameworks/Foundation.framework/Versions/C/Foundation@0x00007fff8835f000)
Schedule 2011-12-29 15:49:46 (debug) 0x00000001000020c0: 0x0000000000000000 unknown (/Library/Application Support/QRecall/QRecallScheduler@0x0000000100000000)
Schedule 2011-12-29 15:49:46 (debug) 0x0000000100001689: 0x0000000000000000 unknown (/Library/Application Support/QRecall/QRecallScheduler@0x0000000100000000)
Schedule 2011-12-29 15:49:46 (debug) 0x0000000100001534: 0x0000000000000000 unknown (/Library/Application Support/QRecall/QRecallScheduler@0x0000000100000000)
Schedule 2011-12-29 15:49:46 (debug) 0x0000000000000001: (unknown image)

I am running on OS X 10.6.8 Build 10K549.

What could be the issue here? Is this expected during the beta phase, or is there something wrong with my install?

- Christian
Hi,

Is there a way to put any scheduled operations on hold (globally, at least on one machine)? My use case is that I sometimes want to do a lengthy video transfer from DV to disk and do not want QRecall to start a backup during this time, to make sure no frames are dropped. (QRecall sometimes locks out all disk access during its capture initialization phase, for up to a minute on my machine, probably to capture the current state (FileSystem Events?) before actually starting the backup.)

I know I can disable all actions in the actions list, but I already have a list of actions, some of which are disabled and some of which are not. If I now disable all of them, I need to remember which ones were originally disabled so that I do not mistakenly re-enable them after my video capture.

What I would like to see is a global checkbox (either in QRecall's preferences or in the actions list window) that simply prevents any scheduled actions from starting as long as it is checked. Once I turn it off, schedules should resume normal operation.
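In the meantime, the bookkeeping can be described precisely: snapshot which actions are enabled before disabling everything, then restore exactly that set afterwards. A sketch of the idea (QRecall exposes no scripting API for this as far as I know, so the action list below is purely illustrative):

```python
# Each action's enabled flag as it stands before the video transfer
# (hypothetical data; not read from QRecall).
actions = {
    "Capture Home": True,
    "Capture Projects": True,
    "Verify Archive": False,   # was already disabled by hand
    "Compact Archive": False,
}

# Global hold: remember which actions were enabled, then disable all.
previously_enabled = {name for name, on in actions.items() if on}
for name in actions:
    actions[name] = False

# ... lengthy video transfer runs undisturbed ...

# Release the hold: re-enable only what was enabled before.
for name in previously_enabled:
    actions[name] = True
```

The point is that a global hold only needs one extra bit of state (the snapshot), which is exactly what a "pause all scheduling" checkbox would keep for the user.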

Is there already an easy way of doing this?

Thanks,
Christian
James Bucanek wrote:I'd suggest starting by setting QRFilePreallocateBugWorkaroundRule to 1.


I did that and as far as I can tell, this solved my problem. Thank you!

Also, thanks for updating the Advanced QRecall settings page; the explanations are - as always - thorough and easy to understand.

A suggestion would be to include the contents of that page in the manual (i.e., the application help file), as that is where I looked first. Or include a link to that page, if you do not want this info to be too easily available.

-Christian
Hi,

After updating to 1.1.4, I get an error that pre-allocation failed when trying to do a capture. Is this coincidental (I have 44 GB free on the target), or does it have something to do with the "known issues" item about "Preallocation on Airport Extreme Volumes"? My target volume is a Netgear ReadyNAS NV+, and I think I was able to do captures with even less free space on that NAS with earlier versions.

So my question is whether the pre-allocation issue might also apply to other manufacturers' NAS devices, or whether it is specific to the Airport Extreme.

The log says the following:



The pre-allocation size of "Length: 33554432" would be only 32 MB (if I did my math correctly), so with 44 GB available, that should not be a problem, should it?
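The arithmetic checks out; a quick sanity check of those numbers:

```python
prealloc_bytes = 33554432      # "Length: 33554432" from the log
free_bytes = 44 * 1024**3      # roughly 44 GB free on the target

# 33554432 = 2**25 bytes, i.e. exactly 32 binary megabytes.
prealloc_mb = prealloc_bytes / 1024**2
print(prealloc_mb)             # prints: 32.0

# The requested pre-allocation is a tiny fraction of the free space.
print(prealloc_bytes < free_bytes)  # prints: True
```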

Am I safe to apply the QRFilePreallocateDisable preference to test whether that remedies the issue, or is there a chance of ending up with a corrupt archive?

And a final note: The QRFilePreallocateDisable key does not yet seem to be listed here. Could you please add the description and the allowed key values there? Thanks!

-Christian
James Bucanek wrote:Think of sliding beads on a string; you can't cut the string and make it shorter until you move all the beads to one end, but you can move some beads, stop, and come back later to move more beads, until you're done.


Great analogy! Please remember it and maybe add it to the user's guide.

And well, yes, all that info is already in the user's guide. So that's another "RTFM!" for me...

- Christian
James Bucanek wrote:Just and idea...


...and maybe not the worst one. I'll think about that!

NOTE: This thread went a little beyond ping-pong, i.e. I saw some of your additional posts only after I had answered an earlier one. This broke the logical continuity in some places - sorry!

- Christian
James Bucanek wrote:
  • Recompressing a terabyte of data over a slow network connection might take weeks. Fortunately, compacts are incremental and you can stop them and restart them later.



  • This I did not know - I thought the compact action was atomic, and once force-cancelled would either leave the archive in an inconsistent state (requiring a re-index or repair) or have achieved nothing. I once tried compacting the 700 GB archive on the slow NAS, and it took about 35 hours before it failed near the end - or at least I thought it failed - probably due to some network problem.

Being able to compact incrementally puts the whole thing in a different light, and then I am with you: capture fast (= no compression), and compact (medium to heavy) incrementally.

So now I'm determined to give it a try.

Thank you very much for the detailed insight on the matter (and the warnings...). This certainly helps me make more informed decisions and is highly appreciated!

    Kind regards,
    Christian
    James Bucanek wrote:Having said that about the performance of compression, you might consider changing your QRCaptureFreeSpaceSweep setting. This will cause the archive to grow more quickly when capturing, but does improve the performance of the capture. You'll want to schedule an occasional compact action to recover the free space now being ignored by the captures.


I'm not convinced this will be beneficial with large (ca. 700 GB) archives on a slow medium. My reasoning: yes, the initial capture of items is faster, but the later compacting, where a lot of data needs to be moved within the file, will require quite some bandwidth (reading from and writing to the slow device). I once ran a compact action on that large archive and it took (I think) about 30 hours. This means that for more than a day, I cannot do any backups to that archive. Being self-employed, most of the time there is no such thing as what others call a "weekend", when those actions usually run.

    Anyway, I'll give the compression route a try now and see how it fares in my situation.

    Thanks again,
    Christian
Thank you, James, for even running benchmarks to answer my question.

I think that with my application, I fall into the "Recapture" case, since probably more than 98% of the data stays the same between (re-)captures. Initial capture times aren't an issue for me (that's just a one-time operation for any new machine added to the backup); it's recaptures that happen often. If I read your figures correctly, compression may yield some small performance gains there. The data that changes here is quite compressible (Java sources, Pages documents, and XML files of various kinds). Combined with the possible reduction in archive size, I am now willing to give that strategy a try.

If I can deduce some interesting figures from the logs before and after turning on (high) compression, I'll post a follow-up.

    Thanks again,
    Christian
     