QRecall Community Forum

Second capture (scheduled) takes loooong
Forum Index » Beta Version
Christian Roth


Joined: Jul 12, 2008
Messages: 26
Hello,

This is for QR 1.1.0.29. I initially captured my Users folder, then, via a scheduled action, started a subsequent capture of the same items plus a few more folders that were not in the initial capture.

While the initial capture took its time (about 9 hours), the second capture has now taken over 15 hours and I am about halfway through it. Also, there is constant disk access and head movement on the hard drive.

I checked in Instruments and found a lot of very small accesses using PBWriteForkSync and PBReadForkSync. Should this be happening? I guess that the constant head movement slows down the capture significantly, but maybe it's ok. The items to capture and the destination of the archive are on physically different disks, so I would assume there is not much contention between read and write positions on the same disk.

Here's an excerpt of the Instruments log:

#,,Caller,Function,FD,Path,Bytes
1,0,PBReadForkSync,pread,8,,12
2,0,PBWriteForkSync,pwrite,4,,4
3,0,PBWriteForkSync,pwrite,4,,25651
4,0,PBWriteForkSync,pwrite,4,,5
5,0,PBWriteForkSync,pwrite,4,,4
6,0,PBWriteForkSync,pwrite,4,,4
7,0,PBWriteForkSync,pwrite,4,,21388
8,0,PBWriteForkSync,pwrite,4,,4
9,0,PBWriteForkSync,pwrite,4,,4
10,0,PBWriteForkSync,pwrite,4,,4
11,0,PBWriteForkSync,pwrite,4,,21400
12,0,PBWriteForkSync,pwrite,4,,4
13,0,PBWriteForkSync,pwrite,4,,4
14,0,PBWriteForkSync,pwrite,4,,27794
15,0,PBWriteForkSync,pwrite,4,,6
16,0,PBWriteForkSync,pwrite,4,,4
17,0,PBWriteForkSync,pwrite,4,,4
18,0,PBWriteForkSync,pwrite,4,,24698
19,0,PBWriteForkSync,pwrite,4,,6
20,0,PBWriteForkSync,pwrite,4,,4
21,0,PBWriteForkSync,pwrite,4,,4
22,0,PBWriteForkSync,pwrite,4,,20331
23,0,PBWriteForkSync,pwrite,4,,5
24,0,PBWriteForkSync,pwrite,4,,4
25,0,PBWriteForkSync,pwrite,4,,4
26,0,PBWriteForkSync,pwrite,4,,24591
27,0,PBWriteForkSync,pwrite,4,,1
28,0,PBWriteForkSync,pwrite,4,,4
29,0,PBWriteForkSync,pwrite,4,,4
30,0,PBWriteForkSync,pwrite,4,,27521
31,0,PBWriteForkSync,pwrite,4,,7
32,0,PBWriteForkSync,pwrite,4,,4
33,0,PBWriteForkSync,pwrite,4,,4
34,0,PBWriteForkSync,pwrite,4,,27248
35,0,PBWriteForkSync,pwrite,4,,4
36,0,PBWriteForkSync,pwrite,4,,4
37,0,PBWriteForkSync,pwrite,4,,26176
38,0,PBWriteForkSync,pwrite,4,,4
39,0,PBWriteForkSync,pwrite,4,,4
40,0,PBWriteForkSync,pwrite,4,,28489
41,0,PBWriteForkSync,pwrite,4,,7
42,0,PBWriteForkSync,pwrite,4,,4
43,0,PBWriteForkSync,pwrite,4,,4
44,0,PBWriteForkSync,pwrite,4,,26592
45,0,PBWriteForkSync,pwrite,4,,4
46,0,PBWriteForkSync,pwrite,4,,4
47,0,PBWriteForkSync,pwrite,4,,27198
48,0,PBWriteForkSync,pwrite,4,,2
49,0,PBWriteForkSync,pwrite,4,,4
50,0,PBWriteForkSync,pwrite,4,,4
51,0,PBWriteForkSync,pwrite,4,,24705
52,0,PBWriteForkSync,pwrite,4,,7
53,0,PBWriteForkSync,pwrite,4,,4
54,0,PBWriteForkSync,pwrite,4,,4
55,0,PBWriteForkSync,pwrite,4,,24342
56,0,PBWriteForkSync,pwrite,4,,2
57,0,PBWriteForkSync,pwrite,4,,4
58,0,PBWriteForkSync,pwrite,4,,4
59,0,PBWriteForkSync,pwrite,4,,28801
60,0,PBWriteForkSync,pwrite,4,,7
61,0,PBWriteForkSync,pwrite,4,,4
62,0,PBWriteForkSync,pwrite,4,,4
63,0,PBWriteForkSync,pwrite,4,,28627
64,0,PBWriteForkSync,pwrite,4,,5
65,0,PBWriteForkSync,pwrite,4,,4
66,0,PBReadForkSync,pread,8,,12
67,0,PBReadForkSync,pread,8,,12
68,0,PBReadForkSync,pread,8,,12
69,0,PBReadForkSync,pread,8,,12
70,0,PBReadForkSync,pread,8,,12
71,0,PBReadForkSync,pread,8,,12
72,0,PBReadForkSync,pread,8,,12
73,0,PBReadForkSync,pread,8,,12
74,0,PBReadForkSync,pread,8,,12
75,0,PBReadForkSync,pread,8,,12
76,0,PBReadForkSync,pread,8,,12
77,0,PBReadForkSync,pread,8,,12
78,0,PBReadForkSync,pread,8,,12
79,0,PBReadForkSync,pread,8,,12
80,0,PBReadForkSync,pread,8,,12
81,0,PBReadForkSync,pread,8,,12
82,0,PBReadForkSync,pread,8,,12
83,0,PBReadForkSync,pread,8,,12
84,0,PBReadForkSync,pread,8,,12
85,0,PBReadForkSync,pread,8,,12
86,0,PBReadForkSync,pread,8,,12
87,0,PBReadForkSync,pread,8,,12
88,0,PBReadForkSync,pread,8,,12
89,0,PBReadForkSync,pread,8,,12
90,0,PBReadForkSync,pread,8,,12
91,0,PBReadForkSync,pread,8,,12
92,0,PBReadForkSync,pread,8,,12
93,0,PBReadForkSync,pread,8,,12
94,0,PBReadForkSync,pread,8,,12
95,0,PBReadForkSync,pread,8,,12
96,0,PBReadForkSync,pread,8,,12
97,0,PBReadForkSync,pread,8,,12
98,0,PBReadForkSync,pread,8,,12
99,0,PBReadForkSync,pread,8,,12
100,0,PBReadForkSync,pread,8,,12

As you can see, there are many 12-byte reads and 4-, 5-, or 6-byte writes. Is this supposed to be handled by the OS's caches, or would these accesses (since they are synchronous?) actually result in physical hard drive access?

Just wondering if that behaviour is expected and normal.

Regards,
Christian
James Bucanek


Joined: Feb 14, 2007
Messages: 1572
Christian Roth wrote: While the initial capture took its time (about 9 hours), the second capture has now taken over 15 hours and I am about halfway through it.
15 hours seems like a long time, but it's not unheard of. It depends mostly on how much new data is being added to the archive and the speed of the volume containing the archive.

Any capture that adds a lot of new data (i.e. tens of GBs) can spend a considerable amount of time organizing and reorganizing the archive database. This is usually encountered during the "closing archive" phase of the capture, but it can happen in the middle of the capture too. The result is that QRecall appears to be spinning its wheels (sometimes for hours) while it updates and sorts its index of quanta.

Other confounding factors: the speed and latency of the archive volume (particularly networked volumes and USB drives); how much RAM the system has (less than 1 GB will cause QRecall to run much less efficiently); and running other applications at the same time (which causes virtual memory swapping and reduces the amount of RAM available for buffering data).

I checked in Instruments and found a lot of very small accesses using PBWriteForkSync and PBReadForkSync. Should this be happening?
Probably. My guess is that QRecall is capturing a lot of small (<32K) files to a relatively large (>100GB) archive and/or your system doesn't have a lot of RAM. (A semi-informed guess based on your I/O trace.)

I guess that the constant head movement slows down the capture significantly, but maybe it's ok.
Excessive head movement is generally bad and really slows things down, but in a few cases it is unavoidable. The real killer is when QRecall decides that it needs to resize its lookup tables. This can thrash the archive volume for hours while it copies really tiny bits of data from one index to another. Indexes are resized exponentially, so this will only occur a few times during the life of the archive, but it is especially likely to occur during the first few large captures.
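To give a feel for why a resize is so expensive, here is a generic doubling hash table sketched in C. This is only an illustration, not QRecall's actual index code: once the table passes its load-factor threshold, every existing entry has to be read and rewritten into a table twice the size, which against an on-disk index becomes a long burst of tiny scattered reads and writes; but because each resize doubles the capacity, it only happens a handful of times no matter how many entries are eventually added.

#include <stdlib.h>
#include <stdint.h>

/* A toy open-addressing index of 64-bit keys (0 means "empty slot"). */
typedef struct {
    uint64_t *slots;
    size_t    capacity;
    size_t    count;
} Index;

static void insert(Index *ix, uint64_t key);

/* Doubling the capacity forces every existing entry to be rehashed and
   copied into the new table: the "copy really tiny bits of data from one
   index to another" phase. */
static void grow(Index *ix)
{
    Index bigger = { calloc(ix->capacity * 2, sizeof(uint64_t)),
                     ix->capacity * 2, 0 };
    for (size_t i = 0; i < ix->capacity; i++)
        if (ix->slots[i] != 0)
            insert(&bigger, ix->slots[i]);
    free(ix->slots);
    *ix = bigger;
}

static void insert(Index *ix, uint64_t key)
{
    if ((ix->count + 1) * 4 > ix->capacity * 3)     /* past 75% full: resize */
        grow(ix);
    size_t i = key % ix->capacity;
    while (ix->slots[i] != 0 && ix->slots[i] != key)
        i = (i + 1) % ix->capacity;                 /* linear probing */
    if (ix->slots[i] == 0) {
        ix->slots[i] = key;
        ix->count++;
    }
}

int main(void)
{
    Index ix = { calloc(8, sizeof(uint64_t)), 8, 0 };
    for (uint64_t q = 1; q <= 1000; q++)
        insert(&ix, q);     /* 1000 inserts trigger only ~8 resizes */
    free(ix.slots);
    return 0;
}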

As you can see, there are many 12-byte reads and 4-, 5-, or 6-byte writes. Is this supposed to be handled by the OS's caches, or would these accesses (since they are synchronous?) actually result in physical hard drive access?
When using Instruments or iosnoop, you're tracing the requests that come from the application, not the physical I/O to/from the drive. All of these small reads and writes are (typically) buffered by a RAM cache that tries to minimize the actual I/O to the volume. Of course, there are limits and the more RAM you have the more efficient this buffering will be.
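As a purely illustrative example of that buffering (a minimal C sketch, not QRecall's code, with a made-up file path): thousands of tiny pwrite() calls simply copy their bytes into the kernel's buffer cache and return, and the data only has to reach the platters when the cache is flushed, or when the application forces it with fsync() (or F_FULLFSYNC on Mac OS X).

/* Minimal sketch: small synchronous-looking writes are normally absorbed
 * by the unified buffer cache; only an explicit flush forces physical I/O.
 * The file path is a placeholder for this example. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/cache_demo.bin", O_CREAT | O_RDWR, 0644);
    if (fd < 0)
        return 1;

    /* Thousands of tiny writes: each one just copies its bytes into the
       kernel's buffer cache and returns; the drive head does not move
       once per call. */
    for (int i = 0; i < 10000; i++) {
        int record = i;                              /* a 4-byte "record" */
        pwrite(fd, &record, sizeof(record), (off_t)i * (off_t)sizeof(record));
    }

    /* Push the dirty pages out.  fsync() flushes the file's cached pages
       to the volume; on Mac OS X, F_FULLFSYNC additionally asks the drive
       itself to empty its write cache. */
    fsync(fd);
#ifdef F_FULLFSYNC
    fcntl(fd, F_FULLFSYNC);
#endif

    close(fd);
    return 0;
}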

Just wondering if that behaviour is expected and normal.
Unless QRecall gets stuck or stops with an error, I'm inclined to assume that it's doing the best it can with what it's got. Let the capture finish -- send a diagnostics report afterwards if you like -- and then see what the performance of subsequent captures is like.

- QRecall Development -
Christian Roth


Joined: Jul 12, 2008
Messages: 26
Thanks for the in-depth answer. I'm just following up with the figures you had to guess at, and with what I found that was probably going wrong (sorry!):

Mac system: Mac Pro, 2 x 2.8 GHz Quad-Core Intel Xeon, 6 GB RAM

Source disk: Apple RAID 0 internal, built from 2x internal 320 GB Western Digital S-ATA disks.
Rough size of data to back up: 350 GB

Destination disk: 500GB Samsung S-ATA disk, internal
Current archive size on that disk (*.quanta): 176.15 GB

The interesting thing was that the .quanta file did not seem to have changed in size for several hours (nor had its last-modified date), although QRecallHelper was still working heavily as described. I then looked in the Trash, because I had deleted my first attempt at a capture. There I found that the scheduled action had obviously followed the trashed .quanta bundle, and that was the item that was being updated!

So it looks like the scheduled action still held some alias to the trashed .quanta file and did not use the new archive of the same name in the same location when it ran.

I have since stopped the capture process, emptied the trash, re-selected the .quanta file to capture to in my scheduled actions, and will now try again.

I'll report back if that changes anything.
Christian Roth


Joined: Jul 12, 2008
Messages: 26
OK, with the new archive (not the one in the Trash), the second capture (captured 23.9 GB, written 14 GB, duplicate 4.53 GB, 150,910 files) took 45 minutes. Much, much better (not to say great!).

So it must have been something to do with writing to an archive in the Trash, and maybe that archive was not in pristine condition either, since I remember having cancelled the first capture to that trashed archive at some point.

My apologies for the noise; all seems to be well and fast again.

Regards,
Christian
James Bucanek


Joined: Feb 14, 2007
Messages: 1572
Christian,

I'm glad to hear that things are working satisfactorily again. I'm mildly curious as to why the other capture was taking so long; maybe the log file contains a clue.

The issue of QRecall continuing to use an archive that's been dragged into the Trash isn't new. It's caught more than one user by surprise. I'm not entirely sure what the solution is, but I've made a note to try and deal with that situation in a future version.
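For anyone curious about the mechanics: if the scheduled action holds an alias or file reference to the archive (as Christian guessed), that reference identifies the file by its volume and file ID rather than by its path, so it keeps resolving even after the bundle is dragged to the Trash. Here is a small stand-alone C illustration of that behaviour using the Carbon File Manager; the path is a placeholder and this is not QRecall's actual code.

/* Build with: cc follow_demo.c -framework CoreServices -o follow_demo */
#include <CoreServices/CoreServices.h>
#include <limits.h>
#include <stdio.h>

int main(void)
{
    FSRef ref;
    /* 1. Capture a reference to the archive at its original location
       (placeholder path for this example). */
    if (FSPathMakeRef((const UInt8 *)"/Volumes/Backup/Home.quanta", &ref, NULL) != noErr)
        return 1;

    /* 2. ...the user now drags Home.quanta to the Trash... */
    printf("Move the archive to the Trash, then press return.\n");
    getchar();

    /* 3. Resolving the stored reference again yields the file's NEW
       location (inside the Trash), not the original path. */
    UInt8 path[PATH_MAX];
    if (FSRefMakePath(&ref, path, sizeof(path)) == noErr)
        printf("The reference now resolves to: %s\n", (char *)path);
    return 0;
}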

- QRecall Development -
 