QRecall Community Forum

[1.1.0.37] Reindex stalls at "Reading layers" stage
Forum Index » Beta Version
Christian Roth


Joined: Jul 12, 2008
Messages: 26
Offline
Hello,

I am seeing an issue where, after what looks like a mostly finished Reindex of my archive, the process stalls in the "Reading layers" stage. The QRecallHelper application is still actively doing something; using Instruments, I can see it alternately reading and writing 12-byte chunks to a single file.

Judging by the modification dates shown in the Finder for the archive package I'm rebuilding, the only file being modified is hash.index. As far as I can see, the modifications don't change its size. The last other file modified was "package.index", but that was three hours ago.

In the progress dialog (image attached), the display definitely has not changed for the last hour, i.e. no new layers have been read.

Is there any actual progress even being made? What is getting read (and written back?) to hash.index at this stage, and why is it taking so long?

-Christian
 Attachment: QRecall-1.jpg (Progress Sheet, 12 KB)

James Bucanek


Joined: Feb 14, 2007
Messages: 1572
Offline
Christian Roth wrote: I am seeing the issue that after what looks like a mostly finished Reindex of my archive, the process stalls in the "Reading layers" stage. The QRecallHelper application is still actively doing something, and I checked using Instruments that it is alternatingly reading and writing chunks of 12 bytes in size to a single file.
QRecall is updating the quanta hash. The "Reading layers" message is an artifact; it just happens to be the last progress message displayed before the reindex finishes. Just before the reindex command begins closing the archive, it cleans up the quanta index (mostly to reclaim memory), which is where it appears to be stuck. I'll make a note to insert an additional status message there so it's more obvious what's really happening.

If QRecallHelper is reading and writing 12-byte records, then it's probably doing what it's supposed to be doing: writing all cached hash records to the hash.index file. The hash.index file never changes size. It's the largest data structure in a QRecall archive and consists mostly of a huge array (tens of millions) of 12-byte records that lets QRecall quickly find any quantum in the database.
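James's description suggests a simple mental model: a single fixed-size file treated as a flat array of 12-byte slots, addressed by hashing a quantum's checksum, with each update being one small seek-and-write. A minimal sketch of that idea (the record fields, slot count, and hash function here are assumptions for illustration; the real hash.index format is not public):

```python
import os
import struct
import zlib

# Assumed 12-byte record layout: 4-byte tag + 8-byte data offset.
RECORD = struct.Struct("<IQ")
SLOTS = 1 << 16  # real archives would use tens of millions of slots

def slot_for(key: bytes) -> int:
    # Map a quantum's checksum to a fixed slot in the file.
    return zlib.crc32(key) % SLOTS

def write_record(f, slot: int, tag: int, offset: int) -> None:
    # One small seek + one small write per updated record.
    f.seek(slot * RECORD.size)
    f.write(RECORD.pack(tag, offset))

def read_record(f, slot: int):
    f.seek(slot * RECORD.size)
    return RECORD.unpack(f.read(RECORD.size))

# The index file is created at its full, fixed size and never grows.
with open("hash.index", "w+b") as f:
    f.truncate(SLOTS * RECORD.size)
    s = slot_for(b"some quantum checksum")
    write_record(f, s, 0xDEADBEEF, 4096)
    tag, off = read_record(f, s)

os.remove("hash.index")  # clean up the demo file
```

This also shows why the file never changes size: updates overwrite slots in place rather than appending.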

Normally, flushing the records to the hash doesn't take more than a few minutes. However, it can be influenced greatly by the CPU, the amount of RAM, archive access speed, competing processes, and so on. I have a Mac mini that captures to a 1 TB archive and can get stuck updating its quanta index for hours (in fact, it's upstairs doing exactly that right now).

Christian Roth wrote: Is there any actual progress even being made? What is getting read (and written back?) to hash.index at this stage,
Given that the QRecallHelper process continues to read and write 12-byte records, I'm pretty confident it isn't stuck. However, I've been wrong before.

Christian Roth wrote: and why is it taking so long?
That's a complex question with a lot of variables. In my experience, one of the biggest factors is the speed of access to the archive. If the archive is on a network volume or a USB connection, the overhead of reading and writing lots of tiny records can be high, which can dramatically slow the process of updating the hash.

Hopefully, by the time you read this, QRecall has finished and moved on. In the unlikely case that it really is "stuck," take a sample of the QRecallHelper process and send it to me along with a diagnostic report (Help > Send Report...). You can obtain the process sample in Activity Monitor by selecting the running QRecallHelper process, clicking Sample Process, and saving the results to a text file. If you're a command-line fan, you can run the 'sample' tool (for example, 'sample QRecallHelper 30') to do the same thing.

One last question. What prompted you to reindex the archive in the first place?

- QRecall Development -
Christian Roth


Joined: Jul 12, 2008
Messages: 26
Offline
Unfortunately, I had to stop QRecall the hard way because my Mac required a reboot.

I'm reindexing again and will let it run until it finishes. The archive is indeed on a network volume, so I guess it really is the communication overhead that makes it slow in my case. Is there a way to optimize that somehow, to read and write larger chunks? I fear not, since the access offsets are probably random in nature, and caching the whole file in memory is not a solution (though technically possible in my case, since I have enough internal RAM to hold the complete file). Do you know in advance what percentage of the file needs to be rewritten, so one could estimate whether reading it into memory, modifying it, and writing it back as a whole might be faster than scattered individual file accesses?

The archive probably got corrupted either because one family member shut down their Mac while a capture was in progress, or because another family member (that would be me...) fiddled with the network settings of the NAS the archive lives on during a capture.

I'll see whether I can wait long enough for the hash.index update to finish, or whether it would be faster to copy the archive from the network volume to a local disk, reindex it there, and then move it back to the NAS.

Thanks, Christian
James Bucanek


Joined: Feb 14, 2007
Messages: 1572
Offline
Christian Roth wrote: Is there a way to optimize that in some way to read and write larger chunks? I fear not in that the access offsets will probably be random in nature, and caching the whole file in memory will not be a solution (though technically possible in my case since I have enough internal RAM to hold the complete file).
Until I update QRecall to run in 64-bit mode, caching the hash.index isn't an option (it's an address space issue, more than a physical RAM issue).

I've looked at several techniques for speeding up hash.index file access over the years, as it's one of the biggest performance bottlenecks in the system. The problem is trying to second-guess the OS, which is already doing its own optimization. Local disk systems and network volumes all implement their own caching and read-ahead optimizations. Some work extremely well with QRecall, while others drag it into the mud. Implementing my own caching and read-ahead optimization might speed up the worst cases, but would probably slow down the best ones.

Christian Roth wrote: Do you know in advance what percentage of the file needs to be rewritten, so one could estimate if reading into memory, modifying, writing back as a whole may be faster than scattered individual file accesses?
That's a good question, and it's one technique I plan to revisit in the future. Speeding up the quanta and names indexes is high on my list of optimizations.
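To make the trade-off concrete, here's a back-of-envelope model comparing the two strategies. Every number below is hypothetical and chosen only for illustration; the sole figure taken from the thread is the 12-byte record size:

```python
def scattered_seconds(dirty_records: int, io_ops_per_sec: float) -> float:
    # Each dirty record costs one small read plus one small write.
    return 2 * dirty_records / io_ops_per_sec

def bulk_seconds(file_bytes: int, throughput_bytes_per_sec: float) -> float:
    # Read the whole file sequentially, patch in memory, write it all back.
    return 2 * file_bytes / throughput_bytes_per_sec

RECORD_SIZE = 12
records = 50_000_000                  # hypothetical: "tens of millions" of records
file_size = records * RECORD_SIZE     # ~600 MB hash.index

nas_small_iops = 200.0                # hypothetical small-I/O rate over a NAS
nas_throughput = 40e6                 # hypothetical sequential bytes/sec over a NAS

for dirty_fraction in (0.001, 0.01, 0.1):
    dirty = int(records * dirty_fraction)
    s = scattered_seconds(dirty, nas_small_iops)
    b = bulk_seconds(file_size, nas_throughput)
    print(f"{dirty_fraction:>5.1%} dirty: scattered {s:8.0f}s  bulk rewrite {b:5.0f}s")
```

The bulk rewrite costs the same regardless of how many records changed, while the scattered approach scales with the dirty fraction, so knowing that percentage in advance is exactly what would let a tool pick the faster path.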

Christian Roth wrote: The archive probably got corrupt either because a user in the family shut down its Mac while a capture was in progress or another user in the family (now, that's me...) fiddled with the network settings of the NAS the archive lives on while a capture was in progress.
99% of the time, shutting down a system before it can complete a capture should not cause any problems. The next action should auto-repair the archive and continue normally. On the other hand, I can't predict what effect "fiddling" with the network settings will have.

Christian Roth wrote: I'll see if I can wait long enough for the hash.index update to finish or if it will be faster to fetch the archive from the networked volume to local disk, indexing there, then moving it back to the NAS.
I suspect that just letting the reindex run its course will be pretty close to the optimal speed. If you feel adventurous and have enough local disk space, you could copy the archive from the NAS to a local drive, reindex it, then copy just the repaired index files back into the original repository package. That works because the Reindex command does not alter the primary repository.data file, although you'll have to be careful that nothing tries to update the original archive while you're doing this. That might be faster, but I can't say for sure because it involves a lot of additional copying.

- QRecall Development -
 