Messages posted by: Gary K. Griffey
That makes perfect sense.

Thanks again for taking the time to provide such detailed answers to my questions.

GKG

James,

That is interesting...it would appear that you are indeed correct. The QRecall Helper process is running at 15.5%-16.5% of my 6-core Xeon CPU...which means it is maxing out one core.

So, I take it that QRecall cannot spread its work out over multiple CPU cores?
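For reference, a quick way to double-check this kind of single-core saturation from Terminal (the helper's exact process name may vary, so this just lists the top CPU consumers):

```shell
# ps reports %CPU relative to a single core, so a multi-threaded process
# can exceed 100%; a process pinned near 100% is saturating one core.
# Works on both macOS and Linux.
ps -A -o %cpu,comm | sort -rn | head -5
```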

Thanks,

GKG
James,

Thanks for the thorough explanation. I have a much better understanding of the options, the process, etc.

Your answer does raise one additional question. It would seem from your explanation that the vast majority of workload during a capture that is using the highest shifted quanta setting would be on the archive and its disk.

The capture that I described is targeting a large single file located on an SMB share. The archive, however, is being housed on a locally attached Thunderbolt 2 disk array. This Thunderbolt 2 disk is connected to a 2013 Mac Pro…with a 6-core Xeon CPU, 32 GB of RAM, etc. (i.e., plenty of horsepower).

When I looked at Activity Monitor during the initial 60 hour capture test using the highest shifted quanta setting, the Thunderbolt 2 drive was reading at 20-25 MB/s…certainly far less than its performance capacity. I would think that this drive would be literally “screaming” as QRecall was being forced to search every block in the archive for a quanta match. This did not appear, however, to be the case. Any thoughts?
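As a rough sanity check on the array's raw sequential-read speed, something like the following can be run against any large file on the volume (the path below is an illustrative placeholder, not QRecall's actual package layout; macOS's BSD dd wants a lowercase `bs=1m`):

```shell
# Read 1 GB sequentially and let dd report the achieved throughput,
# to compare against the 20-25 MB/s observed during the capture.
dd if=/Volumes/TB2Array/some-large-file of=/dev/null bs=1M count=1024
```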

Thanks again for all the info and a great product.

GKG
Greetings James,

I have a question concerning shifted quanta detection.

In reading the details in your help file concerning the benefits/drawbacks of using shifted quanta detection, it would seem to me that during the initial capture of a file to a newly created archive, shifted quanta would not be relevant at all. It would seem, at least from my understanding, that shifted quanta detection would only be relevant during subsequent captures of the same file.

This, however, does not appear to be the case. I created a new archive...and began to capture a single large virtual disk file that is being housed on a network share...this file is roughly 150 GB. At first, I set the shifted quanta in the archive to its maximum setting. After allowing the initial capture to run for nearly 60 hours...it was only 50% complete.

I cancelled the capture...created another new archive...and ran the same initial capture with shifted quanta turned off. The capture completed in just over an hour.

What am I missing? (A whole lot, I'm sure...but I always learn a lot from your responses.)

Thanks

GKG
Great...thanks for the info...

GKG
Greetings James,

Just an FYI...in the current beta release, I set the Log File "Keep Last" preference down to a value of 1. After doing this and restarting my system, the QRecall scheduler would crash constantly, every 10 seconds or so, with the error "Invalid Log File Keep Count"...

I set it back to 3...and it appears to be ok now. If a value of 1 causes this issue, you may want to remove it from the selectable values for this preference. Currently, you can select a value of 1 through 9.

Thanks,

Gary K. Griffey
The only issue that I have observed is about as innocuous as it gets...the striped "barber pole" progress bar does not appear to actually "spin" in the Activity Monitor window.

Just cosmetic...but odd nonetheless.

I have found no other issues.

GKG
Ok...your explanation makes perfect sense...

I know that archives "remember" what you were browsing during the previous session...and since the capture I performed today was from a different location entirely...when I opened the archive, the browser was still "pointed" at a previous folder...and as such, today's layer should have been hidden...

Thanks as always...QRecall remains rock-solid and certainly one of my favorite applications!!!!!

GKG
James,

Thanks for the reply. When I toggled the switch...the layer from today was indeed visible...although dimmed as you mentioned.

My question is...why is this layer considered "unrelated to the items in the browser"?

The capture was made from a different location (network share) from where the items had been captured from before...does this explain it?

Thanks, again...

GKG
Greetings James,

I have been using QRecall for many months now without any issues. This morning, I ran a capture action to an existing archive. The capture ended ok...and stated that it had captured a small amount of data (185 KB). When I subsequently opened the archive, however, no layer was created for today's capture. The last layer shows 02/03/2013.

I have never seen this before. I have sent a report.

Thanks,

GKG
James,

Thanks for the update and the detailed info on your process.

I will give the rsync another try...it sounds like just what I need to leverage QRecall's abilities and maintain offsite archives...

Thanks again!

GKG
James,

Thanks for the reply.

Yes...from my previous testing...I realize that using the most aggressive shifted quanta setting for a virtual disk is not the best use of resources...my intent was to illustrate that I need the resulting archive to be as small as possible...even if it costs CPU, etc.

I will take another look at rsync. I have tested with it before...but never had much luck getting it to transfer block changes only; possibly my option settings were incorrect.

In any event, thanks again for the reply.

GKG


Greetings James...

I have been rolling along using QRecall 1.2.1 with great success...no issues....thanks again for the great product....I use it every day...and rely on it constantly.

I wanted to run a new scenario by you...and seek your advice.

Currently, one of the things that I use QRecall for is to create archives of large VMware Fusion virtual disk drives. These virtual disk files are large binary files, some as large as 180 GB. QRecall does an excellent job of creating weekly incremental images of these large objects very efficiently. My desire, however, is to also maintain an offsite QRecall mirror archive.

This is how I envision it working.

1) A new QRecall archive is created at site "A" that includes one or more of these virtual disks. Even with the best compression and highest shifted quanta options...this archive could easily reach 100 GB in size.
2) This archive is then copied to an external drive...that is physically relocated to site "B".


Now, the problem statement. When the archive at site "A" is subsequently updated with a recapture operation of the virtual disks...I need a way to "refresh" site B's copy of the archive...preferably via a network connection....just the delta data would be transmitted, of course...then the archive at site "B" would somehow be "patched", for lack of a better term, and thus be a mirror of site "A"'s archive.

I have used many diff/patch utilities in the past to mimic this functionality...but they were all geared toward single binary files...not a package file/database, as QRecall uses.

Any suggestions? I would just love to leverage QRecall's amazing de-duplication abilities to assist in this endeavor.

Thanks for your time...as always...

GKG





James,

I concur with your findings. Running QR 1.2 (1.2.0.83) on OS X 10.8 DP4, the issue that I reported on previous 10.8 builds is now gone.

So far...smooth sailing...

Thanks,

GKG
James,

You are correct...no surprise.

When the Capture action to the archive stored on the SMB mount point fails, the archive is left damaged...and must be repaired first. This was automatically happening before...since I always ran a Verify action after the Capture failed...and the Verify fixed the archive's data component.

My error, in this case, was attempting to reindex the archive before manually repairing it (or running a Verify)...and it therefore failed, as you stated, because the data was damaged by the Capture.

I repeated my steps...after first copying a clean archive to the SMB share...

1) Running a Capture...which failed.
2) Repairing the now damaged archive on the SMB share
3) Deleting all the .index files in the package
4) Reindexing the archive...which did work fine.
5) Trying another Capture...which failed.


So...at this point, I believe it is still only the Capture to the SMB share that has the issue.
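For reference, step 3 above can be done from Terminal, since a QRecall archive is an ordinary package directory (the archive path below is an illustrative placeholder):

```shell
# Remove the index files inside the archive package; a subsequent
# Reindex rebuilds them from the archive's data, so only the indexes
# (not the captured data) are touched here.
find "/Volumes/SMBShare/Backup.quanta" -name "*.index" -print -delete
```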