|
As Manfred mentioned, we have a version of QRecall that works with Sierra. I'm pretty happy with the Sierra compatibility at this point. We've performed several full-volume OS X restores and so on with no issues. There were just a couple of other bugs and minor improvements that we'd like to get included in 2.0.4, and that's the only reason it hasn't been released yet. But we're ready to pull the trigger on 2.0.4 at any time.
|
|
|
Hello Paul, The next beta for QRecall is still several months off, but here's what's going on. I've spent the better part of this summer replacing the aging Mach ports inter-process communication (IPC) infrastructure with XPC, Apple's modern IPC service. I'm also rewriting the whole of QRecall to use automatic reference counting (i.e. modern memory management). These changes are the last of the major rewrites needed to bring QRecall up to date with the latest core technologies, and should improve both stability and performance. I hope to have this all finished and tested by the end of September. After that, there's a short list of new features:
A UI facelift. This is a work-in-progress, so how radical this will be has yet to be determined.
A new "Stacks" feature, inspired by work done by Bryan Derman, that lets you incrementally offload a copy of your archive to remote storage. Initially this will be targeted at optical media, but I hope to evolve it into a multi-faceted feature that supports a variety of file transfer protocols (SFTP, WebDAV, ...) as well as cloud storage services like iCloud, Dropbox, Amazon S3, Backblaze, and so on.
I want to implement a long-overdue feature that allows you to run a script before an action starts and/or after it finishes. A script could mount a specialty filesystem, wake up a server, export a database, suspend a daemon, and so on.
|
|
|
Kurt Liebezeit wrote:So, I'm stuck... I thought I pasted my valid identity key in the place where it goes, and the Preferences say it is valid and permanent, but it isn't working. What have I done wrong?
Kurt, You've done nothing wrong. This is a (*cough*) quirk (*cough*) of OS X's preferences system, which has been covered in other threads. Simple solution: restart your system and the capture should run OK.
|
|
|
Ralph Strauch wrote:Today I decided to try wifi again and again got the error, so I followed up immediately with a backup over firewire. I then noticed that the earlier wifi backup had apparently completed, and added a layer to the archive even as it was complaining about a "problem closing file." (I've also noticed this occasionally in the past.)
There are two places the "problem closing file" error can occur. The first is a permanent failure of some kind (say, the network or drive gets disconnected) that prevents QRecall from closing its open files while trying to clean up and terminate. The other, which is what happened here, can occur when everything goes the way it should, but just as QRecall is finishing up and closing the completed files, the OS complains that something went wrong. All of the data was successfully written, but when QRecall went to close the very last file, the network hung for 8 minutes and then reported that the file couldn't be closed (POSIX error 60, "timeout"). In reality, all of the data was probably written, which is why the archive was intact and you had a complete layer. I don't have any good theories as to why this might happen. It's possible that the server simply had a lot of unwritten data to flush, and the network operation timed out before the server had finished writing all of its buffered data. But that would require either tens of GB of unwritten data on a really, really slow drive, or else a server that was frightfully busy doing other things at the same time.
I've been using qrecall since 2007 and this problem only cropped up with v2, so I'm guessing that something in v2 changed what happens with the wifi connection when the target computer is asleep.
This is much more likely due to the change in filesystem API. QRecall 1.x uses the legacy Carbon API while QRecall 2.x uses the BSD (UNIX) API. Each API has its own idiosyncrasies and error handling, so there are bound to be some behavioral differences between the two.
|
|
|
Paul Mann wrote:Can I get QRecall to issue a shell command where I could add the WOL request via a command line tool perhaps?
That feature is planned for a future version.
|
|
|
Paul Mann wrote:Is this possible to set a condition of sending a magic packet to a mac address, then wait, then look for the destination archive?
Not in this version. QRecall doesn't really work at the server level; it works at the filesystem level. If your server supports being woken up when a client connects to a volume, or in response to a bookmark mount request, then it should "just work." Since most dedicated file servers run all of the time, I'm going to assume that your server is another Mac that's set to go to sleep periodically. The simplest solution for that is to use the Energy Saver control panel and program it to wake up a minute or two before your regularly scheduled actions. The Energy Saver control panel only presents a UI for setting a single wake event, but OS X can actually manage an arbitrary number of scheduled power management requests. If you need more than one wake time, you can do this the propeller-beanie way using the pmset command-line tool, or look for a third-party app that does the same thing.
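For example, assuming a capture scheduled for 7:00 AM on weekdays (the days and times below are placeholders; adjust them to your own schedule), pmset can register wake events from Terminal:

```
# Repeating wake (or power-on) every weekday at 6:55 AM,
# five minutes before a 7:00 AM scheduled capture:
sudo pmset repeat wakeorpoweron MTWRF 06:55:00

# A one-off wake event for a specific date and time:
sudo pmset schedule wake "12/24/2016 06:55:00"

# List the currently scheduled power events:
pmset -g sched
```

Note that `pmset repeat` manages a single repeating schedule, while `pmset schedule` can queue any number of one-off events.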
|
|
|
Kurt Liebezeit wrote:I'm expecting that it will store a good-sized ~10GB chunk of new unique data into the archive as a result of the operating system change, but otherwise the applications and user data should just be deduplicated, right?
Correct.
If you can think of any subtleties that I might want to address before the migration, I would be grateful to hear them.
What's going to happen is that a new set of volumes will appear in your archive. QRecall will almost certainly detect that the volumes you are now capturing are not the ones you were capturing before. Everything will be captured anew (with data de-duplication), but you'll still get a new layer in which every file appears to be a new version, with no history. Your old volumes are still in the archive, but will no longer be appended to. When browsing your files, you may have to navigate between volumes if the item you're looking for is before, or after, the transition. If this is acceptable, you don't have to do anything. If, on the other hand, you'd prefer to have a single history for all of your documents, then you'll want to combine each new volume with its old one. You're most likely to want this with your user files partition. Navigate to the "owner" level of the archive, select both the new user partition volume and the old user partition volume, and then choose Archive > Combine Items.... The items in the old volume will be migrated to the new volume, where they will appear in the history that precedes the new volume. The old volume will then be deleted.
|
|
|
Mike M wrote:Does that sound okay?
Sounds perfect! A few tidbits of advanced wisdom:
- The "couple of directories" can be part of the same capture action. A capture action can capture any number of source items (files, folders, or volumes).
- If the folders you're capturing are broad (e.g. ~/Documents & ~/Pictures), then an hourly schedule is the most efficient approach. But if your folder is very targeted (~/Documents/Projects/Hot), then consider the "When source items change" event schedule. QRecall will monitor that folder and immediately start a capture only when it changes. It's more timely than waiting for the hourly capture, and it can be more efficient because the action only runs when there's something to capture. You just don't want this kind of schedule on a folder that changes all the time (like your home folder, which changes every few minutes).
|
|
|
Mike M wrote:I have another situation. I frequently download large data files into the Downloads directory, and within a week, delete them. I have no need to save them longer than that. But if they are captured, then they may stick around in the archive for a long time, depending on how frequently I merge layers. I wanted to see if there's a way to merge only portions of layers, say only the files in the Downloads directory. I don't see such a thing in the docs. Of course, I bet that such an action doesn't make logical sense--it might violate some consistency rule in the archive. Just guessing.
You're correct: it's not possible to selectively merge individual folders. It really violates the logical definition of a layer.
So that leaves the delete option. I see that I can manually delete some of these files. Is there a way to create a Delete action that would run on a schedule and delete stuff in Downloads?
It's possible to do this with the command-line tool. You could script the qrecall tool to periodically delete your Downloads folder from your archive, say once a week. But honestly, if you don't want to keep your downloads in the archive, consider just excluding them from the capture. You can do this in the archive's settings, or by setting the "Exclude Contents" capture preference on the Downloads folder. Files that you only want to keep a backup of for a few days aren't worth capturing in the first place, particularly since these are transient files which, presumably, can be downloaded again should you lose them for some reason.
|
|
|
Mike M wrote:I have a 2TB WD Passport drive. This is only a few months old. I recently purchased DriveDx in order to read the SMART data, which shows one condition of concern, which is that there are 3 bad sectors waiting to be remapped. DriveDx pops up a little message saying that a drive with even one bad sector *waiting* to be remapped (I think the "waiting" concept is part of the issue) is 16 times more likely to fail within a year.
I can't speak to the "16 times more likely" prophecy; that seems high to me (because, as others have pointed out, all high density drives have bad sectors), but I'll assume DriveDx has the stats to back that up. Regardless, no drive is perfect, which is why we make backups in the first place. QRecall 2.0 has a data redundancy feature. When creating the archive, or from the archive settings, choose the level of data redundancy you want QRecall to add to the archive. The best settings (from a performance standpoint) are 1:16, 1:8, and 1:4. In your situation I'd recommend 1:16, or 1:8 if you're really cautious. At the 1:8 level, QRecall will write one redundancy byte for every 8 bytes of archive data, increasing the size of your archive by about 12.5%. The upside is that QRecall can then recover from the loss of one or two blocks of data within any span of 16 consecutive blocks. So QRecall can still read your archive, even if you lost 2 or 3 sectors on the drive. Be aware that data redundancy can't protect against wholesale loss of the entire drive, or even several whole tracks. But even in those dire circumstances, its recovery feature can extract all readable data to a second hard drive.
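For intuition about how redundancy data enables recovery, here's a toy single-parity sketch in Python. This is not QRecall's actual scheme (which, as described above, can correct more than one lost block and so presumably uses a stronger code); it just illustrates the trade-off: one XOR parity block per 8 data blocks costs the same 12.5% overhead as the 1:8 setting, and lets you rebuild any one lost block in the group.

```python
from functools import reduce

def parity(blocks):
    """XOR each byte column across a group of equal-sized blocks."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

def recover(blocks_with_gap, parity_block):
    """Rebuild the single missing block (marked None) in a group:
    XORing the surviving blocks with the parity block cancels
    everything except the lost data."""
    present = [b for b in blocks_with_gap if b is not None]
    return parity(present + [parity_block])

group = [b"spam", b"eggs", b"milk"]   # three 4-byte data blocks
p = parity(group)                      # the redundancy block
damaged = [group[0], None, group[2]]   # middle block lost
print(recover(damaged, p) == group[1]) # True
```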
This 2 TB drive would be handy for holding two QRecall archives (one for the whole drive, one for my home directory),
There are really very few advantages to doing this. You'll use a lot more disk space and cause QRecall to work twice as hard. The most efficient arrangement is to store your entire startup volume (which will include your home folder) in one archive. If you want to capture your home or Documents folder to your archive more frequently than your system and applications, create two capture actions with different schedules, both capturing to the same archive. Think of archives like hard drive partitions: it's almost always more efficient and simpler to manage one big partition than a bunch of small ones.
|
|
|
Mike M wrote:I really liked QRecall during my trial period and I purchased a key.
Welcome aboard!
Thus far I have just been backing up my user directory, so I'm not backing up any apps or the OS or anything else. I know that the term "cloning" applies to making a copy of entire drives. But I am not sure that is what I want.
QRecall works quite differently than almost any other backup solution you'll find. The term "cloning" doesn't apply at all. QRecall always captures the changes, and just the changes, made to your files. Conceptually, it preserves every version of every item it captures, and its de-duplication logic means that those copies are stored in the least amount of space possible.
What I would like to do is back up all my files on a regular schedule. Sometimes I make use of backups to look at old versions in the case that I accidentally delete or modify a file. The other situation is a total failure of my main hard drive, in which case I would like to set up the entire drive or partition just the way it used to be. Is it possible to handle both of these use cases in a single type of backup operation?
QRecall can, absolutely, handle all of these cases. To capture your operating system, create a capture action that captures your entire startup volume. You can have just one big capture action, or split it up into multiple actions (say, one for your home folder and another for your entire volume) so they can be scheduled to run at different times. If you only have capture actions, QRecall will preserve every version of every file you tell it to capture. But unless you have an unlimited amount of disk space on your archive volume, you'll eventually want to discard some of those older copies to make room for new ones. That's where the merge and compact actions come into play. These are all explained in the documentation, but the easiest way to get started is to use the Capture Assistant (under Help). Tell it you want to create a backup strategy and then answer the questions. When it's done, take a look at the actions the assistant creates (in the Actions window) and then begin tweaking and modifying those actions to get what you want. If you want to start over, just delete all of your actions and run the Capture Assistant again. As for restoring your entire system in case of a catastrophe, there are several sections of the help dedicated to this very subject. Short version: do yourself a big favor and create a minimal startup system ahead of time, on an external hard disk or USB drive (the drive holding your QRecall archive is perfect for this). When disaster strikes, all you have to do is boot from your external volume, restore what you need, and be back in business within minutes. If you have any further questions, post them here or write to support@qrecall.com.
|
|
|
Gary K. Griffey wrote:So, I take it that QRecall cannot spread its work out over multiple CPU cores?
QRecall is heavily multi-threaded. You'll see that some actions, like verify, can keep 6 or more CPU cores running at 100%. But de-duplication is, inherently, a sequential process. Consider just two blocks. If you check whether they are both unique simultaneously, they can both be judged unique (no similar blocks found in the archive), even though those blocks could be duplicates of each other. Furthermore, if both blocks are unique, they have to be added to the cache before checking the next block. But two processes can't modify a structure like a cache simultaneously, and you can't use the cache until the add operation is complete. So while there are a few operations that could be performed in parallel, there are so many intermediate steps that have to occur sequentially that it's actually a waste of time trying to turn it into a multi-tasking problem. Now if you happen to have encryption, compression, or data redundancy turned on, those processes are independent of the de-duplication logic and are all performed on separate threads.
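To make the sequential dependency concrete, here's a minimal block de-duplication loop in Python. It is an illustration only, not QRecall's implementation: a plain dict of SHA-256 digests stands in for QRecall's in-memory caches.

```python
import hashlib
import io

BLOCK = 32 * 1024  # 32 KiB blocks, as in the description above

def dedup_capture(stream, seen):
    """Sequential de-duplication sketch. 'seen' maps block digests to
    storage order. Each uniqueness test depends on every block added
    before it, which is why the loop can't naively be split across
    cores: two workers could both judge identical blocks 'unique'."""
    stored = []
    while True:
        block = stream.read(BLOCK)
        if not block:
            break
        digest = hashlib.sha256(block).digest()
        if digest not in seen:          # the uniqueness test...
            seen[digest] = len(stored)  # ...must finish updating the
            stored.append(block)        # cache before the next block
    return stored

# Three blocks, two identical: only two are stored.
data = b"A" * BLOCK + b"B" * BLOCK + b"A" * BLOCK
print(len(dedup_capture(io.BytesIO(data), {})))  # 2
```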
|
|
|
Gary, Your I/O isn't maxed out, but I bet the CPU for the QRecallHelper process is running at 100% (or more). If you want performance, you have to avoid physical I/O as much as possible. I/O is slow; memory and the CPU are fast. So QRecall uses massive in-memory maps, tables, indexes, and caches to first determine whether a data block is unique or not, before any reading or writing of the archive begins. In fact, most (around 80%) of the memory used by QRecall during a capture is devoted to these lookup tables and caches. When your archive is relatively small, QRecall can determine whether a block is unique 99.9% of the time without having to read any data directly from the archive. This process is typically 200 to 300 times faster than the time it takes to write a single new block to the archive, which is why shifted-quanta detection can test 30,000 variations of a single block so quickly.
|
|
|
Gary K. Griffey wrote:In reading the details concerning the benefits/drawbacks of using shifted quanta detection in your help file, it would seem to me that during the initial capture of a file to a newly created archive, shifted quanta would not be relevant at all. It would seem, at least from my understanding, that shifted quanta detection would only be relevant to the conversation during subsequent captures of the same file.
It's true that it would be "most relevant" when applied to a previously captured file, but the mechanics of shifted-quanta detection are applied to every block of data, no matter what files have been captured. Specifically, data de-duplication is performed at the data block level; it has no knowledge of files. It simply compares each newly captured block of data against the corpus of previously captured blocks.
This, however, does not appear to be the case. I created a new archive...and began to capture a single large virtual disk file that is being housed on a network share...this file is roughly 150 GB. At first, I set the shifted quanta in the archive to its maximum setting. After allowing the initial capture to run for nearly 60 hours...it was only 50% complete.
60 hours? Wow, you are really committed to this experiment. Here, in a nutshell, is how QRecall performs data de-duplication. First, let's consider how de-duplication works when shifted-quanta detection is turned off: Files are divided into 32K blocks. When you capture a file, the first 32K of data (the bytes at offset 0 through 32,767) are read. QRecall then determines if there's an identical block of data already in the archive. If there is, then this is a duplicate block and it does not need to be added. If not, the new block is unique and is added to the archive. The next 32K of data (the bytes at offset 32,768 through 65,535) are read and the process repeats. The work to capture data increases as the size of the archive increases, but through various data tricks and optimizations it goes pretty fast, because each 32K block of data only needs a single up/down vote on whether it's unique or not. Now let's look at what happens when you turn shifted-quanta detection all the way to the maximum. The process starts out the same: the first 32K (bytes 0 through 32,767) are read and checked for uniqueness. If the block is a duplicate, then it is not added to the archive, and QRecall skips immediately to the next block. If the block is unique, however, the process changes. When shifted-quanta detection is on, QRecall then shifts 1 byte and considers the 32K block of data at offset 1 (bytes 1 through 32,768). If that block also turns out to be unique, it shifts one more byte and sees if the block formed by bytes 2 through 32,769 is unique, and so on:
Are bytes 0 through 32,767 contained in the archive? No?
Are bytes 1 through 32,768 contained in the archive? No?
Are bytes 2 through 32,769 contained in the archive? No?
Are bytes 3 through 32,770 contained in the archive? No?
(repeat 32,764 more times)
If any of those blocks turns out to be a duplicate, then QRecall has found a "shifted duplicate".
The fragment before the shifted duplicate is captured as a separate block, and the whole thing repeats starting with the next block immediately after the shifted duplicate. However, it's also likely that all 32,768 tests will be false. If so, then QRecall knows that the block is unique and doesn't overlap any shifted duplicate block. The block is added to the archive and the process starts over with the next block in the file. While shifted-quanta detection is pretty ruthless about finding duplicate data, you can also see that if the data is all unique, the process takes 30,000 times longer[1] to come to that conclusion. That's why a 1-hour capture with no shifted-quanta detection takes 120 times longer when you max it out. That explains "no" shifted-quanta detection and "full" shifted-quanta detection. The settings in between simply toggle between the two modes based on how effective they are. When shifted-quanta detection is set to "low" or "medium," the capture starts out performing full shifted-quanta detection. But after a few dozen blocks, if this work doesn't find any shifted duplicates, shifted-quanta detection is temporarily turned off and QRecall switches to straight full-block de-duplication. This is controlled by a CPU budget. If shifted-quanta detection doesn't find any shifted duplicate blocks, the budget runs out and QRecall stops performing shifted-quanta detection. If it does find shifted duplicates, the budget is increased, so QRecall spends more time searching for duplicates. The basic idea is that shifted-quanta detection is most productive when there are large amounts of shifted, duplicate data, so QRecall doesn't have to test every single block to discover one. If shifted-quanta detection is producing results, QRecall does it more. If it's turning out to be a huge waste of effort, it does it less. Let me know if that adequately explains things.
[1] Full shifted-quanta detection doesn't, literally, take 30,000 times longer than de-duplicating a single block, thanks to a few mathematical tricks and other optimizations. For example, QRecall uses so-called "rolling" checksums to create a "fingerprint" of the data in a block. The work required to calculate the first checksum, for bytes 0 through 32,767, requires about 64,000 shift and add operations. But the checksum for bytes 1 through 32,768 can then be calculated with just three more operations: essentially "removing" the value of the first byte and "adding" in the value of the next byte. Thus, the checksums for all of those intermediate blocks of data can be calculated very quickly, the total work being not much more than calculating the checksum of a second block. Of course, that's just the checksum; each block still has to be tested against those blocks already captured, but it does save a lot of time.
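The "rolling" trick in [1] can be sketched in a few lines of Python. The window size and checksum formula here are simplified stand-ins (QRecall's actual checksum isn't described in this thread); what matters is that sliding the window one byte costs a constant three operations instead of re-summing the whole window.

```python
def full_sums(block):
    """Compute a window's two running sums from scratch:
    a = plain sum of bytes, b = position-weighted sum (rsync-style)."""
    n = len(block)
    a = sum(block)
    b = sum((n - i) * x for i, x in enumerate(block))
    return a, b

def roll(a, b, out_byte, in_byte, window):
    """Slide the window one byte using three arithmetic operations:
    drop the departing byte, add the arriving byte, adjust the
    weighted sum."""
    a = a - out_byte + in_byte
    b = b - window * out_byte + a
    return a, b

data = bytes(range(100))
n = 8
a, b = full_sums(data[:n])                  # full work once, at offset 0
a, b = roll(a, b, data[0], data[n], n)      # O(1) update to offset 1
print((a, b) == full_sums(data[1:n + 1]))   # True
```

A checksum match is only a fingerprint hit; as the footnote says, a candidate block still has to be compared against the previously captured data before it's declared a duplicate.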
|
|
|
Mike, Thanks for checking out QRecall. You can pause a running action, but that simply suspends the process. The archive files, and the volume that it's on, will remain open and connected.
The recommended method: If a capture has started and you need to disconnect the archive volume or shut down/restart, simply stop the capture by clicking the (X) button in the monitor. Unless your archive is huge, the capture will wind up and close the archive within a few seconds. (Or just shut down/restart; the normal OS X shutdown sends the same stop signal to the capture action.) All captures are incremental; the next capture will begin where the last one left off. This will result in an "incomplete" layer in your archive. This is just a caution that the capture which created that layer didn't get a chance to finish, so when recalling from that layer be aware that not all items were captured. The next capture will fill in the blanks. (If you want to be extra neat, merge the incomplete layer with the one that follows to create a complete layer, as if the incomplete capture had never happened.) Most long-running actions, notably capture and compact, are incremental. It's always safe to stop them when you need to and start them again later. There's even a shortcut in the monitor: right/control-click, or click-and-hold, the (X) stop button in the action's progress pane. The Stop and Reschedule command will stop the running action and immediately reschedule it to start again at a later time. A few actions, like merge and combine, are not incremental and can't be interrupted. But these actions are typically quick and generally don't take more than a few minutes to finish.
The not-recommended method: QRecall is designed to tolerate crashes, network failures, power outages, and dog/cat/ferret/foot-induced volume disconnects. If you have a running capture, you could just pull the plug on your external volume and run. The next action will detect that the archive was left in an unfinished state and rewind it back to where it was, undoing all of the work of the interrupted action. This works about 99% of the time. In the rare conditions where QRecall can't automatically recover, you'll need to repair the archive before continuing. While this isn't the recommended method for stopping a capture, it should underscore that you don't have to treat archives with kid gloves. The archive structure is carefully designed to survive all kinds of disasters and data loss, which the repair command will recover from.
|
|
|