QRecall Community Forum

Interface complexity and infojunk
Forum Index » Beta Version
Neil Lee


Joined: Nov 16, 2010
Messages: 9
I've been using QRecall for a while now and feel like I can finally offer some kind of informed opinion on the UI. I totally recognize that the problem this UI is trying to solve is exceedingly difficult, but I will be blunt: I recognize the goal is to create a "Time Machine Pro", but the user experience and flow within the UI is still exceedingly complex and, especially in the case of the archive view, cluttered with too much, well, infojunk.

An example - see this screenshot of my archive. I know you can turn this off, but what exactly does this communicate?

All of those lines are meaningless, at least to the untrained eye. I see the idea behind what they're supposed to signify, but at this scale, with this many connections, they don't actually add anything meaningful to the experience - it's infojunk.

Finding a file within the archive is similarly overcomplex: the search field seems to imply I can search my archive, but searches there do nothing. I think the problem is either my archive is too big, or maybe I'm not constraining the time span for the search properly? It's not clear.

It's possible I completely misunderstand how the archive view is supposed to work, but I guess that's specifically my point - the use is opaque to the user. Constraining the time span of the view is also confusing, specifically because the "handles" that you drag to narrow down the time span are out of view. You have to scroll to find them, and if I didn't know they existed, I'd never find them.

Overall I'm happy with the mechanics of how QRecall works - it has restored a lost Users folder and restored a full install perfectly, albeit very, very slowly (the Users folder restore took 2 days, for example!). I can see how its backup approach is technically superior to Time Machine, but the UI is so dense and unclear that it's hard not to want to switch back to Time Machine purely for efficiency's sake: backing up is a lot faster with QRecall, but figuring out where a particular file is and restoring files are so slow that those benefits are a wash compared to Time Machine.

I hate criticizing a UI without offering some suggestions, and I'm happy to collate a list of stuff I've noticed. My basic suggestion is that the default UI should be optimized for the primary use case for each task.

- Chances are if you open an archive it's to find or restore files. This is difficult with the current UI
- If you need to constrain the time span, it shouldn't require physical motion (scrolling and dragging) - give me a date picker or something more effective
- Search should provide more feedback - when you perform a search it looks like nothing is happening - which from my tests can sometimes be true?
- The view when you have multiple backups in a single archive is confusing. In the screenshot above, it's not clear why I have two instances of my system, and the presence of "Unknown" is unnerving. Where did that come from? What does that mean?

These thoughts are still pretty scattered, but I wanted to at least get the gist of them out. I'm happy to spend a bit more time diagramming how I think the user flows should work, but wanted to see what everyone's thoughts were.
James Bucanek


Joined: Feb 14, 2007
Messages: 1568
Hello Neil,

Good timing. I just spent over an hour iChatting with another user about QRecall's UI and how it might be made more accessible.

Neil Lee wrote: An example - see this screenshot of my archive. I know you can turn this off, but what exactly does this communicate?

The timelines impart a lot of information (which is, in itself, part of the problem). A timeline shows you, for a given item, how many different versions of that item exist in the archive and when each was captured. That's pretty important information if you want to recover a particular version of a file.

Neil Lee wrote: All of those lines are meaningless, at least to the untrained eye. I see the idea behind what they're supposed to signify, but at this scale, with this many connections, they don't actually add anything meaningful to the experience - it's infojunk.

I have to disagree that it's "infojunk," as you say. I do agree that there's a problem if you don't know what the interface is trying to communicate. Basically, I'm trying to display all of the details of a third dimension (different versions captured over time) in a two-dimensional interface. No other backup system that I know of tries to do this. Time Machine, for all its snazzy UI goodness, doesn't even try. It merely shows you the items as they existed at a particular time, but won't give you the history of any individual item.

The problem is compounded by the fact that you're in column view. When there was only list view, the timelines were manageable. But in column view, you can put considerably more items on the screen at once. Each timeline imparts a lot of information, and a lot of timelines start to overwhelm the interface.

But, as you pointed out, there's always the option of turning off the timelines if you're not interested in the individual history of every item.

Neil Lee wrote: Finding a file within the archive is similarly overcomplex: the search field seems to imply I can search my archive, but searches there do nothing.

Ah, that's because search is currently unimplemented. See the Known Issues section of the QRecall 1.2.0(35) beta release notes.

Neil Lee wrote: It's possible I completely misunderstand how the archive view is supposed to work, but I guess that's specifically my point - the use is opaque to the user.

That's a very valid point, and something I've struggled with since day one. The basic concept of multiple versions of items over time is really hard to convey in an interface. That's why I came up with the Time View, which is as close as I've been able to come to graphically presenting the actual structure of the archive.

Neil Lee wrote: Constraining the time span of the view is also confusing, specifically because the "handles" that you drag to narrow down the time span are out of view. You have to scroll to find them, and if I didn't know they existed, I'd never find them.

That's a good point, and something I want to address.

Neil Lee wrote: Overall I'm happy with the mechanics of how QRecall works - it's restored a lost Users folder and restored a full install perfectly, albeit very, very slowly -- the Users folder restore took 2 days, for example[!].

That's very strange. Recalling is almost always faster than capturing. I'd be very curious to know why your recalls are taking so long.

Neil Lee wrote: I hate criticizing a UI without offering some suggestions, and I'm happy to collate a list of stuff I've noticed.

I don't mind criticism, and I don't expect my users to design the UI.

Neil Lee wrote: My basic suggestion is that the default UI should be optimized for the primary use case for each task.

I agree.

Neil Lee wrote:
- Chances are if you open an archive it's to find or restore files. This is difficult with the current UI
- If you need to constrain the time span, it shouldn't require physical motion (scrolling and dragging) - give me a date picker or something more effective

I agree that finding the slider handles in the current UI is a bit awkward. But more to the point, the most typical task is to identify an item to recall and then rewind the archive to a specific version of that item. Previous versions of QRecall had a set of "VCR" buttons that let you move backwards and forwards in time, stopping only at specific versions of that particular file. The current rework of the UI has lost this feature, and I'm working on something to replace it.

Neil Lee wrote: - Search should provide more feedback - when you perform a search it looks like nothing is happening - which from my tests can sometimes be true?

Well, when the search is working again you can tell me if the feedback is sufficient.

Neil Lee wrote: - The view when you have multiple backups in a single archive is confusing. In the screenshot above, it's not clear why I have two instances of my system, and the presence of "Unknown" is unnerving. Where did that come from? What does that mean?

You have captured to this archive using multiple identities (identity keys). Each identity key you use creates a unique owner, and everything belonging to that owner is kept separate from all of the items belonging to other owners. This allows you to safely store the backups of two computer systems in the same archive; nothing will get confused, even if the hard drive name and every file name are the same.

You have an "Unknown" owner because, at some point, you repaired the archive and QRecall recovered files but couldn't determine which owner they belonged to; those recovered files are assigned to a special "Unknown" owner.

If one of these owners is now obsolete, or you want to get rid of the files belonging to "Unknown", you can select it and use the Archive > Delete Item... command. This will delete from the archive all of the items that belong to that owner.

Neil Lee wrote: These thoughts are still pretty scattered, but I wanted to at least get the gist of them out. I'm happy to spend a bit more time diagramming how I think the user flows should work but wanted to see what everyone's thoughts were.

I really appreciate your thoughts and the feedback. I'm acutely aware of some of QRecall's UI deficiencies, and I'm determined to correct them in this release.

- QRecall Development -
Neil Lee


Joined: Nov 16, 2010
Messages: 9
Thanks for the thorough reply! Just another reason why QRecall is great (for the most part). It's great to hear that these issues are being thought about and addressed!

Regarding the slow restore times - I'm not exactly sure why they're so slow, but I can provide at least a little context to my setup that might help.

- Mac OS X 10.6.7
- 2010 MBP backing up via gigabit Ethernet to a Time Capsule
- Current archive is approx. 510 GB

I've been having (possibly related?) problems with backups getting interrupted - the Time Capsule keeps getting unmounted from my system, and I'm still trying to track down why. But to give you an idea of the slowness I'm seeing: I'm currently trying to repair that archive, and here's where it is after 25 hours:

[screenshot: repair progress after 25 hours]

I think it might actually be stuck, as I haven't seen any of the numbers go up in over an hour. I know this is a huge archive, but is that normal?

Neil

Update: Yup, it's stuck - it hasn't changed in the entire time since I posted earlier. Any suggestions?
James Bucanek


Joined: Feb 14, 2007
Messages: 1568
Neil,

338 GB over 25 hours (yikes!) is about 3.8 MB/second. That's miserable throughput for a gigabit Ethernet connection. It is, however, pretty close to the transfer speed you'd get using AirPort.
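To put that number in perspective, here's a quick sanity check of the arithmetic (assuming decimal units, and taking ~125 MB/s as gigabit Ethernet's theoretical ceiling):

```python
# 338 GB transferred in 25 hours, expressed in MB/s (decimal units)
mb_per_second = (338 * 1000) / (25 * 3600)
print(round(mb_per_second, 1))  # 3.8

# Compare against gigabit Ethernet's theoretical ~125 MB/s ceiling
percent_of_link = mb_per_second / 125 * 100
print(round(percent_of_link))   # about 3 percent of the link's capacity
```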

Are you absolutely certain that you don't also have AirPort enabled on your MacBook Pro? With both AirPort and an Ethernet connection active, it's not that hard for the system to connect to the file server over AirPort. It will then stick with that interface until the volume is unmounted.

If you're certain AirPort isn't the issue, then the problem could be with the Time Capsule itself. There is no reason why a Time Capsule can't read or write to a volume at nearly 10x the speed you're seeing.

Some other random thoughts:

The fact that the QRecall monitor numbers are not changing doesn't (conclusively) mean that QRecall is completely stuck. It's possible that it's reading a very large empty region of the archive, which at your throughput could take a while. Activity Monitor is the easiest way to see if the QRecallHelper process is still using CPU time and if there's steady activity over the network.

I would suggest (a) repairing the volume and then (b) repairing the QRecall archive. If the Time Capsule volume is internal, see Repairing your backup disk. If it's an external volume, disconnect the drive from the Time Capsule and connect it directly to the MacBook Pro. Repairing will be vastly faster and more reliable with a direct connection.

Finally, if the MacBook Pro is losing its connection with the Time Capsule and the volume is unmounting during a QRecall action, that's a "very bad thing" (from QRecall's perspective). It also makes me think this is an AirPort problem; wired Ethernet connections are typically pretty solid. Disconnects will play havoc with data integrity, and could lead to corruption of the volume structure, which opens another can of worms.

- QRecall Development -
Neil Lee


Joined: Nov 16, 2010
Messages: 9
Thanks for the speedy reply.

It's definitely going over the wire - I have AirPort turned off on my machine, and I also just did a test transfer with a large file (a 20 GB VM image), which transferred a LOT faster.

I checked out that Apple support doc. As far as I can see, it deals with repairing Time Machine sparse images and not the actual disk itself - it's not possible to drag the entire TC volume into the Disk Utility window.

If it were easier to pull the hard drive out of the TC itself, I'd try that, but the fact that I can get pretty decent transfer speeds with other files makes me wonder exactly where the issue lies.

Neil
James Bucanek


Joined: Feb 14, 2007
Messages: 1568
Neil,

Another random thought: I noticed from your screenshot that you have a lot of layers in your archive. And I mean a lot of layers. Even some of my biggest, oldest archives only approach 300 layers, and yours has almost four times that.

I did some early stress testing of QRecall with archives of up to 1,000 layers, but have largely used much shallower archives in performance and regression testing.

The number of layers in an archive adversely affects its performance, and the number of layers in your archive could be causing serious performance degradation. For example, reading the file records for a particular folder requires a lot of short reads from the archive. For an external hard drive this is really fast, but for many network volumes (i.e., a Time Capsule), a lot of short reads over the network can be really slow. For a handful of files in a dozen or so layers, the added time might not be noticeable. But for 1,000 layers this can add up to huge delays.

So I have two questions.

First, when you verify the archive, is the data transfer rate what you would expect? I ask because verify uses a special DMA mode that reads data as fast as the source can supply it, regardless of individual record size. So if verify is fast, but recalling and browsing are really slow, then the number of layers is probably the problem.

Secondly, do you really need all of those layers? The idea behind the rolling merge is that you keep fine-grained changes for the past week or two, then merge them into more compact and efficient deltas as time goes on: first into daily layers, then weekly layers, and finally a single layer per month.

- QRecall Development -
Neil Lee


Joined: Nov 16, 2010
Messages: 9
To be honest, I tried to understand the differences between merge and rolling merge and what exactly they do, but didn't fully get it, so to be completely safe I avoided creating an action using either. Merging gives the impression that stuff might be thrown away, and I didn't want to mess with something I didn't understand. (I know, I should have RTFM.)

I don't have access to that archive at the moment, so I can't do speed tests, but once I do I'll give that (and running a merge) a try and see what happens.
James Bucanek


Joined: Feb 14, 2007
Messages: 1568
Neil,

Here's the short lesson:

Merging does, indeed, throw stuff away: a merge keeps the newest stuff and throws the old stuff away. But over time that's a good thing; otherwise your archive grows forever (or gets unmanageably complex).

Ignore the plain merge action, that's for geeks.

A rolling merge action works on the idea that you want to keep recent fine-grained changes (the file you changed Tuesday morning vs. the version you saved Monday afternoon), but don't care about fine-grained deltas that are months old (two report documents saved a day apart, some six months ago).

The rolling merge action lets you choose blocks of time relative to today. These are, in order: a number of days during which to keep only the last version of each item captured that day, then a number of weeks to keep only the last version of each item that week, and so on with a number of fortnights, months, and finally years.

Here's an example. If you choose to keep 7 day layers, 8 week layers, and 12 month layers, here's what happens when the rolling merge runs:
- All layers captured today are left alone. (This is the minimum "ignore" period described later.)
- The layers for the previous 7 days are organized into 7 groups. All of the layers within a single day are merged into one layer, keeping only the last version of each item captured that day.
- The layers for the previous 8 weeks are then organized into 8 groups. All of the layers within each week are merged into one layer, keeping only the last version of each item that week.
- Finally, the layers for the previous 12 months are organized into 12 groups ... and you get the idea.
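As a rough illustration of the grouping in the example above (this is not QRecall's actual implementation; the function and bucket names here are made up), each layer's capture date falls into a bucket, and all layers sharing a bucket get merged into one:

```python
from datetime import date

def merge_bucket(captured: date, today: date) -> str:
    """Assign a capture date to a rolling-merge group.

    Hypothetical sketch of the 7-day / 8-week / 12-month example;
    layers that share a bucket would be merged into a single layer.
    """
    age = (today - captured).days
    if age < 1:
        return "keep"  # today's layers are left alone
    if age <= 7:
        # one merged layer per day
        return "day-" + captured.isoformat()
    if age <= 7 + 8 * 7:
        # one merged layer per ISO week
        year, week, _ = captured.isocalendar()
        return f"week-{year}-{week:02d}"
    # beyond that, one merged layer per month
    return f"month-{captured.year}-{captured.month:02d}"

today = date(2011, 5, 1)
print(merge_bucket(date(2011, 5, 1), today))   # keep
print(merge_bucket(date(2011, 4, 28), today))  # day-2011-04-28
print(merge_bucket(date(2011, 3, 15), today))  # a weekly bucket
print(merge_bucket(date(2010, 11, 2), today))  # a monthly bucket
```

(A real implementation would work on layers rather than bare dates, but the bucketing idea is the same: anything sharing a bucket collapses to the last version of each item in that period.)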

You can choose whatever time periods you want, and make them as small or big as you like. The action is very flexible.

There's a special "ignore" time period that extends the span of time during which layers are not merged. So if you want to keep every hourly layer for the past two weeks, set the ignore setting to 14 days. The rolling merge will then start grouping layers beginning 15 days ago.

Your first merge will take some time, because you've got a lot of layers to merge. But after that, rolling merges run pretty quickly, as there are usually only a few layers to merge on any given day. I typically schedule my rolling merge to run once a week, followed by a compact action. The compact action recovers the disk space freed by the merge.

- QRecall Development -
 