Grading On the Curve (Why the UI, Part 8)

This is the eighth part in my eight-part series of entries in which I outline some of the reasons we decided to pursue a new user interface for Office 2007.

Over the last two posts, I've discussed the Customer Experience Improvement Program and some of the data we've collected from the program.

How do we use that data to influence the design and organization of the Office 2007 user interface?

If you plot the command usage of the Office applications on a graph, you get a curve. A few commands account for a lot of clicks, and then slowly the number of clicks per command tapers off. We use the data represented by the curve to inform us about how often people use certain commands. The curve itself helps us visualize the usage pattern of the overall program and the average "depth" to which most people use the product.
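To make the shape of that curve concrete, here is a minimal sketch in Python of plotting per-command clicks in descending order. The click counts below are invented for illustration; none of them come from the actual CEIP data.

```python
import matplotlib.pyplot as plt

# Invented click counts per command; purely illustrative, not real CEIP numbers.
usage = {
    "Paste": 950_000, "Save": 820_000, "Copy": 610_000, "Undo": 540_000,
    "Bold": 310_000, "Font Size": 120_000, "Insert Table": 45_000,
    "Watermark": 4_000, "Superscript": 2_500, "Mail Merge": 1_800,
}

# Sort from most- to least-used to expose the long tail.
counts = sorted(usage.values(), reverse=True)

plt.plot(range(1, len(counts) + 1), counts, marker="o")
plt.xlabel("Command rank (most used first)")
plt.ylabel("Clicks")
plt.title("A few commands get most of the clicks; the rest taper off")
plt.show()
```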

Many people suggest that "you guys should optimize the UI to match the feature usage data." On the surface, this sounds like a solid idea; you could have a computer determine the organization and prominence of different features depending on what part of the curve they are in. It would be very scientific. The only problem? We've already designed that product, and it's called Office 2003.

Put another way: if all we want to do is design a product that matches today's pattern of feature usage--well, I don't have to do any work! Office 2003 already matches the curve exactly; we can't do any better than statistical perfection.
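To see why, imagine actually writing the "let the data decide" program. A hedged sketch follows; the tier names and cutoffs are my own illustration of the naive approach, not how Office is actually designed.

```python
def assign_prominence(usage: dict[str, int]) -> dict[str, str]:
    """Naively bucket commands into UI tiers purely by usage rank."""
    ranked = sorted(usage, key=usage.get, reverse=True)
    tiers = {}
    for i, command in enumerate(ranked):
        if i < len(ranked) * 0.10:      # most-clicked 10% of commands
            tiers[command] = "top-level button"
        elif i < len(ranked) * 0.40:    # next 30%
            tiers[command] = "secondary menu"
        else:                           # the long tail
            tiers[command] = "buried in a dialog"
    return tiers
```

Fed data collected from Office 2003, a program like this simply hands back Office 2003's own layout, which is exactly the problem described above.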

The real equation at work here is data + human = design. We need to take the data, analyze it, understand its shortcomings, and use it to inform a design which meets our goals. But, in itself, the data cannot produce a UI because it has no goals and is a reflection of the DNA of a product you already shipped!


(Image caption: "Twice a day, I go in and swap out the tapes...")

So, back to the initial question. How can we use the data to inform the Office 2007 design? There are two less obvious ways.

One thing we do is to look for desirable features that have low usage numbers. In general, this combination is a great opportunity for us to take advantage of work we've done in a previous version by helping people find useful features that they don't know are there. We can measure the "desirability" of a feature in several ways: a lot of direct customer requests, questions about the missing functionality on newsgroups and message boards, and sometimes just our gut feeling that people would like a feature if they only could find it. An example of this is the feature in Word that allows you to put a watermark behind a document. Lots of people ask how to do it, but can't figure out how. The prominent gallery of watermarks in Word 2007 has already prompted many people to comment on what a "great new feature" it is.
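A rough sketch of that first kind of analysis, assuming hypothetical click counts and a made-up "requests" score as a proxy for desirability (the numbers and thresholds are invented, not taken from the CEIP data or our actual tooling):

```python
# Invented per-command metrics; the scores and cutoffs are illustrative only.
commands = [
    {"name": "Watermark",  "clicks": 4_000,   "requests": 1_200},
    {"name": "Bold",       "clicks": 310_000, "requests": 15},
    {"name": "Mail Merge", "clicks": 1_800,   "requests": 90},
]

# "Hidden gems": features people ask about far more often than they use them,
# i.e. candidates for more prominent placement in the new UI.
hidden_gems = [
    c for c in commands
    if c["clicks"] < 10_000 and c["requests"] > 500
]

for c in hidden_gems:
    print(f"{c['name']}: {c['requests']} requests but only {c['clicks']} clicks")
```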

Of course, there are things that can derail this process. First, a low-quality or poorly designed feature will not succeed no matter how easy it is to find in the UI. Where possible, we've tried to "spruce up" old features to make their raised prominence worthwhile. Second, a bad name for a feature can turn people off from using it. Do we change the name, hoping that new people will discover it? Or do we keep it the same, knowing that it hurts discoverability but that existing users of the feature won't be confused? It's a hard judgment call to make sometimes.

The second way we use the data is by looking for frequently-used features that are hard to get to today. Any time we see this, it represents people overcoming the user interface to use a buried feature because it's so important. A great example of this is "superscript" in Word. In Word 2003, it must be added to the toolbar manually through customization. Yet, even as a non-default toolbar button, it gets more clicks than 30% of the buttons on the Formatting toolbar. The opportunity here is to discover the things that people love and that even more people would use if they knew they could.
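The second analysis is the mirror image of the first: look for commands that rank high in clicks even though reaching them requires customization or digging through menus. A hedged sketch, with invented data and an invented "is_buried" flag standing in for whatever metadata would describe a command's real placement:

```python
# Invented example data: click counts plus whether the command is reachable
# only through customization or deep menus in the Office 2003 UI.
commands = [
    {"name": "Superscript",   "clicks": 95_000,  "is_buried": True},
    {"name": "Bold",          "clicks": 310_000, "is_buried": False},
    {"name": "Insert Symbol", "clicks": 40_000,  "is_buried": True},
    {"name": "Macro Editor",  "clicks": 900,     "is_buried": True},
]

# Commands people fight the UI to reach are candidates for promotion.
promotion_candidates = sorted(
    (c for c in commands if c["is_buried"] and c["clicks"] > 10_000),
    key=lambda c: c["clicks"],
    reverse=True,
)

for c in promotion_candidates:
    print(f"{c['name']}: {c['clicks']:,} clicks despite being hard to reach")
```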

The point of both of these exercises is to reshape the feature usage curve. We want to see more people using a broader set of tools and saving time because of it.

Of course it's true that we also use the feature usage data to figure out which commands need to be ultra-efficient and which can be taken out of the product entirely, but that's really the less lofty part of our goal. Success for the Office 2007 UI means that we broaden the Office 2003 feature curve, not that we match it.

Comments

  • Anonymous
    April 11, 2006
    You bring up an interesting point here. What commands have been taken out of the product as a result of this testing?

  • Anonymous
    April 11, 2006
Just wondering if you are able to determine, using your statistical models, whether a user meant to use the commands they used.

As a simple example, if a user clicked Bold and then either turned it off or clicked Undo, that may not have been the command they wanted.

Obviously the Office 2007 UI is designed for results, so users can get great-looking documents faster. Will your statistical model for Office 2007 collect data that will indicate how successful the new UI has been?

  • Anonymous
    April 11, 2006
    I have two questions:

    Has the 'Floatie' toolbar gotten a declassified name?

And, could you possibly make it customizable? I know you want this new UI to not auto-customize, but I am pretty sure that I am never going to use three of those buttons, so could you make it possible to customize that? I like the QAT, but I want something like it that will appear when I highlight text.

  • Anonymous
    April 18, 2006
Exactly: a smart floater that guesses, based solely on my prior usage, what commands I will use, but always retains a consistent order to the buttons (e.g., I often move really fast to select a tool and I don't want the buttons changing order, since after a while there will probably only be five or so actions each time, and these would only jockey for position statistically). Also, make all this editable so I can hard-code my top items easily.

  • Anonymous
    June 01, 2006
The best text I've read about this problem.

  • Anonymous
    October 14, 2006
    PingBack from http://pschmid.net/blog/2006/10/14/66

  • Anonymous
    September 26, 2007
    PingBack from http://www-etud.iro.umontreal.ca/~rivestfr/wordpress/2007/09/10/memo-sur-ms-office-2007/

  • Anonymous
    August 26, 2008
    PingBack from http://p1uton.ru/2008/08/27/grading-on-the-curve/

  • Anonymous
    September 21, 2008
    PingBack from http://alsedi.com/blog/zachem-pridumyvali-novyj-interfejs-microsoft-office/