Interesting, if a bit old (April 2004?), article... One bit I strongly disagree with:
It’s not something every programmer can learn. Most programmers don’t have any aptitude for UI design whatsoever. It’s an art, and like any art, it requires innate ability. You can learn to be a better writer. You can learn to be a better illustrator. But most people can’t write and can’t draw, and no amount of practice or education is going to make them good at it. Improved, yes; good, no.
You can learn to write, and you can learn to draw. Plenty of people do. And I'm pretty sure that you can learn to design good UIs too.
Sure, having an innate ability for it helps, but it's not a sine-qua-non.
I agree with you. More often, what I've seen is that engineers aren't interested in it. It is tedious in ways that are not interesting, often involving working with non-technical people.
Most engineers I've worked with find things like "yes, it really matters that those labels are lined up pixel perfect" to be annoyances at best. We tend to work at a deeper level, thinking behind the screen, so we don't often see the screen.
Most engineers I've worked with find things like "yes, it really matters that those labels are lined up pixel perfect" to be annoyances at best. We tend to work at a deeper level, thinking behind the screen, so we don't often see the screen.
This statement suggests to me that you missed the entire point of Gruber's essay. To wit:
This idea, that the hard work of development is in building the underlying foundation, and that the easy part is writing a “GUI wrapper”, has been the Linux/Unix way all along...
UI development is the hard part. And it’s not the last step, it’s the first step.
I'm with Gruber. Good UI is not aligning pixels after the fact. Aligning pixels is to usability as formatting source code is to programming. I abhor the idea that programming is a "deeper level" than usability. In fact, it's the other way around: Usability is a deeper level than programming.
Let me give you an example. Dave Winer had to edit a lot of hierarchical text in the form of Pascal programs. He then got the idea of editing hierarchical text in the form of outlines, and an entire category of software was born. That category was later subsumed by other programs, to the point where nearly everything we use today has some sort of outline editing capability.
UI design was not arguing about which symbols represent collapsed and expanded headlines, or whether the target of dragging an outline should highlight the parent or highlight the space between elements where the dragged outline will go.
UI design was thinking up the idea of editing outlines instead of editing characters and words.
p.s. That being said, I agree with you as well. Many so-called UI designers don't get Gruber's point either, and they will natter endlessly about pixels and colours and CSS grids while having nothing but tired cliché opinions about the actual mental model and functionality of software.
When a "designer" is really a stylist, it is very tiresome to work with them. Since they have nothing else to contribute, they become infuriating nitpickers over trivia just as you suggest.
This statement suggests to me that you missed the entire point of Gruber's essay.
You are right that I made my point overly trivial. It was supposed to be the extreme case of understanding the importance of UI design: how it looks and works is very important.
I've designed many interfaces, but I am not a designer. Understanding the user, how they use the system, and how the system needs to react to that use is what matters.
You can build a great UI without being pixel-perfect, but, in my experience, engineering teams who fight pixel-perfect will fight the effort that it takes to understand what a great UI for their system is.
It's not so much that it's not interesting, or that involves working with non-technical people, or that it requires pixel perfect alignment, or that the screen isn't a deep enough level. It's that it is all about working with other people as human beings. It's about empathizing with people who are different than you are, learning to see through their eyes, giving them what they need, and not getting any payoff in terms of interesting technical problems to solve. It's probably the exact opposite of what draws most people, myself certainly among them, to programming in the first place.
That said, I also agree that pretty much anyone can learn to be a good UI designer, just as anyone can learn to draw well. As in any field, talent helps and the very best people will always be the ones with talent, but it is much more important to want it enough to put in the necessary work. If you are a programmer who only cares about technical mastery then you will never be a good UI designer, and, as you said, there are a lot of programmers out there who are mostly interested in technical mastery.
I agree. However, if you won't do the easy stuff (which is critical to making a good interface), how likely are you to do the hard stuff (understanding the user to know how they are going to use the software)?
I really wish I had some form of artistic skill. I've spent a lot of effort over the years trying to learn how to draw better, how to design user interfaces, how to make things look good on a screen. And still, I'm terrible at it. Worse, I suspect I can continue on this path indefinitely and I'll never be considered "good" at it.
That's the point he's making in that sentence you quote. You can teach yourself to do it, but you can't teach yourself to be good at it.
There is at least one glaring flaw in his argument. He implies you can't bolt a good UI onto open source software after finishing the core functionality, yet the article revolves around CUPS, which Apple themselves took and bolted a good UI onto. Strangely, he makes no mention of this fact.
Apple also hired the author and now own the copyright to CUPS, which is still Free Software (http://www.cups.org/). It's worth noting, for those who weren't early adopters of Mac OS X, that printer compatibility sucked pretty hard before the adoption of CUPS, since Mac OS X was both new and unpopular at the time. Is it technically "usable" if it doesn't work? Apple clearly saw the benefit of jumping on the bandwagon of an open standard, and this let them concentrate on other things like the UI. The same could be said for much of Mac OS X.
Both the positive OSS examples he mentions, Firefox and Camino, were likewise primarily GUIs bolted onto the Mozilla core after the fact.
Gruber is mostly right about usability being hard and about it being a skill, but he generally doesn't get Open Source, nor Free Software, nor Unix, even though Apple now dabbles to varying degrees in all these things. And his lack of insight shows. How many hackers would agree that "More often than not, you get what you pay for" holds true for software?
But the 20:20 hindsight award goes to the second-to-last paragraph:
"It’s easy to ridicule the estimated 2006-or-2007 ship date for Longhorn, the next major release of Windows. But do you doubt for a moment that Longhorn will provide more improvements from Windows XP than desktop Linux will gain during the same period?"
I agree that you can find a way to bolt good UI over anything, much as we've done amazing things with the "stateless" web. But there is a large truth underlying his argument, which is that you will have much better results if you design for usability first not last, and also that the amount of work required to make good usability can be an order of magnitude more than the amount of work required to build a raw API.
My big takeaway when I first read it was that you can't think that 80% of the job is the implementation and that the UI will be 20% of the work to be done after you hammer out its functionality.
That the UI is important and hard work is a standard opinion for Gruber and Apple fans generally (myself included).
This particular article is about how Open Source software cannot achieve this, mostly due to limits placed on it by Gruber's own stereotypes. Why can't it be commercial? Why can't it hire UI guys with training and insight? In fact it does both these things.
I think you might have the order backwards. Instead, it goes something like this, for Apple:
1. Design a printer interface that works.
2. Write the back-end, or, to the extent that it's possible, adopt an existing back-end and fit its features to the designed interface; add any required features to the back-end that it doesn't already include.
This is an extremely different approach than the method Gruber is criticizing:
1. Make a super awesome printer framework that does everything anyone could ever want it to do.
2. Figure out how to expose all those features in a user interface.
Empirically, the former method works much better than the latter. And it's basically the same method Apple used with Safari and iTunes as well, or for that matter OS X as a whole.
I really don't think it's that simple. Things I believe but can't prove:
* Open Source Usability, just like OS security, engineering, marketing, QA, translation, installation, update schedule, funding model etc. etc. will be different from the closed source analog. Doesn't make either "better" a priori, but does mean you can't transplant one to the other wholesale and have it work the same, nor can you easily compare one against the benchmark of the other.
* That Open Source Usability will in fact follow your two steps, with a crucial difference: it will still involve super awesome frameworks that do everything, but that will be followed by figuring out how to expose different subsets of those features in multiple UIs. A potential case study: the UI horror that is Evolution (because it tried to copy the proprietary Outlook) spawning multiple new frontends, like the Pimlico suite that exposes just contacts or calendar, and the new Anjal mail client.
* Corollary of the above: if you're copying a proprietary UI wholesale then you're probably doing it wrong. Steal the ideas, yes, but re-interpret them in a way that makes sense for open source platforms.
* iTunes is a usability disaster, since commercial and proprietary pressures have forced a CD player, CD ripper, music library, internet radio, online music store, party music player, network jukebox, video player, iPod manager and god knows what else into a single app. This is actually the antithesis of Apple design principles, but is forced on them by the need to push their stuff onto Windows. If you're working on a music player for the Linux desktop, for the love of god, stop copying iTunes.
* Safari/Webkit is what it is because it a) started with code from KHTML and Mozilla, and b) continues to be an open source project used as a component in multiple applications, including two completely different browser UIs from Apple alone: Safari and Mobile Safari, the latter being the far more impressive UI. (Safari itself is mostly just minimalist, which is why I use it, with a few odd gee-whiz features thrown in.)
* There's plenty of good software on Mac OS X that adds well thought out Cocoa UI onto a bunch of existing open source frameworks and libraries. Writing stuff from scratch is stupid when the libraries are there for the taking.
* OSS and Usability focus are not in conflict. When Panic were so anal about the UI of their Mac OS X app Coda that they decided to bundle a programming font, they based it on an open source one. This is not a zero-sum game. Mac OS X is shot through with open source from top to bottom, 1st to 3rd party.
* Proprietary pressures can corrupt user interfaces (see the iTunes example above). Why doesn't Internet Explorer have spell-checking? Because it threatens Word. Why doesn't Microsoft Office warn you that it's going to remove all the formulas from your ODF spreadsheet? Because it threatens their standards hegemony. Some proprietary software constantly gets in your way because it is trying to impose its will on you. User-hostile software is not known for good UI.
Having been a Mac IT consultant/monkey for a number of years in a past life I'd argue the printing system in Mac OS X is still flaky, annoying and poorly designed... but to each his own. :-P
My take is this: the general culture of Linux is that there are, and should be, multiple ways to do anything; versatility is encouraged, and placing significant emphasis on one way over another is frowned upon.
Given that this attitude is pervasive amongst all aspects of Linux, this is like a sausage enthusiast trying to get his neighbor excited about his vacation photos from a kill-floor when the neighbor just wanted to stop over and have a wurst.
"But do you doubt for a moment that Longhorn will provide more improvements from Windows XP than desktop Linux will gain during the same period?"
If I were reading that at the time the article was written I would probably be nodding my head in agreement. But I think the latest Ubuntu is generally much more usable than Vista.
My 70 year old mom is sick to death of Vista, I'm going to be installing Ubuntu for her in a couple weeks, so I guess I'll get the chance to find out if I'm right.
In what world does the "archetypal nontechnical user" have two computers connected by Ethernet?
Ignoring the difference between Ethernet and 802.11b/g/n, this was commonplace even when the article was written. Dad has a printer hooked up to his desktop. Aunt Tillie borrows Mom's laptop to look up a recipe on foodnetwork.com. Voilà, two computers connected over a local network, and Aunt Tillie wants to print out her recipe on Dad's printer.
In regard to the printer sharing problem, it's pretty much solved today thanks to zeroconf (marketed as Bonjour by Apple). It makes addressing computers and advertising services in the local area network much easier. I wish more developers would take advantage of zeroconf.
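For the curious, what zeroconf service discovery actually does on the wire is pretty simple: a host multicasts a DNS PTR query for a service type like `_ipp._tcp.local`, and printers on the LAN answer with their names. Here's a minimal stdlib-only sketch that builds such a DNS-SD query packet by hand, just to show the mechanics; in real code you'd use a library like python-zeroconf or Avahi rather than crafting packets yourself.

```python
import struct

def mdns_ptr_query(service: str) -> bytes:
    """Build a one-question mDNS/DNS-SD PTR query packet.

    service: a DNS-SD service type, e.g. "_ipp._tcp.local"
    (the type network printers advertise under).
    """
    # DNS header: ID=0 (the mDNS convention), flags=0 (standard query),
    # QDCOUNT=1, and zero answer/authority/additional records.
    header = struct.pack("!HHHHHH", 0, 0, 1, 0, 0, 0)
    # QNAME: each dot-separated label is length-prefixed,
    # and the whole name is terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in service.split(".")
    ) + b"\x00"
    # QTYPE=PTR (12), QCLASS=IN (1).
    question = qname + struct.pack("!HH", 12, 1)
    return header + question

query = mdns_ptr_query("_ipp._tcp.local")
```

Sending this 33-byte datagram to the mDNS multicast group (224.0.0.251, port 5353) asks every host on the local network to announce its matching services; that's essentially all Bonjour does when your Mac's print dialog magically lists Dad's printer.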
These paragraphs ought to be required reading for anyone calling themselves a programmer:
Oh, I see: the problem is that Linux developers are just so fucking smart that they overlook the problems faced by “dumb users” such as dear old A.T. But everything will fall into place with just a little attitude adjustment.
Well, allow me to retort.
UI development is the hard part. And it’s not the last step, it’s the first step. In my estimation, the difference between:
* software that performs function X; and
* software that performs function X, with an intuitive well-designed user interface
isn’t just a little bit of extra work. It’s not even twice the work. It’s an entire order of magnitude more work. Developing software with a good UI requires both aptitude and a lot of hard work. Raymond acknowledges neither.
It’s not something every programmer can learn. Most programmers don’t have any aptitude for UI design whatsoever. It’s an art, and like any art, it requires innate ability. You can learn to be a better writer. You can learn to be a better illustrator. But most people can’t write and can’t draw, and no amount of practice or education is going to make them good at it. Improved, yes; good, no.
Conversely, some people who are good UI designers aren’t programmers. But the rock stars are the guys who can do both, and they are few and far between.
If there’s a glib, nutshell synopsis for why Linux desktop software tends to suck, it’s this: Raymond and his ilk have no respect for anyone but themselves.
They have no respect for the fact that UI design is a special talent.
They have no respect for the fact that good UI design requires a tremendous amount of time and effort.
And, most importantly, they have no respect at all for real users. The idea that GUI software needs to be designed for “dumb users” — which is Raymond’s own term, and an indication of what he really means when he refers to dear old A.T. — is completely wrong.
Great software developers don’t design for morons. They design for smart, perceptive people — people just like themselves. They have profound respect for their users.
Show Me the Money
This is not a Mac thing. Well, it is, but what I mean is that it’s not just a Mac thing. There are a lot of developers creating good UI’s for Windows, for the web, and for other platforms. And, yes, there are even developers creating great UI’s for software running on Linux.
But the undeniable truth is this: successful open source software projects tend to be at the developer-level, not the end-user level. E.g., successful open source projects have programming interfaces, not user interfaces. Apache, Perl, Python, gcc, PHP, the various SQL databases. The list of fantastic open source developer software is long.
The list of fantastic open source GUI software is short. This is not a function of chance.
The open source revolution has done nothing to change the fact that the best-designed, most-intuitive user interfaces are found in closed-source commercial software.
I’m not saying all commercial software is well-designed, nor that all free software is poorly-designed — what I’m saying is that software that does provide a well-designed, intuitive interface tends to be closed and commercial. The bigger the software, the more likely this is to be true.
The most obvious explanation is that the open source model does not work well for producing software with good usability. Everything in Raymond’s article hints at this truth. (Not to mention MPT’s aforementioned essay, which addresses this directly.)
Good user interfaces result from long, hard work, by talented developers and designers.
It’s not something every programmer can learn. Most programmers don’t have any aptitude for UI design whatsoever. It’s an art, and like any art, it requires innate ability. You can learn to be a better writer. You can learn to be a better illustrator. But most people can’t write and can’t draw, and no amount of practice or education is going to make them good at it. Improved, yes; good, no.