Expanding the role of wireframes

I’ve done a fair bit of hand-wringing about wireframes myself (see here and here), so I read Travis LaFleur’s Toward a More Expansive View of Wireframes with great interest. I really like Travis’s approach of expanding wireframes beyond their traditional use:

Rather than thinking of the wireframe as a low-fidelity, grayscale snapshot of what a page will eventually look like, coming further and further into focus as the design is refined, we can embrace a broader view of the wireframe as a thematically rich conceptual model — one that is now depicting page-level details, reinforcing previous models of the system as a whole.

Click through to his post for some examples.

The core elements of healthy, productive teams

Charles Duhigg has a long feature in the New York Times called What Google Learned From Its Quest to Build the Perfect Team. It includes a summary of really fascinating research on the core elements of a healthy, productive team:

As the researchers studied the groups, however, they noticed two behaviors that all the good teams generally shared. First, on the good teams, members spoke in roughly the same proportion, a phenomenon the researchers referred to as “equality in distribution of conversational turn-taking.” On some teams, everyone spoke during each task; on others, leadership shifted among teammates from assignment to assignment. But in each case, by the end of the day, everyone had spoken roughly the same amount. “As long as everyone got a chance to talk, the team did well,” Woolley said. “But if only one person or a small group spoke all the time, the collective intelligence declined.”

Second, the good teams all had high “average social sensitivity”—a fancy way of saying they were skilled at intuiting how others felt based on their tone of voice, their expressions and other nonverbal cues. One of the easiest ways to gauge social sensitivity is to show someone photos of people’s eyes and ask him or her to describe what the people are thinking or feeling—an exam known as the Reading the Mind in the Eyes test. People on the more successful teams in Woolley’s experiment scored above average on the Reading the Mind in the Eyes test. They seemed to know when someone was feeling upset or left out. People on the ineffective teams, in contrast, scored below average. They seemed, as a group, to have less sensitivity toward their colleagues.

So, effective teams are built on equality and empathy. Seems terribly obvious, of course, but I feel like very few teams actually live these values. We can do better.

Build software to feed the world, not eat it

Kevin Slavin’s Design as Participation is one of those articles that stays with you for days. There are multiple ways to read it, but I view it as a thoughtful critique of my primary field of focus: user-centered design (UCD). There have been other discussions on this topic, most notably Cennydd Bowles’s excellent Looking Beyond User-Centered Design, and Mike Long’s Stop Designing for Users. Kevin’s is a worthy addition to the debate.

Kevin’s main thesis is that UCD is selfish (since it puts a user at the center of everything), and we should instead see users as active participants in a design:

Broadly, UCD optimizes around engagement with the needs, desires and shortcomings of the user and explores design from the analysis and insight into what the User might need or want to do. Simply, it moves the center from the designer’s imagination of the system to the designer’s imagination of the user of the system.

But we are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engages with people, resources, civics, communities and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?


Some contemporary work suggests that we are not only designing for participation, but that design is a fundamentally participatory act, engaging systems that extend further than the constraints of individual (or even human) activity and imagination. This is design as an activity that doesn’t place the designer or the user in the center.

If this all seems overly academic, fear not, practitioners! Kevin shows several examples of “Design as Participation” in his essay, and also ends with this call to action:

A new generation of designers has emerged, concerned with designing strategies to subvert this “natural default-setting” in which each person understands themselves at the center of the world.

These designers do this by engaging with the complex adaptive systems that surround us, by revealing instead of obscuring, by building friction instead of hiding it, and by making clear that every one of us (designers included) are nothing more than participants in systems that have no center to begin with. These are designers of systems that participate – with us and with one another – systems that invite participation instead of demanding interaction.

We can build software to eat the world, or software to feed it. And if we are going to feed it, it will require a different approach to design, one which optimizes for a different type of growth, and one that draws upon – and rewards – the humility of the designers who participate within it.

It’s always hard to see one’s views challenged, but Kevin does it in the best possible way here. He understands UCD and why it came about, he presents a compelling argument about its issues, and then he shows us how we can do better.

I don’t think this means the end of UCD (or even that we should stop using its basic methods like personas and usability testing). But I do agree that we need to shift our thinking so that we’re less concerned about the success of an individual user (or groups of users), and more concerned about how different systems interact with each other. Alan Cooper touches on this topic in his wonderful essay The Edges:

What each organization has to do today is to regard the edges of its products with as much diligence and attention as they give the center. The quality of both their outside system connections (known as application program interfaces, or APIs) and their user interfaces demand levels of expertise and investment that have historically fallen short.

As nebulous as Kevin’s idea of “building software to feed the world” is, I like the sound of it. I like the hope and the desire to do good in the world that it communicates. We need to make that idea concrete in our daily design work, and I don’t think we quite know how to do that yet. But every practical mission starts with a grand vision, and I quite like this one.

Meetings and email: maybe they’re not so terrible after all

There are two things everybody in business hates (or at least says they hate): meetings and email. So the past few years have seen a great many startups that try to re-invent, revolutionize, and strategerize the crap out of meetings and email. However, recently we seem to have come to a disappointing realization: meetings and email are the worst ways to get things done, except for all the other ways.

In Meet Is Murder Virginia Heffernan goes deep on the topic of meetings: why we hate them, what people have tried differently, and how we just can’t seem to quit them. Her resigned conclusion hints at what really might be the source of our meeting hatred:

What’s so bad about meetings, after all? At bottom, they are nothing but time with your fellows. Which suggests that hating meetings might be akin to hating traffic, families or parties—just another way to express our deep ambivalence about that hard fact of existence: other people.

Meanwhile, in Slack, I’m Breaking Up with You Samuel Hulick shares his dismay with Silicon Valley’s latest darling company. These kinds of articles are inevitable at this point—we’re almost certainly approaching 6 PM SVT (Silicon Valley Time) for Slack. Anyway, Samuel wrote a break-up letter to Slack, but at times it reads more like a subtle “Please come back!” letter to email. For example:

While it’s true that email was (and, despite your valiant efforts, still very much is) a barely-manageable firehose of to-do list items controlled by strangers, one of the few things that it did have going for it was that at least everything was in one place.

And this:

When work gets done over email, there’s a general expectation of a response buffer of at least an hour or two. In you, though, people can convene and decide on anything at any time.

Also this:

When I started feeling like our relationship was getting to be just a little too much, I decided to take a few days off. That was never a problem when I was with email—I’d just fire up a vacation autoresponder and be on my merry way.

I’ve always liked email (which, sorry, I know, is like a Portlander saying “Oh you just found out about Kale? I’ve been eating Kale all my life!”), and felt that the bigger problem is not the system but the way we deal with it. I tried Google Inbox and that Mailbox thing that Dropbox bought and shut down, but I could just never get into a groove with a system that tries to sort my email for me. Instead I just do something that works really well for me: I read every email, and file each message in the appropriate place when I’m done dealing with it. That’s it.

I’m also not as against meetings as I used to be. My rule there is equally simple: always walk out of a meeting with an artifact. This could be a whiteboard sketch or a note about a thing you need to go research—it doesn’t really matter. Just walk out of there with something. Meetings should focus on facilitating the things that meetings are good at: collective thinking. Meetings that energize me are the ones where most people are standing, working together on a common goal. From customer journey workshops to design studio sessions to analyzing usability testing results, there are plenty of useful ways to spend our time in meetings. That’s my only criterion for a good meeting: make progress.

These guidelines are probably way too simple for the majority of businesses and people. But I do think that when we try to “reinvent” meetings and email we’re trying to solve a people problem with technology, and that’s just never going to work. Technology can help, for sure, but at its core we need to figure out why we hate email and meetings, and then fix that first. And I think the main problem with meetings and email is that we don’t spend enough time taking personal responsibility to make them more effective. Until we stop trying to offload our personal responsibility on the shoulders of technology, nothing will change.

Quote: Jon Kolko on product development that focuses on people first

The traditional way of building software focused on requirements, driven by a competitive view of the market, and responded to perceived needs with additional features and functions. Our new way of developing products focuses on people first: on spending lots of time with them in order to build an almost intuitive sense for their emotional journey. We design with that journey and those emotions in mind, and as a result, we can produce great products that people love.

—Jon Kolko, Understanding our product design strategy and ecosystem.

New e-book: Practical User Research for Enterprise UX

I worked with the wonderful folks at UXPin to write a short e-book on how to overcome some of the challenges of doing user research in large organizations. From the introduction:

Once a company grows over a certain size, the internal politics and number of people involved in every decision increase so much that it becomes virtually impossible to stay focused on fulfilling user needs and business goals. Instead, the focus turns inward to the opinions and whims of individuals inside the company. Add the complexity of designing B2B products to the mix and, well, things go bad very quickly.

When an abundance of stakeholders are involved in a product, user research is the only way to focus a whole team on the real needs and goals required for success. It’s also the only way to get people out of the habit of thinking “Well, I want this, so everyone else must want it too”—a view that I find much more common in enterprises than in smaller organizations.

If that sounds familiar to you, you’ll hopefully find the e-book useful. I discuss why it’s often so hard to get support for user research in enterprises. Then I provide some advice on how to sell the value of user research. Finally, I offer some practical tips for addressing the subtle differences of conducting research in larger organizations with users who aren’t buyers.

You can download the (free) e-book here: Practical User Research for Enterprise UX.

Why everyone should do usability testing (even you)

I vividly remember the first usability test I attended. I was a brand new employee at eBay, and I walked into a dark observation room with no idea what to expect. I came out of that room 60 minutes later with the strangest mix of emotions—heartbroken that our product clearly had usability issues that made users incredibly frustrated, but also relieved and excited that we now had the information we needed to fix those issues. I became a usability testing convert for life, and have been making it a part of my product design process ever since.

I’m deeply passionate about this methodology and how it makes us better designers (and improves the experiences of our users), so I don’t think it should be something that we reserve only for the “highly trained” to do. Usability testing is something all of us should do as a regular part of our design process. But that doesn’t mean it’s straightforward—there are many pitfalls and ways to generate bad data with usability testing. So I wanted to write a brief introduction to the methodology and why it’s so important, as a foundation for people who haven’t had training in the method but would like to make it part of their process1.

So, let’s start at the beginning.

Usability testing is a very powerful (and shamefully underused) user research methodology—when it is used correctly. In fact, usability testing is probably the only method that can be relied on to consistently produce measurable improvements to the usability of a product. Bruce Tognazzini once said:

Iterative design, with its repeating cycle of design and testing, is the only validated methodology in existence that will consistently produce successful results. If you don’t have user-testing as an integral part of your design process you are going to throw buckets of money down the drain.

But that all depends on the all-important “when it is used correctly” caveat. To make sure we do that, we need to understand when to do usability testing, and what to use it for.

When to do usability testing

To answer the “when” question I need to bring back the three buckets of user research I first discussed in Making It Right:

  • Exploratory Research is used before a product is designed, to uncover unmet user needs and make it easier to get to product-market fit. Ethnography and contextual inquiries are the most-used methods in this bucket.
  • Design Research helps to develop and refine product ideas that come out of the user needs analysis. Methods include traditional usability testing, RITE testing (rapid iterative testing and evaluation), and even quantitative methods like eye tracking.
  • Assessment Research helps us figure out if the changes we’ve made actually improved the product, or if we’re just spinning our wheels for nothing.

Usability testing is best used during the design research phase of a product. Ideally you’ll have an interactive prototype or some other lightweight interface to work with. It needs to be detailed enough to make sense to a user, but not so detailed that you’re reluctant to make changes based on feedback. Of course, you can also do usability testing on an existing live product, as long as the team has an appetite to make changes based on the insights that come back.

Usability testing shines during the design research phase since it plays on its strengths as a way to uncover the issues with an existing product or prototype. Trying to shoehorn usability testing into one of the other user research phases leads to trouble, since the nature of the data you get from it simply won’t help you make good decisions (i.e., don’t use it to try to decide what products to build, or if something you built objectively improved user satisfaction).

What to use usability testing for

This leads into what usability testing is good at: refining a product. It’s not good at finding out what to build (unless it’s combined with an ethnographic component). To put a finer point on what usability testing is most useful for, here’s a much-simplified diagram to put it in context with some other research methods.

[Diagram: what usability testing is for]

We use methods like analytics and surveys to understand what happens in the product. We use analytics to figure out what users do, and we use surveys or other interview techniques to figure out what they say about the experience. The problem is that this doesn’t help us understand why something happens, and without that information we won’t be able to fix any of the problems we come across. That’s where usability testing comes in.

What makes usability testing so perfect for understanding the issues with an interface is that it is an observational research method. It’s not about asking people what they think about an interface. It’s about showing them an interface, giving them tasks to do in that interface, and then watching them as they go through those tasks. We can ask them questions about the experience, but that’s just to provide context.

At its core, usability testing means that we observe users as they make their way around an interface, and use that data to understand what issues we need to fix. So, for example, if we see in our analytics that there is a large drop-off in our checkout flow, usability testing can help us figure out why that drop-off happens, and how to fix it.
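As a rough illustration of the analytics side of that equation, here is a minimal sketch (the funnel step names and user counts are entirely made up) of finding the step in a checkout flow with the biggest drop-off. Analytics can tell us where users leave; the usability test then tells us why:

```python
# A hypothetical checkout funnel pulled from an analytics tool.
# Step names and user counts are invented for illustration.
funnel = [
    ("View cart", 1000),
    ("Enter shipping info", 620),
    ("Enter payment info", 275),
    ("Confirm order", 240),
]

# Drop-off rate between each step and the next one.
drops = {
    step: 1 - next_users / users
    for (step, users), (_, next_users) in zip(funnel, funnel[1:])
}

# The step with the biggest drop-off is the first candidate to
# focus a usability test on.
worst_step = max(drops, key=drops.get)
```

In this made-up data, “Enter shipping info” loses over half its users, so that’s the screen you’d put in front of participants first.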

Haters gonna stop hating

I’ve seen usability testing abused in several ways that have ended up giving it a bad name in some circles. Here are some guidelines to keep in mind to help break through those prejudices.

First, don’t confuse usability testing with focus groups. They are very different methodologies, and they are certainly not interchangeable. Focus groups are good for some marketing purposes—to understand brand sentiment or positioning. They are a terrible way to get feedback on an interface. Usability testing is so good at what it does precisely because it is a 1:1 methodology. There is no groupthink, no way to get influenced by other users. In that sense, it is a controlled environment. Focus groups are anything but.


Second, remember the golden rule that usability testing is about observation. It can’t tell you which interface people like more than another, so don’t try to use it to settle those disputes. It’s the wrong question to answer anyway. It doesn’t matter what users like—it matters what they can use effectively to accomplish their goals. So usability testing is not “lightweight A/B testing”, as I’ve heard it described. It is meant to be part of an ongoing iterative design process with the goal of improving the product incrementally.

Finally, remember that you don’t need to be in a large organization or have tons of money to do usability testing. This is a methodology that scales really well. For startups that just have an afternoon to get some feedback, you can take some paper prototypes to a coffee shop. For large companies that need to convince a bunch of stakeholders to make changes, you can run a series of formal usability testing sessions. Whatever works—and don’t be mistaken, every little bit helps.

Go and make it so

I want to end this introduction with a small call to arms. Usability testing is an inherently uncomfortable methodology because it assumes and embraces the fact that your product isn’t perfect. That’s a difficult thing to make peace with—especially as a designer. But taking that position is the only way your product is going to get better. You can’t fix something that you don’t think is broken. Clayton Christensen made a similar point in The Innovator’s Dilemma. He calls this mindset “discovery-based planning”:

Discovery-based planning suggests that managers assume that forecasts are wrong, rather than right, and that the strategy they have chosen to pursue may likewise be wrong.

Investing and managing under such assumptions drives managers to develop plans for learning what needs to be known, a much more effective way to confront disruptive technologies successfully.

Or to repeat one of my other favorite quotes: “Design like you’re right, listen like you’re wrong.” Usability testing gives us a proven process to understand what we got wrong so we can get more of it right. That makes it a methodology we should all invest in more.

  1. Who knows, maybe I’ll turn it into a longer, very practical series if there’s interest. 

Why it’s more difficult to prioritize features than problems

Daniel Zacarias’s Moving from Solutions to Problems is a must-read for all product managers, and anyone who’s involved in product prioritization. Daniel’s main thesis is that prioritizing problems results in much better products than prioritizing features—and I wholeheartedly agree with him. He addresses many issues with focusing on features, but the one that really resonated with me is that it’s much harder to prioritize features:

Products and features are versions of a solution to a problem. What this means is that by thinking in terms of the former, the problem they’re solving gets more difficult to grasp. Either because it’s a non obvious problem, or the product/feature are poor solutions for it.

In practical terms, this makes it much harder to prioritize a list of features than a list of problems. There are added layers of indirection that make us evaluate priorities in a different way. It gets difficult to determine the intent and expected impact from a feature. On the other hand, a problem (“low number of transactions”) can more easily lead to an objective (“increasing number of transactions per customer per month by 30%”).

Quote: Chloe Green on the ROI of user experience

Numerous studies have found that every dollar spent on UX brings in between $2 and $100 dollars in return. Forrester revealed that ‘implementing a focus on customers’ experience increases their willingness to pay by 14.4 %, reduces their reluctance to switch brands by 15.8 %, and boosts their likelihood to recommend your product by 16.6 %’.

—Chloe Green, The business of user experience: how good UX directly impacts on company performance.

The benefits of prioritizing customer retention over revenues

Horace Dediu has a characteristically astute analysis of Apple’s business model in Priorities in a time of plenty. The part I’m particularly interested in is where he discusses how Apple prioritizes their product roadmap:

Conventionally, product development is filtered through a sieve of metrics, market sizing and impact on top/bottom income lines. These “financial” measures of success are considered prudent and optimized for return on equity (also known as the maximization of shareholder returns).

But this can be a toxic formula. The financial optimization algorithm always prioritizes the known over the unknown since the known can be measured and is assigned a quantum of value while the unknown is “discounted” with a steep hurdle rate, and assigned a near zero net present value. Thus the financial algorithm leads to promoting efficiency at the expense of creation. Efficiency may be the right priority when times are difficult and resources are scarce but creativity is the right priority in a time of plenty. And abundance is what being big is all about.

The difficulty is that creativity is hard to quantify, and therefore hard to measure, and therefore hard to prioritize—particularly in large enterprises. Horace speculates that “the creation and preservation of customers” is Apple’s primary focus (above revenues), which changes the way they prioritize:

Seen this way each centralized resource allocation question can be assumed to be prefaced with “In order to create/preserve customers should we…?”

This leads to answers quite different from questions that start with “In order to sell/profit more should we…?”

Much to digest here, particularly around the role of managers to identify the right balance for prioritization, and the right metrics to measure if your primary goal is, in fact, “the creation and preservation of customers”.
