Links and articles about technology, design, and sociology. Written by Rian van der Merwe. Follow @RianVDM
I’ve been thinking about the topic of my latest A List Apart column for a while, but I was just too scared to write it. I mean, what right do I have to talk about work and privilege? But I ran the idea by my amazing editor, and since she was really supportive and enthusiastic about it, I went for it.
So I wrote Why?:
Why we work—and what kind of work we do—is a function of our privilege and our history as much as it is a function of our choices and our dedication.
I hope you enjoy reading it, and take something from it. This one took a while to get right.
Aaron Shapiro makes some interesting observations in The Next Big Thing In Design? Less Choice:
Technology has revolutionized the way we live our lives and do business, but it has done a terrible job reducing the stress of so many decisions. Industry by industry, great digital design has eliminated middlemen from the economy and put users in control, making it fast and easy for us to determine what we want and purchase it directly, whether on a computer or over a phone. Now, with unlimited opportunities for decision-making, we have essentially made ourselves the middlemen in our own lives.
The enjoyment, and even fetishization, of the beautifully designed experiences we rely on to make these decisions has distracted us from our original goal of simplifying our lives. We’ve forgotten that the ultimate purpose of an interface is to make things simpler.
That last sentence is interesting. “We’ve forgotten that the ultimate purpose of an interface is to make things simpler.” I understand and agree with the sentiment, but the statement got me thinking about how I would define the purpose of a user interface.
In the context of modern UI design I would probably want to adjust that statement a little bit to say that, “The ultimate purpose of an interface is to enable users to accomplish their goals within a system easily, in a way that also fulfills pre-defined business goals.” I’m sure there’s lots to argue about and disagree with in that statement as well, but it’s an interesting thought process to go through.
The rest of the article goes a little too deep into #NoUI territory for me. I’m more with Cennydd on that one:
This is the world desired by some #NoUI adherents. It’s not a world I recommend. — Cennydd Bowles (@Cennydd), April 21, 2015
But there are still some interesting examples. Well worth going through.
Chris Baraniuk looks at the futility of things like traffic signal buttons in Press me! The buttons that lie to you:
Some would call this a “placebo button”—a button which, objectively speaking, provides no control over a system, but which to the user at least is psychologically fulfilling to push. It turns out that there are plentiful examples of buttons which do nothing and indeed other technologies which are purposefully designed to deceive us. But here’s the really surprising thing. Many increasingly argue that we actually benefit from the illusion that we are in control of something—even when, from the observer’s point of view, we’re not.
I’m not usually one to freak out about kids and technology use, but Bruce Feiler makes some interesting points in Hey, Kids, Look at Me When We’re Talking:
Dr. [Clifford Nass, a communication professor at Stanford University] told me about research he was doing that suggested young people were spending so much time looking into screens that they were losing the ability to read nonverbal communications and learn other skills necessary for one-on-one interactions. As a dorm supervisor, he connected this development with a host of popular trends among young people, from increased social anxiety to group dating.
That’s pretty alarming.
In The Machines Are Coming, Zeynep Tufekci talks about the kind of tasks that are being automated by machines:
Today, machines can process regular spoken language and not only recognize human faces, but also read their expressions. They can classify personality types, and have started being able to carry out conversations with appropriate emotional tenor.
Machines are getting better than humans at figuring out who to hire, who’s in a mood to pay a little more for that sweater, and who needs a coupon to nudge them toward a sale. In applications around the world, software is being used to predict whether people are lying, how they feel and whom they’ll vote for.
This is not a new topic. Back in 2012, Kevin Kelly proclaimed in Better Than Human: Why Robots Will — And Must — Take Our Jobs:
It may be hard to believe, but before the end of this century, 70 percent of today’s occupations will likewise be replaced by automation.
At the end of last year Claire Cain Miller wrote for the New York Times that As Robots Grow Smarter, American Workers Struggle to Keep Up:
Although fears that technology will displace jobs are at least as old as the Luddites, there are signs that this time may really be different. The technological breakthroughs of recent years — allowing machines to mimic the human mind — are enabling machines to do knowledge jobs and service jobs, in addition to factory and clerical work.
Who knows whether this fear will turn into reality — there are plenty of counter-arguments as well (for example, Nicholas Carr offers a really interesting historical perspective in Should the Laborer Fear Machines?).
Still, I find the discussion fascinating — especially as it relates to the balance of power in workplaces. Tufekci continues:
Machines aren’t used because they perform some tasks that much better than humans, but because, in many cases, they do a “good enough” job while also being cheaper, more predictable and easier to control than quirky, pesky humans. Technology in the workplace is as much about power and control as it is about productivity and efficiency. […]
This is the way technology is being used in many workplaces: to reduce the power of humans, and employers’ dependency on them, whether by replacing, displacing or surveilling them.
Maybe that’s the real cause for concern here. Not that jobs might go away (although that’s certainly worrisome too), but that power will continue to shift to employers and away from employees.
Back in 2012 I wrote the following about a blind spot I’ve noticed in Agile development:
Problem solving involves not just iteration, but also lots of variation. This often requires time to get it wrong a few times, which doesn’t fit comfortably with the concept of release dates. See, the problem with integrating Agile and UX is not that designers want to hang on to “slow and heavy documents,” “big upfront design”, or whatever you want to call it. The problem is that each iteration further solidifies the chosen path, and there is no time to stop and ask if you’re going in the right direction.
All of that came flooding back when I read Jeff Patton’s Common Agile Practice Isn’t for Startups, in which he puts a slightly different spin on the issue that Agile is not very good at helping us figure out what to build. His solution is a product discovery process (something that’s obviously near and dear to my heart as well). He places the discovery process in the context of a different kind of velocity than is usually measured in Agile—trying to learn as much as possible about customers and the product:
There’s something very different about this process loop: the primary measure of progress during discovery isn’t delivery velocity, it’s learning velocity. And sadly, we can’t measure it in features or stories completed. And, even worse, we can’t plan two weeks of it in detail because what we learn today can and should change what we do tomorrow.
He goes on to describe a Nordstrom process:
Notice the Nordstrom Lab still uses time-boxes, 1 week in this case. But, they didn’t start the time-box by predicting how much they’d deliver, but with learning goals in mind. Then they iterated around the build-measure-learn loop as fast as they could.
The post is hard to quote from, so really, just go ahead and read it. It’s a very interesting approach to making discovery part of a regular Agile process.
Elea Chang takes on the “full-stack employee” idea in a great critique called The Full-Stack Employee and The Glorification of Generalization:
Hidden inside that “full-stack employee” manifesto is the idea that tech equals work and work equals life. Despite all the talk of learning and growing, the full-stack employee is primarily focused on conquering domains within the tech industry. But there have always been ways to impact the world outside the workplace. Unfortunately, the continuous pursuit of professional skillsets tends to diminish the boundaries between work and everything else, leaving you with less and less time to actually grow as a human being.
I’m very much in agreement with this. Many companies still go out of their way to reward people who work extra long hours, even if that comes at the expense of time spent with family (or, as Elea points out, volunteering outside of work).
Two recent articles made me think again about how weird journalism and publishing have become because of the internet and social media. In Instagram’s TMZ, Jenna Wortham describes a very successful celebrity gossip “site” (what should we call these things now?) that exists primarily on Instagram:
Angie explained to me that Instagram perfectly suited her vision for The Shade Room: image-centric and interactive. For her purposes, Instagram was the equivalent of WordPress. When she started the feed a year ago, her goal was to accumulate 10,000 followers in the first year. She accomplished that in only two weeks. Angie started by posting about people at the bottom of the celebrity hierarchy (minor reality stars, mostly) and worked her way up to bigger names, building her loyalties slowly. Eventually, readers started sending her tips and videos via Instagram’s direct-messaging feature. Now, The Shade Room has more than half a million followers on Instagram alone.
Of course, this “business” is one decision by Instagram away from total collapse, but for now it’s an amazing success story.
The second article continues the media’s fascination with Buzzfeed. From Adrienne LaFrance and Robinson Meyer’s long and very interesting The Eternal Return of BuzzFeed:
BuzzFeed is a successful company. And it is not only that: BuzzFeed is the rare example of a news organization that changes the way the news industry works. While it may not turn the largest profits or get the biggest scoops, it is shaping how other organizations sell ads, hire employees, and approach their work. BuzzFeed is the most influential news organization in America today because the Internet is the most influential medium—and, in some crucial ways, BuzzFeed demonstrates an understanding of that medium better than anyone else.
Culturally, economically, even politically: BuzzFeed is so influential because it is still in ascendance. We don’t yet know how big this publication will get, how sweeping and lasting its effects on the American media sphere will be. “We’re still really small,” Peretti insists. “You have Disney and Viacom and Time Warner—the really big media companies are giant compared to us.” But BuzzFeed’s growth has been relentless in recent years. It shows no signs of slowing. Peretti is deliberately and aggressively building his company to be big. “The Internet isn’t for small companies,” he said last year.
It’s hard not to admire the way Buzzfeed understands how the internet hive mind works. Let’s not forget that they were the first publication to figure out what the internet is really for.
Benedict Evans wrote a characteristically brilliant analysis in What does Google need on mobile? Here’s a taste of his conclusion about Google’s challenge going forward:
The key change in all of this, I think, is that Google has gone from a world of almost perfect clarity—a text search box, a web-link index, a middle-class family’s home—to one of perfect complexity—every possible kind of user, device, access and data type. It’s gone from a firehose to a rain storm. But on the other hand, no-one knows water like Google. No-one else has the same lead in building understanding of how to deal with this. Hence, I think, one should think of every app, service, drive and platform from Google not so much as channels that might conflict but as varying end-points to a unified underlying strategy, which one might characterize as ‘know a lot about how to know a lot’.
Don’t miss this article, the whole thing is great.
Marty Cagan continues his excellent product autonomy series by discussing what happens when teams get large enough to split up their code bases. In Autonomy vs. Ownership he describes his preferred way of dealing with the situation where a team needs a change in a different codebase to get one of their features implemented:
The alternative model is informally known as the “open source” model although to be clear this is not about open sourcing your code, it’s just called that because this is how much of the open source community operates. In this model, if the drivers team needs a change to the riders team’s code, then they could either wait for the riders team to do it, or they can actually make the change themselves, and then request that the riders team review the change, and include it if they’re okay with it (known as a “pull request”). This means that you are telling the software management system that you’ve made a change to the software, but the owner of that software needs to review the changes before they are actually approved and incorporated.
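Cagan’s description maps neatly onto an everyday git branch workflow. As an illustration only — the team names come from his example, but the repository layout, branch names, and commands below are my own assumptions — here is a minimal local simulation of that flow using plain git, with the “pull request” reduced to its essence: the owning team reviews and merges a branch proposed by another team.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# The riders team's repository, with one seed commit.
mkdir riders-src && cd riders-src
git init -q
git config user.email riders@example.com
git config user.name "Riders Team"
git checkout -q -b main
git commit -q --allow-empty -m "riders: initial commit"
cd ..

# A bare "central" copy that both teams push to and pull from.
git clone -q --bare riders-src riders.git

# The drivers team clones the riders repo and makes the change they
# need on a branch, instead of waiting for the riders team to do it.
git clone -q riders.git drivers-checkout
cd drivers-checkout
git config user.email drivers@example.com
git config user.name "Drivers Team"
git checkout -q -b drivers/needed-change
echo "change the drivers team needs" > feature.txt
git add feature.txt
git commit -q -m "drivers: propose change to riders code"
git push -q origin drivers/needed-change
cd ..

# The riders team fetches the proposed branch, reviews it, and — if
# they approve — merges it into their mainline. That review-then-merge
# step is what hosted platforms wrap in a "pull request" UI.
cd riders-src
git remote add central ../riders.git
git fetch -q central
git merge -q --no-edit central/drivers/needed-change
```

On a hosting platform the review step happens in the platform’s pull-request interface rather than via a manual fetch and merge, but the underlying mechanics are the same: the proposing team never needs write access to the owning team’s mainline.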