
Links and articles about technology, design, and sociology. Written by Rian van der Merwe.

The strangeness of the Flappy Bird phenomenon


Flappy Bird — that insufferable iOS game — has been in the news quite a bit recently. One of the more incensed “reviews” comes from Paul Tassi’s Winged Fury:

Flappy Bird is not a game. It’s an addictive collection of pixels you don’t win, you simply play until you’re frustrated enough to delete it. And yet, it’s tapped into some primal sense of accomplishment for this, the attention-deficit world we live in. Have nothing to do for more than a few moments? Whip out your phone and flap your way through some pipes. You’ll be dead in seconds with each attempt, and therefore the game can kill any span of time from half a minute to hours. [...]

The time spent there is lost forever. The skill required to achieve high scores is wasted potential with no benefit whatsoever to the player. To brag about a score here is to boast to a friend how many times you managed to punch a brick wall before stopping.

Ian Bogost’s The Squalid Grace of Flappy Bird starts like this:

Games are grotesque.

And he only gets angrier from there:

Flappy Bird is a perversely, oppressively difficult game. [...] Flappy Bird is not difficult to challenge you, nor even to teach the institution of videogames a thing or two. Rather, Flappy Bird is difficult because that’s how it is. It is a game that is indifferent, like an iron gate rusted shut, like the ice that shuts down a city. It’s not hard for the sake of your experience; it’s just hard because that’s the way it is. Where masocore games want nothing more than to please their players with pain and humiliation (thus their appropriation of the term “masochism”), Flappy Bird just exists. It wants nothing and expects even less.

Look, way too much time has been wasted discussing how much time people are wasting on Flappy Bird. Still, it’s just so exactly like the internet to latch onto a phenomenon like this and then blow it completely out of proportion — and in the case of Forbes and The Atlantic, turn it into some highbrow existential reflection. It’s why I hate the internet, and it’s why I love the internet, all wrapped up into one silly little game.

But perhaps the last word should go to Bogost:

For no matter how stupid it is to be a game, it is no less stupid to be a man who plays one.

Always choose meaning over recognition

I’ve been thinking about this whole “being online” thing quite a bit over the past week or so, so James Shelley’s The Overinflated Currency of Personal Brands struck quite a chord:

What happens when the fame contagion infects an entire society? [American historian Daniel Boorstin] speculated that “The quest for celebrity, the pressure for well-knownness, everywhere makes the worker overshadow the work.” Increasingly we will go about our lives and work not actually concerned with the living and working itself, but with being known for our lives and work. Our lives and work become nothing but source material for the promotion of our personalities. Ultimately, achievement and accomplishment come to mean nothing, if they are not mechanisms for propagating our individual cult stories.

I see this more and more online, and it’s a worrisome trend — this tendency to measure the value of our work by the number of people who see it and comment on it. Our search for meaningful work should always outweigh our search for recognition. This idea of individual stories and meaning reminds me of Donald Miller’s words in one of my favorite books, A Million Miles in a Thousand Years:

If [it’s true about] a good story being a condensed version of life — that is, if story is just life without the meaningless scenes — I wondered if life could be lived more like a good story in the first place. I wondered if a person could plan a story for his life and live it intentionally.

Planning a good story for our lives has nothing to do with “well-knownness” and everything to do with the amount of meaning we pack into each day. I know I’m being a bit sentimental today, but it’s because our family is on the verge of a very big change, and much of it is driven by a renewed appreciation for living life with greater intention. Over the past few years I’ve seen my decisions increasingly being influenced by a desire for my daughters to one day say to their friends, “My Dad wasn’t afraid to take risks.” So that’s what we’re doing…

Switch Design

Anthony Colangelo explains how he uses a technique called Switch Programming to help solve coding problems:

We gave each other 30 seconds to explain our intended results, and nothing else. Then, we traded computers and got to work.

I was working on a fairly new project with a codebase that Mark really hadn’t been in, and Mark was working on an old project that I hadn’t touched for over a year and a half (long story). Point is, neither of us were intimately familiar with the project we were debugging. It didn’t matter—we knew what had to happen, and we dug in.

Within five minutes, our issues were solved. We explained to each other what we did to fix the problems, we learned a little something, and we got back to work.

This sounds like a great approach to solving design challenges as well. If you’re not sure how to get past a particular design problem, explain the intended result to someone, and give them five minutes to try to sketch a few solutions. The result probably won’t be perfect, but it’s a great way to get some fresh thinking to bump you back on track.

The issue with @HistoryPics and lack of attribution


I rarely find myself in a position where I want to “engage” with the company who makes my toothpaste, so I generally don’t follow brands on Twitter (or any non-individual accounts, for that matter). But I recently indulged in a couple of guilty pleasure accounts. Faces in Things posts pictures of (wait for it) things that look like faces, and Behind the Scenes posts (wait for it) behind-the-scenes pictures from iconic movies.

I found the accounts interesting and funny for a while, but then two things started bugging me:

  1. Photos are never attributed to their original sources, and
  2. These accounts (and several similar ones, most notably History In Pictures) seem to be run by the same people, who just end up retweeting their own stuff to create some kind of snowball effect.

I started suspecting that these accounts were created to amass hundreds of thousands of followers, only to then be sold to the highest bidder who wants to pimp their products to an unsuspecting audience. It’s a common practice on Facebook (I’ve written about that in The dirty world of Facebook EdgeRank Optimization), but I haven’t seen it on Twitter before.

Anyway, I unfollowed the accounts and didn’t think much more of it.

And then I read Wynken de Worde’s It’s history, not a viral feed1, in which he tears these Twitter accounts apart. He focuses quite a bit on the attribution issue, confirms that most of the accounts exist only for the bait-and-switch sale2, and then concludes:

Feeds like @HistoryinPics make it impossible for anyone interested in a picture to find out more about it, to better understand what it is showing, and to assess its accuracy. As a teacher and as someone who works in a cultural heritage institution, I am deeply invested in the value of studying the past and of recognizing that the past is never neutral or transparent. We see the past through our own perspective and often put it to use for our own purposes. We don’t always need to trace history’s contours in order to enjoy a letter or a photograph, but they are there to be traced. These accounts capitalize on a notion that history is nothing more than superficial glimpses of some vaguely defined time before ours, one that exists for us to look at and exclaim over and move on from without worrying about what it means and whether it happened. [...]

And so @HistoryInPics makes me angry not for what it fails to do, but that it gets so many people to participate in it, including people who care about the same issues that I do. Attribution, citation, and accuracy are the basis of understanding history. @HistoryInPics might not care about those things, but I would like to think that you do. The next time you come across one of these pictures, ask yourself what it shows and what it doesn’t, and what message you’re conveying by spreading it.

The inaccuracy of these accounts (see, for example, 12 More Viral Photos That Are Totally Fake) is a huge deal, of course. But for some reason it’s still the lack of attribution that grates on me the most. Back in 2009 I adopted Chris Messina’s use of slashtags on Twitter to attribute sources using the syntax “/via @name”. I’ve been using it ever since, and I saw many people who did the same. But it’s a practice that has slowly diminished over the past few years3.

Why is it a big deal to tell people where we found something? Isn’t the web free and open and we’re all one and blah blah blah? Sure, but the web is also fundamentally about hyperlinks. The ability to follow links back to their original sources — with plenty of pleasant detours along the way — is the core of what makes the internet such a wonderful place. Do you ever get happily lost on Wikipedia? Exactly. So if we stop caring about attribution, we rob others of the ability to find more people and topics that they might be interested in. I’ll say it again: It’s not about making the source feel important. It’s about helping others follow the breadcrumbs to places of interest.

So I guess the point of this post is to join in Wynken’s plea that we look at this new crop of Twitter accounts more critically, and call them what they are: get-rich-quick schemes. And to ask that we remember to take attribution seriously. It’s the right thing to do.


  1. Link via The Loop

  2. Also see Alexis Madrigal’s interesting reporting in The 2 Teenagers Who Run the Wildly Popular Twitter Feed @HistoryInPics

  3. There were other attempts at attribution syntax, of course — most notably the much-mocked curator’s ǝpoɔ

The awkwardness of IM chat indicators

I really enjoyed Ben Crair’s essay on those indicators that show you when someone else is typing during an IM chat — which he calls The Most Awkward Feature of Online Chat. I thought I was the only one who gets anxious when I see that indicator sit there for more than a few seconds. A quote from Clive Thompson sums it up:

Hmmm, why did they start typing and then stop? Obviously, most of the time this isn’t an issue, but if you’re involved in a sensitive or emotionally charged conversation, these questions of pausing can become emotionally charged themselves!

And this:

But knowing when your partner is typing can also have the unsettling effect that Thompson described: It makes visible the care with which we pick our words. And the more visible this care becomes, the more the reader distrusts the message. Conversation is supposed to feel natural, after all. The quip is less funny if it’s not offhanded. Flirtation is not so flattering if it appears to require labor. And the apology can seem less heartfelt when you know it’s been self-lawyered.

The Internet is hard.

Intent and Design

Jared Spool in Design is the Rendering of Intent:

Over the last year, we’ve started explaining design as “the rendering of intent.” The designer imagines an outcome and puts forth activities to make that outcome real.

He believes this is one of the reasons why, even though they’re both government sites, We the People is so much better than the enrollment system for Global Entry:

There’s no technical reason why the We The People team had to end up with the design they did. It could’ve been frustrating and hard-to-understand, just like the Global Entry site or many other government web sites. The only reason either team ended up with these sites is because they came to their designs with different intentions. [...]

Many of our design deliverables, such as wireframes, prototypes, and style guides, are as much about getting agreement on what we intend as they are to move our intentions closer to done. But the deliverables themselves do not produce the designs. It’s having all the people on the team, from the product managers through the developers, sharing the same intention.

We need to look at our design process as a way to come to a single intention as much as it is to make that intention real in the world. And it’s with the lens of this new definition that we can see we still have much work to do before every design will be a great one.

User intent is not a new design concept, but I like how Spool extends that to deliverables. Most deliverables are part of an essential process to get everyone to agree on the intent of the product, as well as the user intents that the product aims to deliver. Through this lens the right deliverables are closely related to Jobs to be Done, and therefore still very relevant and useful.

In real life

Justin Jackson’s This is real life is probably one of my favorite posts of the year so far. I don’t want to spoil it, so I’ll just quote this bit:

You see, I can pretend to be cool on the Internet, but in real life I’m just a dad in a bathrobe.

Justin, from a fellow dad in a bathrobe:

High five

Smart cities as citizen-inspired networks

Every time someone writes about smart cities my ears perk up. Sommer Mathis just published a great interview with Anthony Townsend (the author of the new book Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia). From The Rise and Fall and Eventual Rise Again of the ‘Smart City’, quoting Townsend:

But our “smart” cities are going to look much more like the web, where there’s going to be a lot of things deployed by individual decision, talking to each other through open standards in very ad hoc, loosely knit ways.

And what I like about that is that kind of architecture is actually what a good urbanist would tell you builds a good city. You build an open grid, you allow people to customize the pieces of it that they have jurisdiction over, and you get this fine-grained, resilient, vibrant kind of system with a lot of complexity, as opposed to a very controlled, hierarchical system that’s actually fairly brittle when it comes under stress.

It’s great to see smart city thinking evolve away from large centralised systems to citizen-inspired networks. Some more interesting articles on this topic:

The problem with “do what you love”


I’ve been thinking about Miya Tokumitsu’s In the Name of Love for days now. Miya argues that the mantra “Do what you love” devalues work and hurts workers:

There’s little doubt that “do what you love” (DWYL) is now the unofficial work mantra for our time. The problem with DWYL, however, is that it leads not to salvation but to the devaluation of actual work—and more importantly, the dehumanization of the vast majority of laborers. [...]

“Do what you love” disguises the fact that being able to choose a career primarily for personal reward is a privilege, a sign of socioeconomic class. Even if a self-employed graphic designer had parents who could pay for art school and co-sign a lease for a slick Brooklyn apartment, she can bestow DWYL as career advice upon those covetous of her success.

If we believe that working as a Silicon Valley entrepreneur or a museum publicist or a think-tank acolyte is essential to being true to ourselves, what do we believe about the inner lives and hopes of those who clean hotel rooms and stock shelves at big-box stores? The answer is: nothing.

It’s a tough critique, and at first I was looking for reasons to dismiss the argument. But the more I think about it, the more sense it makes to me. The “do what you love” idea is related to another theme I often see on social networks. It’s some variation of the message “If you don’t want to go back to work after vacation, you should find a job that doesn’t make you want to go on vacation all the time.” This has always felt wrong to me. I love my job — I really do. But that doesn’t mean I can’t also enjoy spending several days with my family, hiking, climbing, and hopefully with my nose buried in a zombie book.

This doesn’t mean I’m lazy, it doesn’t mean my job isn’t meaningful, it doesn’t mean I don’t like the people I work with. I will just always find a different kind of enjoyment in actively doing nothing than I do when I work. And it turns out that leisure time — and in particular, being bored — is really good for us. Nicholas Carr says this in The web expands to fill all boredom:

We don’t like being bored because boredom is the absence of engaging stimulus, but boredom is valuable because it requires us to fill that absence out of our own resources, which is a process of discovery, of doors opening. The pain of boredom is a spur to action, but because it’s pain we’re happy to avoid it. Gadgetry means never having to feel that pain, or that spur. The web expands to fill all boredom.1

So I just think that it’s ok to split up work and leisure. If we’re lucky we get to have jobs that we love doing — and we should absolutely work hard to accomplish that goal. But spending time away from work (or working on side projects) is important and healthy, and we shouldn’t be afraid to acknowledge that. It doesn’t diminish your job satisfaction or dedication if you enjoy being on vacation.

Anyway, I’ll let Miya have the last word:

Do what you love and you’ll never work a day in your life! Before succumbing to the intoxicating warmth of that promise, it’s critical to ask, “Who, exactly, benefits from making work feel like nonwork?” “Why should workers feel as if they aren’t working when they are?” In masking the very exploitative mechanisms of labor that it fuels, DWYL is, in fact, the most perfect ideological tool of capitalism. If we acknowledged all of our work as work, we could set appropriate limits for it, demanding fair compensation and humane schedules that allow for family and leisure time.

And if we did that, more of us could get around to doing what it is we really love.


  1. Also see Joseph Epstein’s excellent essay on boredom called Duh, Bor-ing

Product strategy doesn’t start with a technology choice

James Stout explains how Responsive Design won’t fix your usability issues for you. If your site is bad before the redesign, those problems won’t just magically go away once you’ve gone responsive. It’s a good article, and I especially like this bit:

But in the face of all this great technology, it’s more important than ever to avoid the “features for features’ sake” pitfall. Maintain that ever-present purpose and goal and be deterministic concerning whether these technologies help drive that goal, or whether they’re being included simply because they’re new. Use only those features you need and make them truly spectacular when you do.

The mobile revolution is nothing new, yet the battle to bring it about rages on. Understand that success on the web is not defined by the tools in your arsenal, which any web-MacGyver can use, but by the strategy you employ, including the very manner in which you approach the field.

It reminds me of one of my favorite Product Management quotes, from Barbara Nelson’s Who Needs Product Management?:

It is vastly easier to identify market problems and solve them with technology than it is to find buyers for your existing technology.

Des Traynor’s Product Strategy Means Saying No is also a great article on the topic of product focus and market needs:

Identifying and eliminating the bad ideas is the easy bit. Real product decisions aren’t easy. They require you to look at a proposal and say “This is a really great idea, I can see why our customers would like it. Well done. But we’re not going to build it. Instead, here’s what we’re doing.”

And since I haven’t linked to Michael Wolfe’s answer to Why is Dropbox more popular than other programs with similar functionality? yet this year, I might as well do it now and get it over with:

“But,” you may ask, “so much more you could do! What about task management, calendaring, customized dashboards, virtual white boarding. More than just folders and files!”

No, shut up. People don’t use that crap. They just want a folder. A folder that syncs.