Links and articles about technology, design, and sociology. Written by Rian van der Merwe.
I like the work Growing Agile does, so I clicked on Karen Greeves’s Agile & UX: What’s wrong with working one sprint ahead? with great interest. I’ve been struggling with this issue myself, but I’ve just never come across a better way to integrate UX and Agile, so I was very interested in their viewpoint. The post definitely got me thinking, but I have some issues with their objections. In the interest of letting you in on the thought process, I’m going to take some lines from the post and explain why I don’t agree.¹ Let’s start with their main concern with UX working one sprint ahead:
The main issue with this is one of focus. If the UX designers work one sprint ahead of the rest of the team, then the UX designers and the team actually never work on the same stuff at the same time. This means they are never focused on the same thing.
This is (partially) true, but isn’t it also true for individual developers on the team? They’ll be working on different stories. That’s the reason we have standup meetings: to make sure that those individual tasks work together to finish the whole.
I say it’s only partially true, because people can focus on more than one thing… So while the UX team is working on the next sprint, they can still be available to help with changes and questions on the current implementation work.
It doesn’t make sense to have the same standup meeting because you aren’t working on the same stuff.
Again, very few people are working on the same stuff. And this is actually an opportunity for the UX team to highlight any issues they have that developers can help them with.
It’s harder to help each other out because you are focused on finishing different stuff.
I don’t really get this point — we should be able to shift focus, and helping each other in different contexts often produces better results. Isn’t that the principle that code reviews are based on as well?
The people working ahead consider their deliverable some interim artefact, not working software, so there is a handover of both knowledge and responsibility.
In our context the UX team’s deliverable is working software in the pattern library, which uses the same frameworks and environment as our front-end developers, so that they can keep going seamlessly. So it’s possible to solve this problem.
People don’t see themselves as part of the same team, so you end up having a UX team and a dev team. This inhibits collaboration and communication.
I don’t agree with this point, either. We’re always going to have different skills on teams. Product Owners have different skills than the development team too. In fact, I think it can increase collaboration if developers are asked to provide input while the UX team is working on an upcoming feature.
The people working behind aren’t involved in the decision making that happened without them and as a result they don’t understand the reasons for certain design choices, this often leads to assumptions, and rework.
This doesn’t have to be the case. We have developers involved from Product Discovery all the way through prototyping, testing, and final specifications.
I’m not advocating writing code before any UX work has taken place. I’m saying the whole team should be involved in that work. Obviously the UX designer(s) take the lead here, but everyone on the team needs to see users use their product and understand the user journey map.
Ok, on this we agree!
In each sprint the whole team should set aside time to look at UX designs for the next sprint, as well as usability testing whatever has been completed.
I agree with this, and it’s how we work. But where I’m confused is that this still means that the UX team is working one sprint ahead of development — they’re working on “UX designs for the next sprint.” The only difference from how it’s usually done when UX and Agile work together is that it formalizes the time developers spend being involved in the UX process.
If I understood that correctly, then it turns out we agree after all, and maybe this post wasn’t necessary…
This is an important discussion, and I think they add valuable points to it, so please don’t see this as some kind of takedown attempt. I know (and like) Karen and Sam, but I also believe in public, respectful disagreement. So that’s my disclaimer! ↩
I really like the idea James Buckhouse explains in his article Story Map:
Halfway between a storyboard and a treasure map, it bundles the value and functional flow of your product with the delight people might feel at each step in your product. It sketches the UX flow without locking it down, and it delivers the gist of an idea and the emotional gestalt without prematurely belaboring the details.
A story map depicts how your product works and why it matters—but crucially—it does not explicitly spell out the final design, UI or in-the-weeds UX logic. It does, however, hold the product vision and works as a rubric against which the team can make better and faster decisions.
The article has some great examples worth looking at. As a big customer journey mapping fan, this is definitely something I want to try out.
I have one quibble though, and that’s with the name… “Story mapping” is already a very well established process in Agile development. Here are just a few articles about it:
- The new user story backlog is a map
- Visualizing Progress with Agile Storymapping
- Winnipeg Agilist: How to create a User Story Map
Semantics are important, so my suggestion would be to simply use a synonym:
@buckhouse Maybe just use a synonym and go with “narrative mapping”? — Rian van der Merwe (@RianVDM), August 2, 2014
Claire Evans discusses how much more awkward break-ups have become in the age of social media. From Luddite love:
It’s time to end it online. I’m not just talking about the pedantic tick-box of Facebook ‘relationship status’: there are images to untag, emails to delete, an ‘unfriending’ to coordinate. There is the careful unravelling of the social web.
In a sense, every relationship now exists on two levels. The moments we spend in one another’s company, the neurochemical buzz of proximity, and the communion of shared silence: these are real. But just as physical places now have their geolocated overlays, every relationship, too, throws a digital shadow — and depending on the individuals involved, it can loom larger than the people who cast it. As we increasingly live our social lives in public, in a medium that retains the traces of our social noodling, the record and the relationship itself can approach a point of indistinguishability.
Some smart thoughts from Scott Sehlhorst in Classifying Market Problems:
Many teams struggle with backlogs or roadmaps which appear to be a collection of “a bunch of stuff.” Most teams try and address the problems that manifest from having a giant list of stuff by getting better at managing giant lists. This is treating the symptom, not the cause. If you’re trying to juggle hundreds of requirements, the problem isn’t that you have hundreds of requirements, the problem is that you don’t know why you have requirements.
Which reminds me of this cartoon, because that’s what happens when you get better at managing lists instead of getting better at figuring out how to make your product useful:
Of course, it’s also at this point that users tend to take matters into their own hands:
Want to read some philosophical pontification about selfies? Well, I’m here to help you out. Start with the recent Can you have self-worth without self-love?, in which Simon Blackburn believes we can do better:
If culture shifts one way, it can also shift back. Is it possible to imagine a reversal, so that something approaching a social contract, or a feeling of public spirit, a contempt for indecent expenditure, an embarrassment at vulgar display, or simply a desire to leave as modest a footprint as possible, begins to take over our sense of what we can expect from ourselves and others? We know that there are cultures in which it is poor form to shout that you are a taller poppy than any other.
But perhaps, above all, we should encourage the joyous, subversive spirit of mockery. If there are few things more awful than the arrogance and hubris of conceit, is there anything more ridiculous than a display of vanity? The word itself carries its own condemnation (Latin: vanus, empty; vanitas, emptiness). We can learn not to care about display, and not to crave the admiration of others. We could even learn to display fewer selfies.
Once you’ve whetted your appetite, fill your Instapaper queue with these:
- A Good Angle Is Hard to Find
- Google Makes You Smarter, Facebook Makes You Happier, Selfies Make You A Better Person
- The Documented Life
- In Praise of Selfies: From Self-Conscious to Self-Constructive
You’re welcome. Seriously though, it’s a pretty interesting phenomenon, and like most new things, we’re in that phase where it’s either really horrible and going to destroy everything, or it’s making us better people, depending on who writes the article.
Who doesn’t love Ira Glass? What a legend. His I’m Ira Glass, Host of This American Life, and This Is How I Work feature in Lifehacker is really good:
I’d just say to aspiring journalists or writers—who I meet a lot of—do it now. Don’t wait for permission to make something that’s interesting or amusing to you. Just do it now. Don’t wait. Find a story idea, start making it, give yourself a deadline, show it to people who’ll give you notes to make it better. Don’t wait till you’re older, or in some better job than you have now. Don’t wait for anything. Don’t wait till some magical story idea drops into your lap. That’s not where ideas come from. Go looking for an idea and it’ll show up. Begin now.
I also loved this response to the question What’s your best time-saving shortcut/life hack?:
I’ve got nothing. Reading other people’s answers to this question on your website today made me realize I live my life like an ape. I eat the same breakfast and lunch everyday, both at my desk. I employ no time-saving tricks at all.
So, I guess in related news: I bought making-it-right.com, and I don’t know, maybe it will become something, maybe not…
Hugh Howey is the author of WOOL, one of my favorite sci-fi series. His essay Your Fear is My Opportunity is a rant about Amazon and self-publishing, and I really appreciate his perspective as someone who has gone through the hard process of self-publishing:
The things I advocate for: Reasonably priced e-books, for publishers to take risks and do exciting things, for us to embrace the future of storytelling and allow it to coexist with the past, to release all editions of a work at once, to get rid of DRM, to mix up genres and do something fresh and new … these are all things I’ve wanted as a reader for longer than I’ve been writing.
I am a reader first. And I want more readers. Selfishly, as a reader, I want more readers. I want to see airports full of people staring at books, e-readers, and tablets laced with text. Not people staring at cell phones, Candy Crush, Facebook, or authors’ blogs. I want book culture everywhere. I want interactions with strangers to be about what they’ve read lately. I want my social media feed to be all about books. I’m an addict, and I want to get other people hooked. Maybe that’s a bad thing. I don’t care.
If you haven’t read WOOL you should give it a try. Great storytelling whether you’re a sci-fi fan or not.
Huge thanks to Tower 2 for sponsoring the site this week!
Version Control is an essential tool in today’s web and software world and a fundamental part of the workflow in teams large and small.
We believe that version control with Git should be easy. And why not beautiful?
It’s not about the command line or a GUI. It’s about how to be more productive, avoid mistakes, and make your life easier.
With Tower 2, we’ve worked hard to make the best Git client even better. Completely redesigned and reengineered, Tower 2 comes with more than 50 additional features, a brand new design, and outstanding performance.
We believe we’ve made a tool that helps you to become a better professional.
Download the free trial today and see for yourself.
A couple of articles about work and technology caught my eye this week. First, Claire Cain Miller describes how Technology, Aided by Recession, Is Polarizing the Work World:
[A new working paper from the National Bureau of Economic Research], which analyzed data from the Current Population Survey from 1976 to 2012, illustrates that the recession had a disproportionately large effect on routine jobs, and greatly sped up their loss. That is probably because even if a new technology is cheaper and more efficient than a human laborer, bosses are unlikely to fire employees and replace them with computers when times are good. The recession, however, gave them a motive. And the people who lost those jobs are generally unable to find new ones, said Henry E. Siu, an associate professor at the University of British Columbia and an author of the study.
Now, combine that problem in the mid-paying job market with an issue Thomas B. Edsall pointed out a few weeks ago in The Downward Ramp:
Just one example: the drying up of cognitively demanding jobs is having a cascade effect. College graduates are forced to take jobs beneath their level of educational training, moving into clerical and service positions instead of into finance and high tech.
This cascade eliminates opportunities for those without college degrees who would otherwise fill those service and clerical jobs. These displaced workers are then forced to take even less demanding, less well-paying jobs, in a process that pushes everyone down. At the bottom, the unskilled are pushed out of the job market altogether.
So, college graduates are pushed into mid-paying jobs, and those jobs are being replaced by technology. Not good.
Meanwhile, in opposite world, Louise Aronson writes about The Future of Robot Caregivers (if you’re counting, that’s three for three on the New York Times):
We do not have anywhere near enough human caregivers for the growing number of older Americans.
Zeynep Tufekci’s excessively titled Failing the Third Machine Age: When Robots Come for Grandma is a good critique of that piece:
Let me explain. When people confidently announce that once robots come for our jobs, we’ll find something else to do like we always did, they are drawing from a very short history. The truth is, there’s only been one-and-a-three-quarters of a machine age—we are close to concluding the second one—we are moving into the third one.
And there is probably no fourth one.
Humans have only so many “irreplaceable” skills, and the idea that we’ll just keep outrunning the machines, skill-wise, is a folly.
Put all these pieces together and you get a very scary vision of the future of jobs. The good news — I think — is that job != work.
The future of jobs might be bleak, but the future of work certainly isn’t. Technology might be taking our jobs, but it’s also giving us new ways to be creative. To be entrepreneurs. To work. As programs like Girls Who Code continue to grow, I’m increasingly optimistic about my daughters’ futures. They might not get a “regular” job one day. But my role as a parent is not to prepare them for a job anyway. It’s to foster in them the tenacity and grit to learn how to think big and make things. I’m excited about that.
Designers are more important in today’s digital world than ever. You are still responsible for creating flexible design systems and finding the styles that will connect with the user. Now you just have to do it faster. By ditching the PSD and streamlining the design process, you aren’t just providing the client the value of saved time, you are making yourself more valuable. And ultimately, the real goal of the Post PSD Era is about creating more value — for your customers, for your team, and for you.
The graphic designer’s outcomes are just different now, even if they still use Photoshop. Instead of producing pixel-perfect mockups, their time is spent creating visual inventories, style tiles, and other artifacts that are essential in an atomic design environment.