Untangling listicles and data

This is a picture of my son’s ball of yarn. A bad listicle is like a knotted ball of yarn… sometimes there’s a lot of untangling to do to get to the facts.

a bad listicle is like a knotted ball of yarn

I remember the day a managing editor uttered the word “listicle.” I giggled and was torn between a Beavis and Butthead snort and a disgusted of-course-only-a-guy-would-say-that reaction.

Shortly thereafter, I designed my first one (the effects of the recession on Wisconsin, if you’re curious). It was actually a charticle because it had pictures. (The American Journalism Review doesn’t like charticles. I suspect it likes listicles even less.)

Wired recently wrote in defense of listicles. It is reassuring to find out that listicles won’t, in fact, give you ADHD. The Guardian is more tongue-in-cheek in its skepticism, but kindly proffers a few literary examples to redeem the form. I think they lay it on a little thick in number 7, but it’s a good read. More seriously, the trend has spawned a new genre, the apocalypsticle, roundly lambasted by Politico a few months ago as “dumbing up” the tragedy of Ukraine.

I won’t go into our obsession with lists. People like them, and you already know why they do. I confess that I’ve never been successful at writing a listicle. I’m too wordy. I rebel at the constraints of a finite number. I just can’t get the thing to hang together to my satisfaction. I need paragraphs to do that. So perhaps I’m just a little jealous.

But I am concerned that our collective fascination with the genre, coupled with our penchant for so-called data nuggets, is not making us stupider, per se, but is making it too acceptable for journalists and others to conjure up content that does not always responsibly use data and facts to support a story.

Listicles: If you’re going to compare apples to bicycles, go ahead. But don’t pretend you’re just comparing apples.

My biggest beef with the quick comparisons used in listicles is that they make it too easy to cherry-pick disparate data points and thread them together into a seemingly logical order to support a seemingly logical claim. The comparisons themselves are not always logical: you’ll frequently see different periods in time compared against each other, or slightly different variables set side by side. If you don’t look closely, you can come away with the wrong message. And if you do look closely, you’ll be confused. It’s okay to compare things from different time periods, but you need to explain that. You also need to explain which things change from one comparison point to another.

The order in which points are presented in a list can also hide flawed comparisons.

If I tell you that, in 2010, 1 million green widgets were produced in the U.S., compared to 5 million red ones produced in 2012, you might think this is a ridiculous comparison and you’d be hard-pressed to understand what, if any, trend existed here. The time period is different, as is the variable.

But what if I separate those comparisons with other items in a list? You might lose track.

1.  In 2010, 1 million green widgets were produced in the U.S.

2. In 2014, China is the largest manufacturer of widgets, with the U.S. ranked at number 5.

3. Over the past 5 years in the U.S., the majority of widgets have been produced in the South.

4. Nationally, widget manufacturing plants are closing down.

5. In 2012, there were only 5 million red widgets produced in the U.S.

Now think back to how often you have come across this in a list.

Lists should offer data in order to provide perspective and context

Too many lists and listicles are simply a series of disparate data points that offer the reader no meaningful way to compare numbers with broader context.

If I tell you that there are “x” number of people living in dire poverty in the United States, I owe you a bit more than that.

I need to tell you what percentage that number is of the broader population, and I need to define that population. I need to tell you what “dire” means. Better yet, I should give you the dollar figure that defines that level of poverty (and cite a credible source for the definition) so that you’ll understand the context. I should also give you the time period for this data. And in a second bullet, it would be helpful if I provided information about how this figure has changed over time–preferably over a long period, to smooth out short-term variation. Here’s a better way to accomplish the first bullet:

Of the [#] million adults* living in the U.S. in 2013, [#] of them (x %) are living at or below 50% of the federal poverty level (an annual income of $5,835 for one person**)

*Adults aged 18 and older
**Federal poverty levels 2014
Citation for data source
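If it helps make the arithmetic concrete, here is a minimal sketch of the same idea in code. Every number the bullet needs (the defined population, the time period, the count, the derived percentage, and the dollar threshold) has to be stated or derivable, or the statistic floats free of context. The population and poverty counts below are placeholders, not real figures; only the $5,835 threshold comes from the example above.

```typescript
// Hypothetical figures only (placeholders, not real data), to show the pieces
// a contextualized poverty stat needs: a defined population, a time period,
// a percentage, and a dollar threshold with its source.
const adultPopulationMillions = 240; // adults 18 and older, 2013 (placeholder)
const direPovertyMillions = 12;      // at or below 50% of the poverty level (placeholder)
const halfPovertyLine = 5835;        // 50% of the 2014 federal poverty level, one person

// Derive the percentage from the two counts rather than asserting it separately.
const pct = ((direPovertyMillions / adultPopulationMillions) * 100).toFixed(1);

const bullet =
  `Of the ${adultPopulationMillions} million adults living in the U.S. in 2013, ` +
  `${direPovertyMillions} million of them (${pct}%) are living at or below 50% of the ` +
  `federal poverty level (an annual income of $${halfPovertyLine.toLocaleString()} for one person).`;

console.log(bullet); // plus footnotes citing the population and poverty-level sources
```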

The examples above are just that, examples. But the other day I stumbled across Mother Jones’ “10 Poverty Myths, Busted,” and it proves my point. I took the time to deconstruct it below.

Not every set of facts lends itself to a list, or a conclusion.

Before you read further, please understand that my notes below are not meant to undermine the author’s claims, but rather to strengthen them. The notes are not intended to be exhaustive. Reasonable minds can poke holes in them at will. But my point remains the same: We have lists. We have data. We have people very willing to read them and share them. If we have strong facts that actually relate to one another, and we lay those facts out clearly and in a logical order, the reader will draw logical conclusions. But if we don’t do these things, at best, we lead readers down a rabbit hole that leaves them frustrated and confused.

At worst, we turn facts into opinions that are interpreted as facts.

Deconstructing “10 Poverty Myths, Busted,” by Mother Jones

The main problem with this list is the disparate nature of 10 claims, unconnected by time, logic, or data set, that together attempt to combat poverty stereotypes. Each Mother Jones myth and its supporting statistics appear first; the text prefaced by “Issue” is mine.

1. Single moms are the problem. Only 9 percent of low-income, urban moms have been single throughout their child’s first five years. Thirty-five percent were married to, or in a relationship with, the child’s father for that entire time.*

Issue: To me, this bullet appears to cherry-pick the data. What makes the first 5 years a magic number? What about the first 10, 15 or 18? If there is importance to those first 5 years, context would be helpful.

Issue: It is unclear whether the dads in the relationships are actually living in the household. This matters because whether or not a dad lives in the household affects the income level of the mom and child. If he lives in the household, for example, the household income can be higher (if he has an income to contribute), which can affect a mom’s low-income status.

2. Absent dads are the problem. Sixty percent of low-income dads see at least one of their children daily. Another 16 percent see their children weekly.*

Issue: There is a difference between low-income and poverty. The label “low-income” is subjective. The label “poverty” is quantitative (there is a defined federal poverty level). So if you use a subjective label like “low-income” and don’t define it upfront, you can give the appearance of anything you like, really.

Also note how this bullet is worded. On the flip side, you could also say that a quarter of kids don’t see their dad as often as once a week.

And notice how bullet #1 discusses dads living in the household (presumably), whereas this item discusses dads who live outside of the household.

3. Black dads are the problem. Among men who don’t live with their children, black fathers are more likely than white or Hispanic dads to have a daily presence in their kids’ lives.

Issue: Cherry-picking again. Why single out black dads in this one bullet? There are Hispanic dads who live at or below the poverty level. Ditto white dads. Did MJ simply pick the most appealing (higher) number to make the point stronger?

4. Poor people are lazy. In 2004, there was at least one adult with a job in 60 percent of families on food stamps that had both kids and a nondisabled, working-age adult.

Issue: The time period shifts for these comparisons, making it impossible to compare or understand trends. This bullet dates to 2004. Others (bullets 5 and 6, for example) date to 2012.

5. If you’re not officially poor, you’re doing okay. The federal poverty line for a family of two parents and two children in 2012 was $23,283. Basic needs cost at least twice that in 615 of America’s cities and regions.

Issue: The Economic Policy Institute calculator and data from which this bullet is derived are for 2013. The federal poverty level cited is for 2012. Not a huge deal, but you can expect the numbers to change in a year due to inflation, cost of living, etc.

6. Go to college, get out of poverty. In 2012, about 1.1 million people who made less than $25,000 a year, worked full time, and were heads of household had a bachelor’s degree.**

Issue: Context would be helpful here. 1.1 million people out of how many? And isn’t it expected that new college grads won’t make much money at first? How long have these 1.1 million people been in the workforce? Poverty is unacceptable at any level (my opinion here), but telling me that a forty-year-old mother of two with a degree who has been in the workforce for 20 years is living at the federal poverty level is one thing. Telling me the same thing about a recent grad who has been out of school for a year sends a different message.

7. We’re winning the war on poverty. The number of households with children living on less than $2 a day per person has grown 160 percent since 1996, to 1.65 million families in 2011.

Issue: This myth is so subjective and vague that it’s hard to dissect. It would be helpful to know if the $2 a day includes or does not include federal assistance (TANF, SNAP, etc.).

8. The days of old ladies eating cat food are over. The share of elderly single women living in extreme poverty jumped 31 percent from 2011 to 2012.

Issue: This lacks the context to be clear and helpful. What does extreme poverty mean? If you dig into the source report, you’ll see on page 3 that extreme poverty is defined as income at or below the federal poverty level. The statistic itself is startling and would be strengthened by an upfront definition of the spectrum of poverty, preferably in dollar figures as well.

9. The homeless are drunk street people. One in 45 kids in the United States experiences homelessness each year. In New York City alone, 22,000 children are homeless.

Issue: To put this into perspective, what is the percentage of homeless kids compared to the percentage of homeless adults with (presumably) addiction problems? The connection between homeless individuals with substance abuse issues and children may be widely discussed, but as a statistical comparison here, the pairing is clunky.

10. Handouts are bankrupting us. In 2012, total welfare funding was 0.47 percent of the federal budget.

Issue: It’s difficult to follow the citation, partly because of how “2012” and “federal budget” are defined. If you follow it, you’ll see that this bullet cites the budget that the president proposed for FY2013 on February 13, 2012 ($3.8 trillion). The key word is “proposed,” and the budget itself was for FY2013, not 2012, as the wording in this bullet implies. The bullet implies that spending happened as a proportion of an actual budget, but in reality it cites a 2012 figure ($16.6 billion) as a percentage of a proposed budget for a different year. And a proposed budget is only a draft: it must be approved by Congress (remember all the subsequent political wrangling, counter-proposals by Republicans, sequestration, etc.). If you want to know what happened to the 2013 budget, you can check out CBO’s later analysis. So saying that total welfare funding in 2012 “was” a percent of a proposed budget intended for FY2013 is neither correct nor easy to follow.

Issue: Then there’s how “welfare” is used in the bullet. It’s unclear what that means, and clarifying it upfront would have been more helpful. Presumably “welfare” refers to the $16.5 billion cited by the Center on Budget and Policy Priorities for TANF (Temporary Assistance for Needy Families, financial aid for some poor families), but you have to do a little digging in the source to ascertain that. “Welfare” could also mean SNAP (food stamps), or something else entirely, as this helpful post on Real Clear Politics points out. This is needlessly unclear, given that “welfare” is such a politically charged word.

Sources as cited in the above Mother Jones list:
*Source: Analysis by Dr. Laura Tach at Cornell University.
**Source: Census

So, listicles. Use them wisely and well. And if not, stick to the goofy stuff, not the serious stuff. Like, top 6 reasons this post would have been shorter and more effective if I had used a list.

Mind the gap: Advocacy journalism fills a void

By now, it’s safe to say that the digital ecosystem is shaking things up for journalists. Traditional journalists are turning into brands (Ezra Klein at Vox and Nate Silver at 538, to name a few). Journalists are getting paid for clicks. Social media tools are creating a new breed of reporting through conflict journalism and citizen journalists—coverage that bleeds into news reporting and advocacy. And mission-driven social media sites (like Upworthy) are partnering with advocacy organizations to create serious, in-depth original content, as the Nieman Journalism Lab reported last month. Phew.


And now advocacy organizations are getting into the mix. They’re taking the reins by exposing, researching and writing about the issues they care about in a genre of journalism known as advocacy journalism. Advocacy journalism has been around for a while (remember muckraking?). But today’s digital landscape seems ripe for innovation by those that want to take the genre further.

A recent article by Dan Gillmor in Slate’s Future Tense project provides a thought-provoking and current look at the nexus of advocacy and journalism today, one that made me want to dig a little deeper into the subject to see where the field stands and what hurdles it faces.

Advocacy journalism is an interesting genre. On the one hand, it seems like a big deal—by injecting a point of view, it appears, at first blush, to upend the sacrosanct “objective reporting” model that is the foundation of traditional journalism. But in fact, today’s so-called traditional journalism is itself rife with points of view (reporters are human, after all, and routinely bring their personal perspectives to the questions they ask and the subjects they cover).

It’s no coincidence that, at the same time as advocacy journalism is getting more attention, investigative reporting in traditional media—the bread and butter of deep, immersive journalism—is diminishing due to shrinking newsroom budgets, capacity, and interest. (The American Journalism Review wrote about it in 2010, and things don’t look that much rosier if you read about revenues and ad dollars in Pew Research Center’s State of the News Media, 2014 report or the internet marketing firm Vocus’s 2014 State of the Media.)

So, resources for investigative reporting in traditional media may be diminishing, but the need itself certainly hasn’t. The immediacy of the internet and social media reporting makes the gaps left by traditional news organizations more visible than ever before, and it has opened up the playing field for those who want, and need, to write about social change and who see advocacy journalism as yet another tool for driving that change. It is here that advocacy organizations are stepping in.

Gillmor mentions the Upworthy partnership with Human Rights Watch, Climate Nexus and ProPublica, but he also reminds us of the work of the libertarian Cato Institute, and the ACLU, noting that these organizations are not just writing about their issues—they have invested in hiring talented, investigative journalists to do the work.

One of my earlier posts this year discusses how advocacy organizations are harnessing social media to effect social change on their own terms (I wrote about the MIT Center for Civic Media’s study of the media coverage of the Trayvon Martin tragedy, and of how it was framed and defined in part by digital-savvy advocacy organizations). In the same way, advocacy organizations are equipping themselves with investigative journalists to define the things that need fixing in our society, again, on their own terms.

Transparency and bias concerns apply to all reporting, not just advocacy journalism

As with any form of journalism (see a post that I wrote about the importance of trusted messengers correctly reporting the facts), there are always legitimate concerns around the ability of the “reporter” to be transparent about the perspective and bias that he or she brings to a story, especially when money comes into the picture (for example, a journalist embedded in an advocacy organization writing about an issue that is driven by a funder). But one can easily make the argument that journalism has never been immune to this predicament. Media brands are, after all, owned by corporations—remember Michael Bloomberg’s takeover of his newsroom and Murdoch’s editorial biases? The issue is not so much that money is paying for journalism (it always has). Rather, the issue is one of transparency and fairness (something Gillmor acknowledges in his online book, Mediactive).

Most recently, advocacy journalism was roundly dismissed by Buzzfeed’s Editor-in-Chief Ben Smith. When Eric Hippeau, a venture capitalist (and early investor in Buzzfeed), sat on a panel at the Columbia School of Journalism and asked Smith about the fine line between different forms of journalism and advocacy, Smith responded, “Um, yeah, I hate advocacy. Partly because I think, you know, telling people to be outraged about something is the least useful thing in the world.” (The video is here, and a good article with more on Buzzfeed is here.)

That’s kind of ironic given Buzzfeed’s public missteps and its association with the Koch brothers on the issue of immigration reform. I’m not saying that the partnership is in and of itself a concern (Slate’s Dave Weigel described it as a “pro-immigration reform” panel that was very much in keeping with the Koch brothers’ longstanding interest in the issue). But the association is not one to be ignored, either, particularly from a man who claims to hate advocacy. I’m still coming around to the idea that “Buzzfeed” and “journalism” can be conjoined. I don’t say that to be snarky—I say that to mean that all lines are blurring, including newstainment sites like Buzzfeed that are reinventing themselves in the digital journalism mold, whatever that is.

Medialens has a good take on the back-and-forth skepticism around advocacy journalism (“All Journalism is ‘Advocacy Journalism’ “) and offers some clear-eyed perspective by pointing to numerous examples of how ‘non’ advocacy journalism exhibits bias (ranging from uber-left Ira Glass’s omission of the U.S. role in Guatemalan genocide to Jeff Bezos’s 2013 purchase of the Washington Post alongside Amazon’s $600 million cloud-computing deal with the CIA—on the heels of its decision to stop hosting WikiLeaks in 2010).

Journalism is changing: traditional media gatekeepers are going away

As Gillmor points out (and as I’ve written previously), back in the day, traditional media were largely gatekeepers to reporting. If you were an advocate or an organization with a story and a point of view, you had to get a reporter’s interest and rely on that person to pitch it to an editor. To stand the best chance of success, you had to do the research, get the facts straight, frame the narrative, and package it up so that a reporter could understand it, pick it up, and pitch it. Those days are disappearing, and in their place is a new frontier of blurry gray lines of people and perspectives, all vying for a chance to shape the news agenda of the next hour. Investigative reporting is what gives all of us perspective, makes us take a collective deep breath, and think beyond that next hour.

It’s unsettling, but it’s also an opportunity to fill in the gaps left by the old guard, as long as we do it right. So, what’s right?

Doing it right: some things should never change

I recall reading (and tweeting) about Upworthy’s announcement in Nieman Lab’s post last month. Given that I work in a policy and advocacy organization that has a keen interest in seeing its point of view accurately and widely expressed in the media, I wondered how we could inject ourselves into a similar partnership. And, if we could, what would we say? How would we separate our social passion from the hard and complicated truths of complex political realities? For me, it raised more questions than I could answer. But it’s tremendously exciting to see where others are going.

I’ll be curious to see how (or if) these partnerships help fill the void left by the diminished investment in investigative reporting in traditional newsrooms. And I’m also eager to see what new best practices emerge as a result. But regardless of how things change, the responsibility of transparency has never been greater. And all of these changes add up to the same principles that should never, ever change in journalism—report the facts, be clear and transparent about your point of view, and tell people where your money is coming from.

Dataviz is not a one-way street: fostering critical thinkers

Last year, I wrote two posts about the important editorial role that the designer plays in visualizing data (you can read them here and here). This week, Moritz Stefaner did a much more eloquent (and concise) job of underscoring the sensibility and the responsibility of the designer in crafting a data visualization.

But what I found particularly insightful about Mr. Stefaner’s post is his different characterization of what many of us (including me) typically describe as “telling the story” through data. He challenges that oft-used paradigm and, instead, offers a more compelling model: the participatory, audience-driven cognitive experience that is the true power of data visualization. To me, the most compelling part is the power that data visualizations have to create a community of critical thinkers.

The story-telling model, according to Mr. Stefaner, is a one-way street that invokes a limiting, linear “start here, end here” dynamic–one that ignores the true opportunities that data visualization presents. Mr. Stefaner’s more aspirational definition has the reader exploring and creating his/her own experiences through the visualization.

In hindsight, it makes so much sense, right? It’s the interactivity of data visualization beyond sorting, filtering, reading and reporting. Rather, it’s a way to respect and foster the intellectual curiosity of the reader, thus fostering a culture of critical thinkers who go beyond the more passive consumption of being “told” a story.

Mr. Stefaner tells us that this is his motivation for creating rich, immersive projects, ones that turn his audiences into “fellow travelers,” as he calls them. I absolutely love this characterization, although, to me, it is not without some challenges. For example, there are times when I find myself lost in a data visualization that is too complex. Rather than stimulate my intellectual curiosity and propel me deeper into that visualization, I find myself frustrated that I’m being left behind by the author/designer, that I’m missing something important. This isn’t what Mr. Stefaner is suggesting, but it’s worth noting nonetheless.

This is one of the best things that I’ve read in a while, and one I’ll remember.

How are Legos like content? Writing in chunks.

As the mother of a 4-year-old, my life is surrounded by Legos. Daily, I watch my son tear down part of his monster lunar lander to quickly repurpose it as an alien observation tower with a parking garage for “the cars of the aliens.” The kid thinks in chunks, seeing his Legos not as specific blocks, but as cross-functional units ready to be quickly repurposed into other “stuff.”

Legos: cross-functional units ready to be quickly repurposed into other “stuff.”

It’s a good way to think about content, too—be it words or data. There’s no way every single one of your readers is going to read everything that you push out. And it’s likely that the things you write about have staying power well beyond the day you publish them, right?

But the minute you post your content, it is immediately competing against everything that comes after it. The social streams of Facebook, Twitter, and other social media, plus the feeds of news aggregators, have very short memories. People move on.

Chunks: flexibility in writing and repurposing for later

But if you think of and write your content in standalone, succinct “chunks” (going back to the Lego analogy here), you can use those chunks later. You can combine them and thread them together into an overview narrative, like this morning’s “Three Technology Revolutions” story by the Pew Research Center’s Internet Project. Notice how the main narrative in the story consists of just three short sections (a summary paragraph followed by a large data graphic).

What’s interesting about this is not the data itself, but rather how, if you click the links inside each paragraph, you’ll notice that they link to older articles published last year. And each of those articles is largely concentrated on a data graphic with a small amount of text for context.

As you can see, using (and writing) content in “chunks” allows you lots of flexibility as a writer, and even more flexibility in repurposing your content. (Last year I wrote about taking a similar approach to building infographics.) It’s tough to do this.

Builds discipline

As both a writer and a data designer, I think writing and designing in chunks builds discipline. You have to prioritize. You have to write or show what matters. You have to always think about your audience’s needs and your responsibility to convey the right thing in a short format. That need is heightened because you don’t have the luxury of space or length. You have to know when to stop.

It’s harder to repurpose “chunks” if you don’t think about it upfront, but it’s possible. Here are two ways:

Plan it out that way

One way is to plan it out over time. Think about the longer narrative, split it up into standalone, succinct sections, release it over a period of time, then wrap it up with a broader narrative that allows you to create “new” content that links back to your old stuff, providing value and framing for the reader. Write and illustrate those chunks as pieces of content that stand on their own, and keep them short; this makes it easier to repurpose them later. Not all stories lend themselves to this approach. But even for longer-form content, this is a nice way to tease out the central messages if you’re trying to reach a broader audience.

Or… mine old content for new ideas

If you haven’t done that, it’s always possible (and a good idea) to look over what you’ve published over time and see if there are any natural patterns that can be threaded together into new content. This might be harder if it wasn’t part of your original approach. But if you’re struggling to produce valuable content, it’s a good idea to try.

And keep in mind that this stuff isn’t low-value content that you’re pushing out for the sake of traffic. Presumably, you thought it was relevant enough to publish the first time. Repurposing older content with a new, more current context (tied to something in the news cycle, for example) can be a good deal for your readers.

New isn’t always better. Relevant context is.

“New” isn’t always better. Sometimes packaging it up differently and concisely is a great way to get people to find something they may not have read the first time, and gives you the ability to publish “new” content. Or it gives you the ability to build a rockin’ alien observation tower with a parking garage.

Case study: creating a 50-state data visualization on elections administration

Ever wonder how well states are running their elections systems? Want to know which state rejects the highest number of absentee ballots? Or which state has the shortest wait times to vote? And which state has the highest rate of disability- or illness-related voting problems?

A new interactive elections tool by The Pew Charitable Trusts (the Elections Performance Index) sheds some light on many of the issues that affect how well states administer the process of ensuring that their citizens have the ability to vote and to have those votes counted. Measuring these and other indicators (17 in all, count ‘em), Pew’s elections geeks (I was a part of the team) partnered with Pitch Interactive to develop a first-of-its-kind tool to see how states fare. Today’s post is a quick take on how the project was created from a data visualization perspective.


Pew’s latest elections interactive: The Elections Performance Index

Lots of data here, folks. 50 states (and the District), two elections (2008 presidential and 2010 mid-term) and 17 ways to measure performance. Add to that the ability for viewers to make their own judgments–there is an overall score, for sure–but the beauty of this tool is that it allows users to slice and dice the data along some or all indicators, years and states to create custom views and rankings of the data.

You might already know about Pitch Interactive. They’re the developers who created the remarkably cool and techy interactive that tracks government-sponsored weapons/ammunition transactions for Google’s Chrome workshop (view this in Chrome) as well as static graphics like Popular Science’s Evolution of Innovation and Wired’s 24 hours of 311 calls in New York.

The data will dictate your approach to a good visualization

When we sat down with Pitch to kick around ideas for the elections interactive, we were initially inspired by Moritz Stefaner’s very elegant Your Better Life visualization, a tool that measures 11 indicators of quality of life in the 30-plus member countries of the Organization for Economic Cooperation and Development (OECD). Take a look–it’s a beautiful representation of data.

And though, initially, we thought that our interactive might go in the same direction, a deeper dive into the data proved otherwise. Comparing 30 countries along 11 indicators is very different from comparing 50 states plus DC, 17 indicators and 2 election cycles. Add to that the moving target of creating an algorithm to calculate scores for different user-selected combinations of indicators, and you’ve got yourself a project.
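To make that concrete: one simple way to handle user-selected combinations is to average whatever normalized indicator values the user has checked and re-rank states on that custom score. This is purely an illustrative sketch, not the index’s actual methodology (the real scoring and normalization are more involved), and the indicator names and values below are made up.

```typescript
// Purely illustrative: average the normalized (0–1) values of whichever
// indicators the user has selected, then rank states by that custom score.
type StateRecord = { state: string; indicators: Record<string, number> };

function customScores(
  data: StateRecord[],
  selected: string[]
): { state: string; score: number }[] {
  return data
    .map(({ state, indicators }) => {
      const values = selected.map((key) => indicators[key] ?? 0);
      const score = values.reduce((sum, v) => sum + v, 0) / Math.max(values.length, 1);
      return { state, score };
    })
    .sort((a, b) => b.score - a.score); // highest custom score first
}

// Hypothetical usage with made-up indicator values:
const scores = customScores(
  [
    { state: "MN", indicators: { turnout: 0.78, registrationRate: 0.91 } },
    { state: "ME", indicators: { turnout: 0.56, registrationRate: 0.88 } },
  ],
  ["turnout", "registrationRate"]
);
console.log(scores);
```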

After our interactive was live, I talked to Wesley Grubbs (founder and creative director at Pitch) about the project. I was interested in hearing about the hurdles that the data and design presented and how his creativity was challenged when working with the elections data. One of the first things that he recalled was the sheer quantity of data, and the complications of measuring indicators across very different election cycles. If this sounds too wonky, bear with me. Remember, one of the cool things about this interactive is that it allows you to see voter patterns (e.g., voter turnout) across two very different types of elections–mid-term elections (when many states elect their governors and members of Congress and, in many cases, hold municipal elections) and the higher-profile presidential elections. Pitting these two against one another is a bit like comparing the proverbial apples and oranges. Voting patterns are dramatically different. (The highest rate of voter turnout in 2008–a presidential election–was 78.1% in Minnesota. Compare that to the highest rate in the 2010 midterm election–56% in Maine–and you’ll see what I mean.)

Your audiences will influence your design

Another challenge early on was the tension between artistry and function. In an ideal world, the most beautiful thing is the clearest thing (an earlier post, “Should graphics be easy to understand?“, delves into this further). I remember reviewing the awesomeness behind Wes and his team’s early representations of the data. From my perspective as a designer, these were breathtakingly visual concepts that, to those who hung in there, served up beauty as well as clarity. But from a more pragmatic perspective, an analysis of our audience (policymakers and key influencers as well as members of the media and state election administration officials) revealed that comfort levels with more abstract forms of visualization were bound to be a mixed bag. Above all else, we needed to be clear and straightforward, getting to the data as quickly as possible.

Wes decided to do just that. “It’s funny,” he said. “We don’t often use bar graphs in our work. But in this case we asked, what’s the most basic way to do rankings? And we realized, it’s simple. You put things on top of one another. So what’s more basic than a bar chart?”

“We had to build trust–you can’t show sparkle balls flying across the screen to impress [your users]–you have to impress them with the data.”–Wesley Grubbs, Pitch Interactive

When I asked Wes how, at the time, he had felt about possibly letting go of some of the crazy creativity that led him to create the Google weapons/ammunitions graphic, he simply responded, “Well, yes, we do lots of cutting edge, wild and crazy stuff. In this case, however, a good developer is going to go where the data leads them. In addition, the audiences for this tool are journalists, academics, media–the range of tech-savviness is very broad. We had to build trust–you can’t show sparkle balls flying across the screen to impress them–you have to impress them with the data.”

Turn your challenges into an asset

When we brought up the oft-cited concern around vertical space (“How long do you expect people to scroll for 50 states, Wes?”, I remember asking) his approach was straightforward: “Let’s blow up the bar chart and make it an intentional use of vertical space. Let’s make the user scroll–build that into the design instead of trying to cram everything above the fold.”

I think it worked. This is a terrific example of visualization experts who, responsibly, put the data and the end users above all else. “We could have wound up with a beautiful visualization that only some of our audiences understood,” says Wes. “We opted to design something accessible to everyone.”

How did Pitch build the Elections Performance Index tool?

Primarily using D3, a JavaScript library that many developers now use for visualizations. It was not without its drawbacks, however. When I asked Wes about lessons learned, the first thing that he mentioned was the importance of understanding the impact of end-user technology on different programming languages. “D3 isn’t for everyone,” he notes. “Take a look at your users. What browsers are they using? The older stuff simply won’t work with many of the best tools of today. You have to scale back expectations at the beginning. The hardest part can be convincing organizations that the cutting-edge stuff requires modern technology and their users may not be in line with that. It’s all about the end user.”
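For readers who haven’t touched D3, here is a minimal sketch of the kind of ranked horizontal bar chart the tool is built around. This is not Pitch’s code, just an illustration using a recent version of the library; the element ID, the states and the scores are made up.

```typescript
import * as d3 from "d3";

// Made-up ranking data: a few states and an overall score between 0 and 1.
const rankings = [
  { state: "MN", score: 0.81 },
  { state: "WI", score: 0.76 },
  { state: "ME", score: 0.74 },
].sort((a, b) => b.score - a.score); // highest score on top

const width = 600;
const barHeight = 28;

// Linear scale: a score of 1 fills the full width of the chart.
const x = d3.scaleLinear().domain([0, 1]).range([0, width]);

const svg = d3
  .select("#chart") // assumes a <div id="chart"> exists on the page
  .append("svg")
  .attr("width", width)
  .attr("height", barHeight * rankings.length);

// One horizontal bar per state, stacked top to bottom in rank order, so a long
// list of states simply becomes an intentionally tall, scrollable chart.
svg
  .selectAll("rect")
  .data(rankings)
  .join("rect")
  .attr("y", (_d, i) => i * barHeight)
  .attr("height", barHeight - 6)
  .attr("width", (d) => x(d.score))
  .attr("fill", "steelblue");

// Label each bar with the state and its score.
svg
  .selectAll("text")
  .data(rankings)
  .join("text")
  .attr("x", 6)
  .attr("y", (_d, i) => i * barHeight + barHeight / 2)
  .attr("dominant-baseline", "middle")
  .attr("fill", "white")
  .text((d) => `${d.state}  ${(d.score * 100).toFixed(0)}`);
```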

Well, as an end user and a participant in the process, I’m pleased. I hope you’ll take the tool for a spin.

The “art” of compromise: Is there room for compromise in designing data graphics?

In my last post, I discussed how expectations and perceptions of designers are as important to quality data visualizations as are more conventional resources, such as time, people and money. But there is also a flip side to this–there are times when, as designers, we may be faced with a choice to compromise on how we present data. The compromises we agree to–or reject–are as important to our field as anything else. (Kudos to me for resisting the urge to title this “drawing the line in infographics.”)

A friend related a recent conversation in which an art director, presented with a bar graph of extreme values (very high and very low), asked the designer to “fudge” the size of the smaller bars. (They were visible–not hairline–but too small to comfortably fit the values inside of them. Presumably the art director wanted to nudge them up so that the numbers would fit inside the bars.) My initial reaction was er… not favorable. I felt like a mother bear protecting her cubs (the cubs, in this tortured analogy, are the data). I may have uttered a few choice words, even.

The ethics of compromise.

But, once I calmed myself down, it occurred to me that this might be something interesting to write about. I polled a few designer and non-designer friends. What do you think, I asked. Was this a bow to art or clarity? Was it an unintentional breach of ethics or a well-intentioned attempt to make information easier to understand? Was it goal-driven or just lack of creativity? Don’t jump on the art director just yet. This isn’t about the choice that person made (that’s the subject of another post). It simply reflects the reality that, as in other professions, we’ll all be asked to make choices that, to others, may appear to be inconsequential. We need to make sure we handle these choices intentionally and carefully.

Here’s what came to mind after my conversations with other designers.

Book-binding: an invisible art

Let’s think about the book-binding trade of back in the day. The men (mostly men, anyway) who bound books hundreds of years ago were tradesmen. They had a craft which they revered. They apprenticed and, as journeymen, they traveled from place to place, learning and honing their craft to become–eventually–book-binders. This is not unlike the path that many information designers take today.

For all the painstaking zeal and meticulousness put into the binding of the book, the end result was rarely if ever examined once produced. If the thing didn’t fall apart in your hands, you were satisfied as a consumer.

I won’t bore you with the mechanics, but suffice it to say that binding a book involved a lot of work, much of which was invisible to the eventual and subsequent owners. Once purchased, the book was read, perhaps the craftsmanship briefly admired, and then it was shelved or passed on, sometimes for generations (think of the family Bible). Not unlike the process of visualizing data, much of the effort and care involved–sewing pages into folios, hand-stitching the spine–remained largely unseen. If the thing didn’t fall apart in your hands, you were satisfied. End of story. And yet, despite this invisibility, these bookbinders pursued their craft with diligence and care nonetheless. How well or how poorly they plied their trade was not immediately evident, as these old books often outlived their makers. They had no immediate incentive to be especially diligent. And yet, I like to think that most of them did not cut corners. Why? I’d say it was self-respect and public recognition of the importance of their craft. Maybe I’m over-romanticizing books (I do collect them).

Our craft: Are we short-order cooks or visual content experts?

My point? This is an issue of the ethics of our craft. As designers, we need to ask ourselves: are we short-order cooks or visual content experts? Are we hacks or tradesmen/women? Is data visualization a craft or only a paycheck? Is data an obstacle to be overcome, or a living boundary that, with each challenge, offers us the opportunity to learn, to do better, and to empower our readers by surfacing information in a way that brings new understanding? And while, from the perspective of the client (or, in this case, the art director), it may not always be apparent that the accommodations they ask us to make are unwise, it is–nonetheless–our responsibility to do the right thing and bring others along. In this way, we advance the field, and our professionalism as well.

And that’s the crux of this post.

Whatever your intentions, what is the effect of the small compromises you make on how precisely, transparently and correctly you present data?

The more seasoned amongst you may shake your heads and think that these things are self-evident. But to those of you who are just starting out (be it as younger designers or managers in charge of new data viz projects), this may not be something you’ve thought much about. It may not even seem like much of a big deal to you.

Making those small compromises weighs on you, wears you down and–worse–makes the next compromise all the greater in scope and easier to bear.

What is the effect of compromise on the designer and the team?

So, what happens when a designer makes those compromises? When I asked a few designers, they all had one response in common: morale and self-esteem. Here’s the thing: making that one small edit will be invisible to everyone but you. It’s not like your readers will ask you to send them your Illustrator file so that they can measure pixels before they read further. Like the bookbinder who sewed thread onto page folios, no one but you will see the guts of your files. But making those small compromises weighs on you, wears you down and–worse–makes the next compromise all the greater in scope and easier to bear. And these things add up to the slow devolution of what was once a craftsman/woman (if I may be allowed to use such an archaic term) to a hack.

And what happens when an art director suggests those compromises? Well, you risk losing the respect of seasoned members of your team, that’s obvious. Worse, you risk creating an environment that is progressively sloppy. And while no one will catch the small compromises, they sure as hell will catch the big ones. Remember the infamous Fox pie chart?

Other examples of altering data

It doesn’t stop with information designers, as I’m sure you know. Another designer who Photoshops medical imagery (for example, a CT scan or slides of cancer cells) told me about a doctor who, when preparing images of slides for a research publication, asked the designer to darken some areas to make them more visible (thus allowing him to better make his case). The designer balked–these aren’t just pictures, he told the doctor–they’re data.

And if you want a more mainstream example, how about the furor over the Time cover of OJ Simpson in 1994? Or, more recently (2008), the Hillary Clinton ad which featured then-presidential candidate Obama with arguably darker skin?

What is unacceptable compromise to one might be reasonable accommodation to another.

There may not be room to make the wrong compromises, but there is always opportunity for discussion.

No one is perfect. And each of the examples that I gave leaves plenty of room for discussion. As a newspaper friend recently noted, some photographers are adamant about not retouching any photos they take–including not cutting subjects out of backgrounds. Others are not as rigid. And not all of the participants in my informal poll reacted with extreme horror at the thought of slightly lengthening bars. Some merely grimaced. But all agreed that if you’re going to tread on thin ice, you’d best be aware of it. Another friend points out a disconnect he has noticed between his former employer (a newspaper) and his current one (a corporation). He’s doing the same work–designing information graphics. But whilst former journalist colleagues (having their own code of ethics) would never have asked him to fudge the appearance of data, he feels that–in his current role as a designer in the corporate world–his colleagues have a lesser understanding and appreciation of what asking this might mean.

This isn’t necessarily a bad thing–handled correctly, it can present an opportunity for education. But you have to be willing to put yourself out there–a place that not everyone (perhaps less experienced designers or employees with less seniority) is comfortable occupying.

As designers, let us be keenly aware of how the small choices we make for ourselves can add up to large consequences for our profession. I’d love to hear more from you on this. Have you been placed in similar situations? How did you handle them?

Infographics: Does time equal quality?

Does time equal quality in good infographics? Nope, not necessarily. I’ve been giving this a lot of thought lately and, in reading recent posts by Seth Godin and Alberto Cairo, it’s interesting to see how each touches upon what I see as the pressures and attitudes that affect how well we design good information graphics.

In Mr. Godin’s case, he mentions what he calls “the attention paradox.” While he’s not specifically writing about design, his comments nonetheless aptly relate to the work designers do. The more marketers crave attention, the more willing they are to put out content that is good at reaching an audience and terrible at retaining it. Makes sense, right? In a time in which we’re increasingly consumed with tracking metrics and measuring success by the numbers, it is par for the course to get caught up in the rat race for the next big thing (big being determined by 30-second relevance and traffic for that day). Information graphics are no exception. And why should they be?

I recently mentioned that, because we’re all under pressure to create more and more content, “repurposing” content is seen as a good way to take advantage of the sweat equity put into other pieces (web articles, reports, data collection) and to convert that into an infographic. This pressure to produce can have real drawbacks–clients mistakenly assume that information can be quickly “designed” just because, in their estimation, the facts and the message have already been prescribed. Here, quality can suffer from lack of time. But the point that I was really getting at in my post–and unfortunately failed to articulate clearly–was the designer’s role.

When designers are treated as service desks and not content experts (“Here are the facts, here is the message, now please make this pretty. Call me when you’re done.”), you simply don’t get the best work.

Fortunately, Alberto Cairo, in “Empower your infographics, visualization, and data teams,” gets to the point. According to Mr. Cairo (and I agree), the real problem is the limited perception of the designer’s role. He mentions how, in newsrooms, graphic designers are often seen as “service desks.” This isn’t limited to newsrooms. In my own life, I occasionally get requests to design graphics “you know, like the New York Times” (yes, I really do). As Mr. Cairo points out, we all laud the New York Times and other large media outlets (one of my personal favorites is New Scientist) for their high-quality information graphics–pieces that can take months to make, with large teams of content producers and designers in place. I agree with Mr. Cairo that this might lead you to erroneously conclude that time and staffing (more people, more time) equal great work (bluntly, he says, “You can’t.”).

The solution lies, in part, in treating and using your designers as partners who help to shape content effectively.

So, what does this mean, exactly? Bring your designer into the room when you’re having editorial discussions about how to create content, before you’ve decided on what shape that content will take. Listen to your designers and expect them to offer up ideas about how to turn that into information design (be it static, motion or interactive).

Designers should read the content.

Expect your designer to read, read, read and understand. I ask my designers to read research reports before they create infographics or data visualizations. This may be a “duh” moment to some of you, but you’d be surprised how many people (including designers) don’t think of this or, worse, don’t see this as part of the designer’s role. How do you design what you don’t understand? How do you filter out the best parts of information and data without having reviewed the source?

And don’t micromanage the design. Leave them alone to create and use their expertise. Trust them, as content partners, to visualize not just the data, not just the facts, but the voice that carries the design.

I’m sure there’s more and would love to hear from you about what other recommendations you have.

Building good infographics part 1: Just because you can say it doesn’t mean you can show it.

Every few months, I receive a call or an e-mail asking me the same thing: I want to set up an in-house infographics team/process that spits out all the cool data we have sitting around on the cutting room floor. My response is usually the same: grab a cup of coffee, sit back, and be prepared to walk away with more questions than answers. Inevitably, at the end of an hour-long conversation, I hang up the phone and walk away thinking–”oh, I wish I’d said that.” So, this post is prompted by all the things I wish I had said, and all the things I wish I had known as I was starting out. Apparently I missed a lot, because this article is divided into three separate posts.

After all that work, there’s gotta be a good infographic in there somewhere.

There are often misperceptions about how “easy” infographics are to create–they’re thought of as a quick way to piece together data from long reports or collections of data that “seem” interesting or “seem” to drive home a specific point.

Many times, the client, marketing or communications perspective is derived (understandably) from a key message that the client wishes to drive home. We tweeted it, we facebooked it, we posted on google+… after all that work, there’s gotta be a good infographic in there somewhere.

As I’m sure you know, the devil is in the details and there’s no better place for him to stir up confusion than between a team of eager communicators and designers. To many, data visualization may seem like a relatively new type of product (the marriage of data, writing and technology), but the way to wrangle it is the old-fashioned way–good communication. Much of this article is about just that.

Are you equipped with the resources to produce and/or manage data visualizations? Do your expectations realistically align with your resources? Before you embark on your project, ask yourself this: what does information design mean to your team?

Do you have writers and editors who understand how differently people consume static or interactive graphics? Do you have someone who can understand the data? Do you have a designer experienced enough to use best practices (no pie charts showing 12 categories, please) when visualizing data, and to push back when necessary? If not, do you have someone who is seasoned enough to guide the designer effectively? As a designer, I can tell you that I’m embarrassed by my early (and ongoing) missteps as an information designer. And (to me) most important of all, do you have a good track record working as a team with your designers? A good track record isn’t as subjective as it seems. Can you build a good visual product? Are your stakeholders satisfied? Are your designers empowered to do their best work? Is your writing and design process flexible enough to be iterative but firm enough to avoid design by committee? The answers to those questions for past products can point to your success with new ones, such as information graphics, even if you haven’t yet been creating them as a team.

But we all have to start somewhere, and this post is as good a place as any.

Background before starting the process.

First, a bit of background for you before you bring your team together.

Audience, goals and outcome. Be aware of how your audience can (and will) shift as your graphic passes through different distribution channels (social media, blogs, more traditional marketing streams, etc.).

One of the first missteps (you’ll hear me mention this often) is to assume that the infographic is simply a larger or longer version of something you’ve already produced. It’s not. Who is the consumer of this piece? What types of graphics (interactive or static) do you think they typically read and pass on? Is there any content or style those graphics share? How does that information affect the tone and style of what you’d like to design (e.g., do you want to stand out or blend in)?

If you’re developing a graphic to promote a product, ask yourself how the consumers of your graphic may be different from those of the related products you are promoting. For example, if you create a video to persuade engineers to buy your widget, you may consider your target audience to be engineers. But if you create that video and an infographic to promote it, and one or both go viral (blogs, Facebook, etc.), your audience has broadened–and changed. So should your approach.

Data visualization: What you want to say is not always what you’re able to show.

But knowing your message and understanding how it will change for different mediums won’t help if your team assumes that you can easily “lift” some core headlines and “repurpose” a subset of the data into a new graphic.

What works for the goose doesn’t always work for the gander–and visualizing data is no exception. A long piece of content (say, a web article) can have the luxury of nuance and a more complex message. Carrying that into an infographic can be impossible. What sounds compelling in 500 words can take on an entirely new meaning when boiled down to a few headlines. When a series of graphs are woven together to support a key message in a longer piece of content, they do just that–support the message together. But in an infographic, where often neither the attention span nor the space is there (and with a potentially different audience) you necessarily need to pare down both your data and your story. And when you do that, sometimes you find that the two don’t complement each other as well as they did in other products.

And sometimes the answer is no: this information does not make a good infographic. There’s no magic to this discovery–it can happen in the beginning or later in the process. But one of the things that I like to do is to use this question as a checkpoint at each major step, or whenever I hit a roadblock–why is this happening?

Did we hit a roadblock that can be solved (people, process, content, data or design)? Or does this idea simply not support an infographic/interactive?

 

One of the smartest things you can do is to approach messaging and data as a new animal that must be reconceptualized from the beginning, and not make assumptions about its feasibility.

This helps you avoid assumptions that will lead you and your designers down the wrong path–affecting your deadlines, your creativity, your product and your stress level.

So, steps to designing an infographic or interactive? Start at the beginning.

1. Rethink your audience and your message, and confirm that your data supports them. Do your research–what are your competitors doing, and who are they reaching out to? How do their infographics differ from their other pieces of content? You don’t have to copy your competitors, but you can learn from them.

2. Next, be prepared to invite the team to a kick-off discussion to settle on audience, purpose and expectations.

3. Review your data and make sure it gels with the content. Confirm that the graphic or interactive is feasible. Start working with your designer to make initial sketches of the graphic, and to begin determining format (static, interactive, motion, etc.). More on all of this later.

4. Pitch the concept with specifics.

5. Iterate, iterate, iterate.

6. Begin design and execute the design/editing cycle. Publish.

7. Learn from your mistakes.

Part 1: Kick-off meeting

Bring donuts, coffee, and call a meeting. Get your designers, editors, writers, researchers, and marketers at the table (try to keep this lean, but not so lean that major influencers will be left out). If you’re working with a small team, consider yourself fortunate–you’ll likely avoid the pitfalls of the dreaded design by committee syndrome. Regardless, keep reading to learn more about things to consider discussing.

“Should we even do this?” Start by reiterating that the conversation will explore feasibility first, and that the questions you’ll be exploring will help you determine this.

Reality check: Should we even do this? Depending on the dynamics and size of your team, this is something you can tease out gently, or something you can start with up front. It can be the hardest thing to say, because sometimes all or most of the people involved assume an infographic is a done deal. They’re simply waiting for you to tell them how to get it done.

What are you creating, for whom and why? Discuss what you’d like to create, who it’s for, how you expect they’ll use it, and what they’ll likely want to hear (not what *you* want to tell them). A colleague once shared this with me (she uses it for her students) and I’ve been using it ever since:

(X product/project) is a (X description of project) that provides (X what) for (X audience) in order to (X value proposition).

Filled in with a made-up example: the widget production tracker is an interactive map that provides state-by-state production trends for plant managers in order to help them spot where the industry is growing or shrinking.

Talk a bit about the graphic’s relationship to other products (e.g., this is part of a package of [x]) and how the graphic will support that.

Discuss how the message, tone and style of the graphic differ (if at all) from other products, while reassuring stakeholders by tying those differences back to the overall product package or message. If the audience, their needs and their expectations (how they consume information through a graphic, how quickly people read and share on Facebook, etc.) differ from those of other products produced by the team, note how the graphic will address those needs. This (for me anyway) is a reliable way to acknowledge biases and preconceived notions while gently opening up the sky for more possibilities.

Once stakeholders get excited about a graphic, it can be easy for editors, writers and reviewers to get carried away with wordsmithing and micro-managing the designer (this is known as “design by committee”), and many designers would rather draw on hot coals than endure it. I’m kind of in the middle. You can’t always avoid it, but the more experienced you are, the better you can sidestep some of the pain.

Design by committee: If the stars align and you manage to hold on to your sense of humor and faith in the human race, you can turn the good intentions of micro-managers into useful feedback that is redirected to the appropriate stage of the design cycle.

Hopefully this article will help you avoid some of those pitfalls. As a designer who does a lot of hands-on design and as a manager who manages other designers and consultants, I’ve experienced this from many angles. It’s not easy no matter where you sit, but the better you handle expectations up front about reviewers, roles and design styles, the more your designer will be free to add value and expertise to the process.

Avoiding design by committee: questions to ask regarding review and feedback.

If you think, for whatever reason, that your team thinks they can design your product better than your designer, you’re in for a world of hurt unless you begin managing the process at the outset.

The designer’s role. Who is your designer? Do you trust them? Do you feel that they understand you, your message, your brand and your audience? Do you really, really see them as adding value and expertise that you don’t have–or, in your heart of hearts, do you secretly think: dang, if I knew how to use that funky Illustrator software, I could bang this out in an hour? I’m being serious here, folks. You really do need to assess the designer’s role and your perception of their skill set (and confirm your team feels similarly), because that’s where roles can break down. Fair or not, those perceptions are real. And if you think, for whatever reason, that your team thinks they can design your product better than your designer, you’re in for a world of hurt unless you begin managing the process at the outset. Good communication, respect for each person’s expertise and an understanding of roles do wonders to establish trust. Work hard to get there and you won’t be sorry.

An informed designer is a good designer.

And don’t forget to ensure that your designer is part of the larger conversations about the direction of the graphic–the more they know and hear at the outset, the better equipped they’ll be to do their best work when their time comes to design. Giving them multiple opportunities to learn the message, learn the data and ask questions will pay off in the end–a good designer is an informed designer.

Process and team roles. Who will be reviewing the graphic? Will they be sharing it with other teams or people? I can’t tell you how many times I’ve thought I had a design nailed down, only to have a reviewer come back with more edits or comments (often good ones) because they shared it with a friend, manager or colleague who wasn’t part of the process. Don’t get too grumpy about this–sometimes the outsider perspective can be invaluable–but ensure that it has a time and a place and that you’re aware of it. In other words, corral it up front.

What will each team member’s role be? A long time ago someone taught me the RACI model (Responsible, Accountable, Consulted, Informed). Since then, I’ve used this concept to map out the (sometimes) difficult task of determining the various roles that reviewers and influencers have in a project’s life cycle (typically a content outline, a few sketches, a few design drafts and one or two final versions if you do it right).

Try to determine how much influence and decision-making authority each reviewer has, and work to ensure that they’re aware of it. Determine who has approval authority and how you need to work with them through the design cycle. Make sure they understand the big picture. Know up front (go ahead, ask them) if they will be weighing in on specifics (colors, fonts, commas, pie charts). Yeah, I know–that’s design by committee and they shouldn’t do that. But (reality check) sometimes they do, and there’s not a damn thing you can do about it. If that’s the case (and hopefully you’re seasoned or lucky enough to know the difference), you’re going to have to do your best to manage it.

One person alone cannot, and should not, review and give feedback on everything. Group review assignments and delegate accordingly.

In my experience, infographic review comprises the following, usually iterated across a series of drafts that grow progressively more refined on each of the points below:

  • Big picture stuff: message, tone/style (brand) and claims
  • Visuals: major things (look, brand, fonts) and the details (are things aligned, are the graphics built well)
  • Headlines and subtext: How well do the headlines thread together? Are they coherent? Does your supporting text (“chatter”) read well? Are graphic headlines clear enough so that if the reader doesn’t look at the graphic, they understand the major findings?
  • Quality control: Commas, spelling, fact checking

Which team members review content? Findings? Design comps? Who handles fact checking or quality control? Who sees every draft versus only the major ones? Plan for this, and work as closely with each reviewer as needed. Give approvers clear touchpoints. Others you’ll simply consult (hey, what do you think of this?); they may have opinions, but you’re not bound to abide by them–only to consider them. And others you simply inform: they need to be told (timing milestones, etc.) but aren’t there to weigh in on design. Trust me if you don’t already know this. Hone your diplomacy skills and try to set these expectations up front.
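If it helps to write this down, here’s a minimal sketch in Python of how you might record a RACI matrix for the review areas above. The team roles and assignments are purely hypothetical–substitute your own.

```python
# A minimal, hypothetical RACI matrix for infographic review.
# R = Responsible (does the review), A = Accountable (owns sign-off),
# C = Consulted (asked for input), I = Informed (kept in the loop).
raci = {
    "big picture (message, tone, claims)": {"R": "editor", "A": "managing editor", "C": "researcher", "I": "marketing"},
    "visuals (look, brand, alignment)": {"R": "designer", "A": "art director", "C": "editor", "I": "writer"},
    "headlines and chatter": {"R": "writer", "A": "editor", "C": "designer", "I": "marketing"},
    "quality control (facts, commas)": {"R": "copy editor", "A": "editor", "C": "researcher", "I": "designer"},
}

def sign_off(area):
    """The people who review and approve drafts for a given area (R and A)."""
    roles = raci[area]
    return [roles["R"], roles["A"]]

for area in raci:
    print(f"{area}: reviewed by {', '.join(sign_off(area))}")
```

Even if it never leaves a notebook, writing the matrix down forces the “who approves what” conversation before the first draft starts circulating.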

Design and timing. Next, talk about design and timing expectations for the graphic. Discuss products related to the graphic and how (or whether) the graphic should visually tie in to them. Make it clear that a literal one-to-one match in colors, fonts and styles (depending on the product) is not always wise. Everything depends on the medium. For example, print uses fonts, space and composition differently than online content does. And interactives are designed differently than infographics.

And here’s another opportunity to set expectations up front. You want to discuss, at this point, the overall tone, look and feel (e.g., it should tie in to [x] brand/product, etc.) just enough so that later on, when the team is presented with a design, there are new things to show them but no major surprises (um, since when did we start using Comic Sans and magenta as a brand color?).

Build a rough schedule of the milestones. Allow for 1-3 rough drafts (sketches) and 2-3 (sometimes more) design drafts.

Think of the design cycle as an inverted pyramid. In the beginning, the number of reviewers will be many, as will the scope and quantity of edits and changes. In the end, it will be the reverse. Fewer (and more senior) reviewers and fewer changes that are smaller in scope. At the very end, you should have the designer and one person worrying about errant commas and moving a line or two by a few pixels. That’s about it.

That’s okay, because at this stage you’ll be working with documents meant to accommodate change–content outlines and rough sketches. This is exactly where changes should occur, where the level of effort to make them is lowest.

I can’t say this enough. I wish I had invented the concept:

As you move forward with more detailed sketches and, later, illustrated design concepts, the level of effort to make changes will be higher. Thus, the number of people reviewing should be smaller (and perhaps more senior in the decision-making process), and the number of edits and changes–as well as their scope–should be smaller too.
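If you like to see that progression spelled out, here’s a minimal sketch (the stage names and reviewer counts are hypothetical) of an inverted-pyramid review cycle, where the number of reviewers and the scope of acceptable changes shrink at each stage.

```python
from dataclasses import dataclass

@dataclass
class ReviewStage:
    name: str          # what the team is looking at
    reviewers: int     # how many people weigh in
    change_scope: str  # the kinds of edits still acceptable at this stage

# Hypothetical milestones and counts--adjust to your own team.
review_cycle = [
    ReviewStage("content outline", reviewers=8, change_scope="message, claims, data"),
    ReviewStage("rough sketches (1-3)", reviewers=6, change_scope="structure, chart choices"),
    ReviewStage("design drafts (2-3)", reviewers=3, change_scope="look, brand, headlines"),
    ReviewStage("final pass", reviewers=1, change_scope="commas, pixel-level nudges"),
]

for stage in review_cycle:
    print(f"{stage.name}: {stage.reviewers} reviewers -> {stage.change_scope}")
```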

Know when your data will be final, and plan accordingly. I can’t underscore this enough. Changing data can do just that–change. Everything. Your words, your scope, your design. Your sanity. Remember that awesome headline or tagline that rocked your world? Don’t get too attached to it if your data has changed. It’s a no-brainer, I know. Even though I’ve been doing this for a while, I can’t tell you how many times I get so excited by the design that I simply forget to nag the team about the data, only to learn that it has changed. And with it, the design concept that I was working so hard on. Life happens.

Plan for it and check in frequently with your team if you think this will be a possibility. For example, say you’re working on a story about widget use around the world. You review your data–awesome. Widget use is skyrocketing, according to data that you pulled for the past ten years. And last year’s data is coming out next week. So, in anticipation, you pull the team together, work up some sketches, move forward with designs, and leave a simple placeholder for the 2012 data that you know is coming. Then you receive the data and–widget use has leveled off. Why? Well, not only does that require some explaining, it also changes your story somewhat. You can prevent much of this by talking, up front, about what the data is and, if you’re expecting more, getting good intelligence on what those numbers are likely to show. If you work in a company where numbers are your bread and butter, you’ll likely be surrounded by professionals who already know this. But if you’re embarking on this for the first time, keep it in mind.

Time to move on to the second article, where you’ll learn how to bring together your data and your story into a solid sketch that you can later present.

A little bit of visual awesomeness from Visual.ly

On a weekly basis (if I’m lucky), one of the things I find myself most in need of is a common place to find real-life examples of the best practices we all try to follow. But talk is cheap, and a little bit of visual awesomeness goes a long way, so…

When Visual.ly announced its launch of a new social media platform for data viz designers, I danced my happy dance (perhaps prematurely, time will tell).

visual.ly - new social media platform for data viz

Why? Well, I don’t know how many of you often find yourselves swimming upstream and in the dark when it comes to sweet-talking clients out of ideas that you know are, em, well, sometimes just a wee bit unusual, not realistic, not good practice, a few branches short of a tree, etc. If you do, then you also know that, though these conversations can sometimes be rewarding, oftentimes they are not (all recipients of puzzled looks or polite silence followed by the inevitable request to “do it anyway” or “can’t you just…” raise your hands).

I’m hoping that this new platform will give us quick access to quality examples of information design–solutions that illustrate a specific direction or idea that we’re trying to pitch to our teams, stakeholders and clients. Often I find myself scrambling to create comps to better prove or show a point. Nothing wrong with that, but if there’s a place where I can follow knowledgeable designers and their work rather than wading through Google searches or sites that warehouse images, I’m all for it (though where would I be without my favorite beer graphic?).

The Visual.ly social media platform, coupled with the excellent blogs out there (ranging from good critiques on the visual.ly blog, to case studies and reality checks by chartsnthings, to the usual suspects like the Guardian and Flowing Data and many more), is a damn good thing, and I’m excited to see this take off.

If we use this tool wisely and well, does that mean no more animated 3D pie charts?