Mind the gap: Advocacy journalism fills a void

By now, it’s safe to say that the digital ecosystem is shaking things up for journalists. Traditional journalists are turning into brands (Ezra Klein at Vox and Nate Silver at FiveThirtyEight, to name two). Journalists are getting paid for clicks. Social media tools are creating a new breed of reporting through conflict journalism and citizen journalists—coverage that bleeds into news reporting and advocacy. And mission-driven social media sites (like Upworthy) are partnering with advocacy organizations to create serious, in-depth original content, as the Nieman Journalism Lab reported last month. Phew.

And now advocacy organizations are getting into the mix. They’re taking the reins by exposing, researching and writing about the issues they care about in a genre of journalism known as advocacy journalism. Advocacy journalism has been around for a while (remember muckraking?). But today’s digital landscape seems ripe for innovation by those who want to take the genre further.

A recent article by Dan Gillmor in Slate’s Future Tense project provides a thought-provoking and current look at the nexus of advocacy and journalism today, one that made me want to dig a little deeper into the subject to see where the field stands and what hurdles it faces.

Advocacy journalism is an interesting genre. On the one hand, it seems like a big deal—by injecting a point of view, it appears, at first blush, to upend the sacrosanct “objective reporting” model that is the foundation of traditional journalism. But in fact, today’s so-called traditional journalism is itself rife with points of view (reporters are human, after all, and routinely bring their personal perspectives to the questions they ask and the subjects they cover).

It’s no coincidence that, at the same time as advocacy journalism is getting more attention, investigative reporting in traditional media—the bread and butter of deep, immersive journalism—is diminishing due to shrinking newsroom budgets, capacity, and interest. (The American Journalism Review wrote about it in 2010, and things don’t look that much rosier if you read about revenues and ad dollars in Pew Research Center’s State of the News Media, 2014 report or the internet marketing firm Vocus’s 2014 State of the Media.)

So, resources for investigative reporting in traditional media may be diminishing, but the need itself certainly hasn’t. The immediacy of the internet and social media reporting makes the gaps left by traditional news organizations more visible than ever before. It has opened up the playing field for those who want, and need, to write about social change, and who see advocacy journalism as yet another tool for driving that change. It is here that advocacy organizations are stepping in.

Gillmor mentions the Upworthy partnership with Human Rights Watch, Climate Nexus and ProPublica, but he also reminds us of the work of the libertarian Cato Institute, and the ACLU, noting that these organizations are not just writing about their issues—they have invested in hiring talented investigative journalists to do the work.

One of my earlier posts this year discusses how advocacy organizations are harnessing social media to effect social change on their own terms (I wrote about the MIT Center for Civic Media’s study of the media coverage of the Trayvon Martin tragedy, and of how it was framed and defined in part by digital-savvy advocacy organizations). In the same way, advocacy organizations are equipping themselves with investigative journalists to define the things that need fixing in our society, again, on their own terms.

Transparency and bias concerns apply to all reporting, not just advocacy journalism

As with any form of journalism (see a post that I wrote about the importance of trusted messengers correctly reporting the facts), there are always legitimate concerns around the ability of the “reporter” to be transparent about the perspective and bias that he or she brings to a story, especially when money comes into the picture (for example, a journalist embedded in an advocacy organization writing about an issue that is driven by a funder). But one can easily make the argument that journalism has never been immune to this predicament. Media brands are, after all, owned by corporations—remember Michael Bloomberg’s takeover of his newsroom and Murdoch’s editorial biases? The issue is not so much that money is paying for journalism (it always has). Rather, the issue is one of transparency and fairness (something Gillmor acknowledges in his online book, Mediactive).

Most recently, advocacy journalism was roundly dismissed by Buzzfeed’s Editor-in-Chief Ben Smith. When Eric Hippeau, a venture capitalist (and early investor in Buzzfeed), sat on a panel at the Columbia School of Journalism and asked Smith about the fine line between different forms of journalism and advocacy, Smith responded, “Um, yeah, I hate advocacy. Partly because I think, you know, telling people to be outraged about something is the least useful thing in the world.” (The video is here, and a good article with more on Buzzfeed is here.)

That’s kind of ironic given Buzzfeed’s public missteps and its association with the Koch brothers on the issue of immigration reform. I’m not saying that the partnership is in and of itself a concern (Slate’s Dave Weigel described it as a “pro-immigration reform” panel that was very much in keeping with the Koch brothers’ longstanding interest in the issue). But the association is not one to be ignored, either, particularly from a man who claims to hate advocacy. I’m still coming around to the idea that “Buzzfeed” and “journalism” can be conjoined. I don’t say that to be snarky—I say that to mean that all lines are blurring, including newstainment sites like Buzzfeed that are reinventing themselves in the digital journalism mold, whatever that is.

Medialens has a good take on the back-and-forth skepticism around advocacy journalism (“All Journalism is ‘Advocacy Journalism’ ”) and offers some clear-eyed perspective by pointing to numerous examples of how ‘non’ advocacy journalism exhibits bias (ranging from uber-left Ira Glass’s omission of the U.S. role in Guatemalan genocide to Jeff Bezos’s 2013 purchase of the Washington Post alongside Amazon’s $600 million cloud-computing deal with the CIA—on the heels of its decision to stop hosting WikiLeaks in 2010).

Journalism is changing: traditional media gatekeepers are going away

As Gillmor points out (and as I’ve written previously), back in the day, traditional media were largely gatekeepers to reporting. If you were an advocate or an organization with a story and a point of view, you had to get a reporter’s interest and rely on that person to pitch it to an editor. To stand the best chance of success, you had to do the research, get the facts straight, frame the narrative, and package it up so that a reporter could understand it, pick it up, and pitch it. Those days are disappearing, and in their place is a new frontier of blurry gray lines of people and perspectives, all vying for a chance to shape the news agenda of the next hour. Investigative reporting is what gives all of us perspective, makes us take a collective deep breath, and think beyond that next hour.

It’s unsettling, but it’s also an opportunity to fill in the gaps left by the old guard, as long as we do it right. So, what’s right?

Doing it right: some things should never change

I recall reading (and tweeting) about Upworthy’s announcement when I read Nieman Lab’s post last month. Given that I work in a policy and advocacy organization that has a keen interest in seeing its point of view accurately and widely expressed in the media, I wondered how we could inject ourselves into a similar partnership. And, if we could, what would we say? How would we separate our social passion from the hard and complicated truths that spell out complex political realities? For me, it raised more questions than I could answer. But it’s tremendously exciting to see where others are going.

I’ll be curious to see how (or if) these partnerships help fill the void left by the diminished investment in investigative reporting in traditional newsrooms. And I’m also eager to see what new best practices emerge as a result. But regardless of how things change, the responsibility of transparency has never been greater. And all of these changes add up to the same principles that should never, ever change in journalism—report the facts, be clear and transparent about your point of view, and tell people where your money is coming from.

Dataviz is not a one-way street: fostering critical thinkers

Last year, I wrote two posts about the important editorial role that the designer plays in visualizing data (you can read them here and here). This week, Moritz Stefaner did a much more eloquent (and concise) job of underscoring the sensibility and the responsibility of the designer in crafting a data visualization.

But what I found particularly insightful about Mr. Stefaner’s post is his different characterization of what many of us (including me) typically describe as “telling the story” through data. He challenges that oft-used paradigm and instead offers a more compelling mode: the participatory, audience-driven cognitive experience that is the true power of data visualization. To me, this is the most compelling part: the power that data visualizations have to create a community of critical thinkers.

The story-telling model, according to Mr. Stefaner, is a one-way street that invokes a limiting, linear “start here, end here” dynamic, one that ignores the true opportunities that data visualization presents. Mr. Stefaner’s more aspirational definition has the reader exploring and creating his/her own experiences through the visualization.

In hindsight, it makes so much sense, right? It’s the interactivity of data visualization beyond sorting, filtering, reading and reporting. Rather, it’s a way to respect and nurture the intellectual curiosity of the reader, thus fostering a culture of critical thinkers who go beyond the more passive consumption of being “told” a story.

Mr. Stefaner tells us that this is his motivation for creating rich, immersive projects, ones that turn his audiences into “fellow travelers,” as he calls them. I absolutely love this characterization, though to me it is not without challenges. For example, there are times when I find myself lost in a data visualization that is too complex. Rather than stimulate my intellectual curiosity and propel me deeper into that visualization, I find myself frustrated that I’m being left behind by the author/designer, that I’m missing something important. This isn’t what Mr. Stefaner is suggesting, but it’s worth noting nonetheless.

This is one of the best things that I’ve read in a while, and one I’ll remember.

How are Legos like content? Writing in chunks.

As the mother of a 4-year-old, my life is surrounded by Legos. Daily, I watch my son tear down part of his monster lunar lander to quickly repurpose it as an alien observation tower with a parking garage for “the cars of the aliens.” The kid thinks in chunks, seeing his Legos not as specific blocks, but as cross-functional units ready to be quickly repurposed into other “stuff.”

It’s a good way to think about content, too—be it words or data. There’s no way every single one of your readers is going to read everything that you push out. And it’s likely that the things you write about have staying power well beyond the day you publish them, right?

But the minute you post your content, it is immediately competing against the news feeds of everything that comes after it. The social streams of Facebook, Twitter, and other social media, plus the news feeds of news aggregators have very short memories. People move on.

Chunks: flexibility in writing and repurposing for later

But if you think about and write your content in standalone, succinct “chunks” (going back to the Lego analogy here), you can use these chunks later. You can combine them and thread them together into an overview narrative, like this morning’s “Three Technology Revolutions” story by the Pew Research Center’s Internet Project. Notice how the main narrative in the story consists of just three short sections (each a summary paragraph followed by a large data graphic).

What’s interesting about this is not the data itself, but rather how, if you click the links inside each paragraph, you’ll notice that they link to older articles published last year. And each of those articles is largely concentrated on a data graphic with a small amount of text for context.

As you can see, using (and writing) content in “chunks” allows you lots of flexibility as a writer, and even more flexibility in repurposing your content. (Last year I wrote about taking a similar approach to building infographics.) It’s tough to do this.

Builds discipline

As both a writer and a data designer, I think writing and designing in chunks builds discipline. You have to prioritize. You have to write or show what matters. You have to always think about your audience’s needs and your responsibility to convey the right thing in a short format. That need is heightened because you don’t have the luxury of space or length. You have to know when to stop.

It’s harder to repurpose “chunks” if you don’t think about it upfront, but it’s possible. Here are two ways:

Plan it out that way

One way is to plan it out over time. Think about the longer narrative, split it up into standalone, succinct sections, release it over a period of time, then wrap it up with a broader narrative that allows you to create “new” content that links back to your old stuff, providing value and framing for the reader. Write and illustrate those chunks as pieces of content that stand on their own, and keep them short; that makes it easier to repurpose them later. Not all stories lend themselves to this approach. But even for longer-form content, this is a nice way to tease out the central messages if you’re trying to reach a broader audience.

Or… mine old content for new ideas

If you haven’t planned it out that way, it’s always possible (and a good idea) to look over what you’ve published over time and see if there are any natural patterns that can be threaded together into new content. This might be harder if it wasn’t part of your original approach. But if you’re struggling to produce valuable content, it’s a good idea to try.

And keep in mind that this stuff isn’t low-value content that you’re pushing out for the sake of traffic. Presumably, you thought it was relevant enough to publish the first time. Repurposing older content with a new, more current context (tied to something in the news cycle, for example) can be a good deal for your readers.

New isn’t always better. Relevant context is.

“New” isn’t always better. Sometimes packaging it up differently and concisely is a great way to get people to find something they may not have read the first time, and gives you the ability to publish “new” content. Or it gives you the ability to build a rockin’ alien observation tower with a parking garage.

Too good to question: Using data for good intentions

When I lived in a dodgy part of Washington, D.C. in the early 90s, I used to get my food from either the pizza joint six blocks down the street, or Dottie’s Liquor on the corner of the dilapidated English basement that I called home. My hours were irregular (hey, I was young and having fun), but I could always count on Dottie’s Liquor to furnish more than a six-pack. I could buy high-fat, high-sodium canned concoctions called “soup” for 99 cents, sugary fruit drinks, and the occasional yellowed roll of toilet paper that the elderly African American cashier would silently pull off the dusty top shelf that hung precariously behind the counter. I didn’t care much about my diet—I was a bike messenger—I could burn off anything. And I never noticed the young Latino and African American families that would crowd the aisles (it was a small store, it only took one family to do that), with kids in tow. It never occurred to me that this was their grocery store because back then, there were no other options within walking distance.

As I got older, I began hearing about “food deserts,” pockets in low-income neighborhoods where a paucity of fresh fruits and vegetables was the norm. And what little quality food there was cost a fortune. The media coverage would typically feature a few quotes from a researcher and perhaps a food advocate, along with a reasonable-sounding statistic in support.

That framing fit neatly into my personal narrative. I found myself in quick agreement when food activists decried the situation. I never questioned the statistics, either. And when policy makers joined with grassroots campaigns to turn advocacy into policy, I supported it with a sense of satisfaction—in my lifetime, things were changing. Move over Dottie’s Liquor. Farmer’s market produce, come on in. And then, earlier this week, an article on Slate argued that food deserts do not exist—that the claims were based on inaccurate interpretations of various research studies.

The psychology of data

The idea that by introducing healthy, fresh food one could measurably improve poor health outcomes in low-income populations seemed, not too good to be true, but rather too good to question. So, when Slate published their article questioning claims made about the existence of food deserts, I was surprised and disappointed.

And therein lies the psychology of data. When it proves something you agree with, how likely are you to question it? For a lay person, it’s a question of how well-informed we are. For a policy maker, the burden is much higher.

And the challenge we face, no matter how well informed we attempt to be as members of the general public, is that we are hostage to the facts that trusted messengers—among them, policy makers, journalists and advocates—put in front of us. (For a discussion of the designer’s role, read this previous post.) That’s a big responsibility for them, and the responsibility for us is to question them and hold them to it.

More important than debating whether or not food deserts truly exist is examining how the claim of food deserts came to be proven and then disputed. It allows us to walk through the evolution of an idea from the ground up (from advocates, to policy makers, and back to us, the public), and understand the role that data and data literacy play across the different actors.

And that’s what this post is about.

Let’s take a quick look at the Slate article and a few of the studies that it references. These studies examine food deserts through the lens of health outcomes, diet, and the availability and proximity of healthy food. According to Slate, healthy food initiatives (those aimed at reducing food deserts and, thus, disparities in the health outcomes of low-income populations) have risen sharply in the U.S., due to the largely successful efforts of food activists who lobbied for fresh, affordable food in poor neighborhoods. The charge has even been taken up by Michelle Obama.

How did food desert initiatives originate?

In Britain in the mid-90s, there were a few studies (note that Slate describes them as “preliminary”) that suggested that “a link might exist between distance to a grocery store and the diets of poor people.” Already you can see how easily a well-intentioned health advocate or policy maker could jump to the conclusion that a correlation exists between poor health outcomes and lack of access to fresh, affordable food from a local grocery store. And this is exactly what happened. The Slate article traces the history of the food desert movement. In a nutshell—a few studies in Britain in the 90s were followed by a Pennsylvania law in 2004 that funded fresh food programs, followed in quick succession by the adoption of similar programs in 22 U.S. states (to date), according to Slate.

But the data cited by advocates in these studies doesn’t entirely support that correlation. Here is a summary of a few studies that refute it (one of them written by the author of a study that is often misquoted).

A widely-cited study used to support the existence of food deserts is inconclusive

The Journal of the American Medical Association’s (JAMA) 2011 study, “Fast Food Restaurants and Food Stores: Longitudinal Associations With Diet in Young to Middle-aged Adults: The CARDIA Study,” examined 15 years of longitudinal data (repeated observations over a period of time) from a cohort (group) of 18- to 30-year-olds in the U.S.

Researchers analyzed how often individuals ate fast food, how much of it they ate, the quality of their diet, and how many fruits and vegetables they ate, as well as the availability of fast food restaurants and supermarket grocery stores (measured at different distances). You can read the study for yourself—but it concluded that the evidence showing a correlation between bad food resources and poor diet and obesity is mixed, at best.

“Neighborhood supermarket and grocery store availability were generally unrelated to diet quality and adherence to fruit and vegetable recommendations, with similar associations across income levels.”

So as you can see, the conclusions from the JAMA study didn’t quite square with how they were being used by policy makers—other factors were at play. Low-income men were more apt to consume nearby fast food (and, conversely, did have a better diet when there were supermarkets nearby), but the results for low-income women were not statistically significant. Middle-income individuals showed varied significance (described by the researchers as “weak” and “inconsistent with significant counterintuitive associations in high-income respondents”).

Tensions between the aspirations of social change and the reality of evidence-based research

An essay in the Journal of Epidemiology and Community Health, “Good intentions and received wisdom are not enough,” features a powerful (and damning) indictment of the touchy dynamic between the pressures of social change and the research that underscores it. From the authors:

“There is a common view amongst social and public health scientists that there is an evidence-based medicine juggernaut, a powerful, naive, and overweening attempt to impose an inappropriate narrow and medical model of experimentation onto a complex social world.”

The essay pointedly calls out the resistance (“hostility”) of social scientists, health policy makers and advocates to attempts by researchers to use the evidence-based approach traditionally used in medicine, but not public policy (systematic reviews of data or experimental designs, for example). Why? The authors of the essay claim that social change advocates view the real world as too messy, a far cry from the controlled environment of academic and medical research. This applies, the authors note, particularly to what I’ll describe as the social issues of the day—issues where good intentions and raw emotions are at the surface as well-intentioned advocates and policy makers attempt to use data to alleviate the very real and valid human suffering that is so visible to all of us. Read it here.

Assertions quickly become facts in the public sphere

The introduction to “‘Food deserts’—evidence and assumption in health policy making,” by Steven Cummins and Sally Macintyre (British Medical Journal), is worth quoting word for word:

“Assertions can be reported so often that they are considered true (“factoids”). They may sometimes even be used to determine health policy when empirical information is lacking.”

It’s telling that this was written in 2002, approximately two years before the elimination of food deserts became a part of American public policy.

The paper attempts to track the rise of the food desert assertion in the UK. It points to three main UK studies that were frequently cited by advocates and policy makers (two are noted above) and systematically dismantles what it characterizes as erroneous assertions by advocates to correlate food deserts with poor health outcomes. How? You can read about it for yourself, but here’s one example.

The study found that, though healthier food costs more than unhealthy food, both healthy and unhealthy foods actually cost less in low-income areas than in affluent ones. Advocates, however, routinely cited the study but claimed simply that good food cost more than bad food in poor neighborhoods. The nuance here is an important one, and the authors point out that it was never made.

The authors also discuss a different study cited by advocates that is likewise not as conclusive as widely reported—the study shows that small grocery stores have more expensive food and a narrower range of options, but it doesn’t compare how this plays out by income distribution (low- versus high-income neighborhoods).

Lastly, the authors refer to a 1992 study (also frequently cited) that compared the cost and availability of a basket of healthy versus unhealthy foods in poor and more affluent neighborhoods. The study (ironically, also published by Macintyre) was only ever intended as a pilot study and didn’t use random sampling, significance tests, or the other statistical methods a more robust study would have used. Macintyre herself points out that it was widely (and wrongly) cited across the UK and America as evidence of food deserts.

I’ll leave you with another quote by Macintyre:

“If the social climate is right, facts about the social world can be assumed and hence used as the basis for health policy in the absence of much empirical information.”

That pretty much sums it up.

In fairness, these studies also raise many questions. Who are the authors, how are they funded, and how legitimate are the claims they themselves make? But the questions posed by the authors of these studies at least merit a closer examination of the relationship between data and policy.

Implications for social change advocates and public policy

What are the implications for those of us who care about social and public policy?

Not being critical thinkers and examiners of data puts our credibility on the line in the arena of public perception. It arms our opponents with legitimate counter-criticism to our views.

It can distract us from other, more viable paths to social change that truly can be substantiated and measured. And it obscures the broader, and equally important, good intentions behind our convictions. In this case, for low-income people who disproportionately suffer from poor health outcomes, what are the contributing factors that have been credibly examined (long hours working several jobs, the stress and worry that accompany poverty, or the lack of education about what constitutes good health habits)? That’s where public policy can be directed.

Valuing proper research, taking the time to understand it, and respecting its limitations strengthens our arguments

It’s tough for me to write this post. I’m Hispanic and I have spent my entire career in the advocacy and public policy field. This is very much my world and I see every day how hard my friends and professional colleagues toil to right the wrongs that society allows. The passion, integrity and commitment that advocates and policy makers bring to their work cannot be overstated. And that’s why I write this, because valuing proper research, taking the time to understand it, and respecting its limitations only makes our positions stronger.

In an earlier post, I wrote about how lack of data literacy can put social change organizations behind the curve in advancing their goals. In this case, it can do the same to good intentions, and good outcomes.

But let me conclude by saying that just because the data may not support the public narrative of food deserts, that doesn’t mean that it’s okay for poor people to eat bad food. That’s a patently unfair situation for those who live in poverty. There are many benefits to eating fresh, affordable fruits and vegetables. I make that assumption from what I read in mostly reputable news sources. I further assume that avoiding high-fat food that delivers scant nutrition for the money is good for other reasons. At least, I want to believe that. But as good as that sounds to me, perhaps I should do a little digging to substantiate my convictions.

Mapping data on the influence of traditional and digital media for social change

The shooting death of Trayvon Martin, the 17-year-old black Florida high school student who was killed by George Zimmerman on February 26, 2012, spurred one of the most widely reported, painful and controversial public conversations on race and social justice in recent memory. The story started as a local news piece, and quickly morphed into a national debate in newspapers and radio stations; on YouTube, Twitter, Facebook, Reddit and other social media channels; on front stoops, in office cubicles, and at kitchen tables; across marches, rallies and demonstrations; and through online petitions and campaigns. These events coalesced and influenced the actions of news organizations, citizens, politicians and thought leaders in a very public way. This offline/online “networked” public discourse was a far cry from the analog (print, radio) news model of the past.

Understanding how information and news networks relate and influence one another helps you decide where to take your message, and to thus influence and help set the agenda for public debate. This is where today’s social change organizations will succeed or fail in their efforts to remain relevant and effective agents of change.

“The Battle for Trayvon Martin: Mapping a Media Controversy Online and Off-line” analyzes, piece by piece, each facet of the intersection between the offline and online reactions, advocacy, citizen journalism and organized media coverage of the Trayvon Martin news event. It takes us to the very epicenter of the intersection between media coverage and online and offline activism, at both a personal and a grassroots level, and examines the results through the lens of public discourse. This pioneering February 2014 study was authored by Erhardt Graeff, Matt Stempeck and Ethan Zuckerman of the MIT Center for Civic Media. The goal of the study is to analyze the evolution of the Trayvon Martin story and to understand the role that activists played in how the story unfolded across offline and online media channels.

Using data to quantify influence in public discourse

To the best of my knowledge, the authors are doing something that no one has done before for traditional and digital media (the methodology will give you a headache, in a good way)—they attempt to quantify and measure far beyond the “clicks” on articles that many of us traditionally use to measure engagement and, from that, to glean our influence over the message (I know I’m oversimplifying, but not by much).

Rather, they map the spread and cross-pollination of those ideas across all media (offline and online, traditional and participatory) and make correlations around consumption (who is clicking) and engagement (what they do and share afterwards), tracking it all back to the message (how all of this affects how analog and digital news outlets cover the issue). It’s a fascinating cycle, and one that any organization interested in shaping public opinion and effecting social change would be well served to learn.

This post attempts to translate the findings of the study into takeaways that organizations who focus on social change can use to better understand the correlation between traditional, digital and social media today.

First, let’s take a look at one of the most helpful parts of the study—an analysis of the journalism ecosystem of today.

Yesterday’s traditional news gatekeepers are gone—replaced by “the networked public sphere.”

To be effective, social change organizations need to understand how to work and communicate in what the study defines as the “ecosystem” of news and information today.

I think of this in broader terms; to me it’s more of an information ecosystem. Regardless, it is not the top-down gatekeeper model from the days of print news (the managing editor, the reporter, and you) in which you cultivated a personal relationship with a network of journalists to pitch your story. Don’t get me wrong, that world still exists. But it has expanded so much that if you don’t understand where else others are engaging, you’ll be talking to an empty room, albeit a virtual one.

The study underscores this by helpfully describing the new world of media as an ecosystem rather than an environment. The distinction may be lost on some of us, but the definition the authors present is clear.

“[Today's media ecosystem] is not monolithic or hierarchical—[rather] dynamic networks of media linked together by transmedia audiences [those who hop from one media and social platform to another—my take] coalescing around particular stories at particular times, [following] literal hyperlinks [to seek] the most influential source at a given moment.”

So what comprises digital media today? The study emphasizes both professional content (journalists) and amateur content (“citizen journalist” bloggers, for example). Add to this everyone else: those who write 50-word posts on Facebook that get shared, tweeted and discussed; 140-character takes on Twitter; Instagram photos and opinions; discussions on Reddit; and so on. The authors describe this as “the networked public sphere.” And it’s a big universe with lots of moving parts. If you’re trying to control it, give up (that’s yesterday’s model). If you’re trying to be a smart influencer, read on.

The traditional gate-keeper role of the media has been upended by the democratization of information, which gives social change organizations the opportunity to seize and set the agenda of public discourse.

What’s cool about this networked public sphere model, and critical for social change organizations to understand, is that it presents unprecedented opportunities for these organizations to actually set the agenda for public discourse. As noted above, the traditional gatekeeper role of the media has been upended (to a degree) by the democratization of information. If social change organizations (and, more importantly, the individuals who serve as their advocates and ambassadors) choose to engage in digital media (carrying out conversations and sharing information on Twitter and Facebook, cultivating relationships and content with bloggers, and so on), their message becomes the news, and they get to frame it.

Use social media effectively and your message becomes the news—you get to frame the debate.

The study references recent media research around the revolution in Egypt (2011), and likens the Trayvon Martin story to that revolution in terms of how it played out across all media and public dialogue. For example, the authors cite how Twitter’s #egypt hashtag reflected a blend of both personal political expression and a more conventional media push around a central message. To me, the Twitter conversation represents a hybrid of these formerly distant messaging cousins (the individual and the media outlet).

Think of it this way: Twitter users pushed out their own message about the revolution framed in a way that expressed their common sentiment, then the more “authoritative” (traditional) media outlets began reporting on that “framed” message, and that particular framing was—in turn—disseminated even further by the readers of those outlets. This is one way in which social media is influencing how even traditional news media are shaping and forming the message behind a story.

Data tools used to track and analyze coverage

To trace the path, evolution and influence of the Trayvon Martin story, the authors use Media Cloud and Controversy Mapper (two tools, by the way, that the authors developed in conjunction with Harvard’s Berkman Center for Internet and Society). This is good stuff (case in point: Controversy Mapper’s data visualization on SOPA/PIPA). Imagine being able to analyze not just what the media is covering but how (the message, the interpretation, the framing, the influence) in a rigorous, quantitative way. Well, they did that.

Media Cloud collects articles from more than 27,000 mainstream media outlets and blogs, and tracks the links mentioned in those sources to explore the coverage even further. Archive.org’s TV News Archive helped the authors analyze broadcast TV (they mined its transcripts). On the digital side, the authors also used Google Trends to analyze searches, General Sentiment to track tweets and hashtags, and the URL-shortening and tracking tool bit.ly.
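Media Cloud’s actual pipeline is far more sophisticated, but its core mechanic (take an article’s HTML, pull out the outbound links, and queue them to trace coverage one hop further) can be sketched in a few lines of Python. The HTML snippet and function names below are my own illustration, not Media Cloud code:

```python
# A minimal sketch of link-following: parse an article's HTML and collect
# every outbound hyperlink so the next hop of coverage can be explored.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag encountered in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def outbound_links(article_html):
    """Return the list of URLs an article links to."""
    parser = LinkExtractor()
    parser.feed(article_html)
    return parser.links

# Toy article: one hop of link-following.
html = ('<p>See <a href="http://example.com/report">the report</a> and '
        '<a href="http://example.org/petition">the petition</a>.</p>')
to_visit = outbound_links(html)
```

A real crawler would also fetch each discovered URL, deduplicate, and respect robots.txt; the sketch only shows the extraction step.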

Data that the authors examined

First, the broader media coverage: social media and professional news outlets. The authors examined the number of times the story was referenced, tweets and hashtags, TV coverage, Google searches for the subject, placement of coverage in national papers (e.g., front-page stories indicating editorial prioritization), and the public’s online activism (for example, a petition on Change.org). The methodology and data collection were far more involved than my crude summary suggests. Because the goal of this post is to translate the study for a more general audience, I don’t do the methodology much justice. It merits a closer read.

Trayvon Martin

Let’s shift to the Trayvon Martin story. As you know, it unfolded offline initially. It was local, hyperlocal at first, and narrowly framed: “a fight between two people in an area known for occasional violence” that, as the authors describe it, “stood little chance of attracting significant media attention.”

An initial amount of national media coverage gets returns

Why, then, didn’t the story die? The difference was the immediate and unrelenting efforts of the Martin family to share their story. They quickly retained Benjamin Crump, a pro bono civil rights attorney (interestingly, one who, according to the study’s authors, ascribed the failings of a previous pro bono effort in part to an inadequate media publicity strategy). Crump brought on a local attorney who, in turn, recruited a pro bono publicist (Ryan Julison). Julison secured coverage from two national media outlets, which later snowballed into other national media.

From a fraternity listserv to a national online petition: Leveraging online activism yields big results

According to the study, the story (spurred by the initial limited national coverage) was mentioned on a Howard University listserv. A Howard law grad got involved and launched a Change.org petition. His rationale was the lack of national coverage. He emailed his petition to other students at the university. Yep, email is how this got started.

More national coverage, social organizations step in, and Change.org becomes “an early leader” in media attention

The Huffington Post, Global Grind (a self-described multi-racial news and lifestyle website), and activist organizations (ColorOfChange and the Black Youth Project) began covering the story; the authors describe them as “early amplifiers.” As a result, the Change.org petition began picking up speed, growing from 217 signatures on day one (March 8) to over 30,000 signatures five days later (March 13).

Change.org attracts celebrities, and even more attention

Something interesting happened on the sixth day after the petition was launched. A Change.org employee asked a target group of celebrities who he thought would be sympathetic to the cause (Mia Farrow and Spike Lee, to name a few) to share the petition with their fans. They were interested, and they did share, to the tune of over 80,000 signatures a few days later (a 900% increase in signatures over the course of three days, according to the authors of the study).

The shift to mainstream media as the news authority on the story

The pattern until March 17 (when the publicist released the 911 tapes to the public and the media) was as follows: a low-profile, hyperlocal news story; narrow coverage on a national level that spurred a rapid rise of personal and social activism; and, in turn, high-profile attention from celebrities and a resulting increase in national coverage.

When the pro bono civil rights attorney (Benjamin Crump) released the 911 call to the media, coverage—particularly in mainstream broadcast radio and TV—predictably mushroomed. The authors of the study specifically point out that the audio nature of the 911 call may have made it more appealing for radio and TV to cover.

But social change and race-based organizations and celebrities continue the momentum

Reddit’s /r/blackculture subreddit featured the Change.org petition, and Reverend Al Sharpton’s involvement continued the publicity. By now, civil rights and political leaders all over the country were taking up the charge through political demonstrations and rallies. The authors cite the Million Hoodie March in New York (spearheaded by a digital strategist) as a catalyst for more coverage. Interestingly, the authors point out that larger media outlets didn’t feature the story on their front pages until after the march, and they posit that the actuality of the march made for an easier story to cover.

News hooks in traditional media need “real” events

There’s an interesting pattern here of mainstream media not covering the Martin story until something “real” happens (the authors describe these as “actualities”). Note how radio and TV began covering the story after an audio recording was released, and front page newspaper coverage began after an actual march took place. After Zimmerman was finally taken into custody (another “real” event) six weeks after the shooting, newspaper coverage peaked.

And then, of course, the President’s March 23 statement (“If I had a son, he’d look like Trayvon”) brought all news coverage to its peak.

Who influences how a message is framed by national media outlets?

Let me answer that simply—it’s not the media outlets. Nowadays, the spin on a story often takes place outside of national media news sources. Frequently, by the time they report on something, they’re simply capturing what has already happened.

So if you want to influence how a major news outlet writes a story, your message can begin in social and digital media, and with your activists and ambassadors.

Let’s look at how the conservative movement was able to influence the debate. The study cites how one notable conservative blogger (Dan Linehan, of the Wagist blog) claimed that Trayvon was a drug dealer. As you would expect, this message was spread and picked up by like-minded right-leaning blogs, and eventually did make its way to mainstream media (the Miami Herald), where it was amplified. So, regardless of the accuracy of the claim (and it was not credible), right-wing bloggers became effective ambassadors to mainstream media.

The study’s authors actually cite research that shows that repeating a myth in order to deny its credibility may have the opposite effect.

“Research has shown that restating a myth in order to negate it can actually produce familiarity and thereby help further propagate the misinformation.”

This has strong implications for social change organizations of all stripes—the public debate is often played out as a series of narratives that are alternately supported and refuted by proponents and opponents, respectively.

Two graphics show the networks of media that mentioned “marijuana” (figure 8 in the study) and “drug dealer” (figure 9) during this period (notice how prominent the right-wing Wagist bubble is in both). The large size of the Miami Herald bubble signals a high frequency of mentions of the word “marijuana” in coverage of the story, as does the similarly large Wall Street Journal bubble for “drug dealer.”

Figure 8: Network of interlinked media mentioning ‘marijuana,’ as taken from the authors’ study

Figure 9: Network of interlinked media mentioning ‘drug dealer,’ as taken from the authors’ study
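As a rough mental model of how such network graphics get built (the outlets, texts and links below are entirely made up, and the study’s actual pipeline is far richer), one can count term mentions per outlet to size each bubble and record hyperlinks between outlets as edges:

```python
# Toy illustration: nodes are outlets, edges are hyperlinks between their
# stories, and node size reflects how often an outlet mentions a term.
from collections import Counter

# Hypothetical corpus: (outlet, article text, outlets the article links to).
articles = [
    ("wagist",        "trayvon marijuana marijuana claim", ["miami-herald"]),
    ("miami-herald",  "marijuana report",                  []),
    ("thinkprogress", "rebutting the marijuana claim",     ["wagist"]),
]

term = "marijuana"
node_size = Counter()   # mention frequency per outlet -> bubble size
edges = set()           # who links to whom

for outlet, text, outlinks in articles:
    node_size[outlet] += text.split().count(term)
    for target in outlinks:
        edges.add((outlet, target))

# The most-mentioning outlet gets the biggest bubble in the rendered graph.
biggest = node_size.most_common(1)[0][0]
```

A rendering layer (the study used Media Cloud’s visualization tooling) would then lay out the nodes and draw edge arrows; the counting above is only the data-preparation step.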

How opponents can inadvertently strengthen your messaging goals

What’s interesting is how left-wing blogs and organizations joined the fray and, by refuting the right-wing claims, nonetheless kept the negative framing in the limelight, as evidenced by the sizable bubbles representing ThinkProgress.org, for example. The authors’ conclusion:

“This suggests a strategy for reframing a story—if an activist is able to gain mainstream coverage for [framing a message a certain way], opponents are likely to respond, [thus] perpetuating a debate that features the desired framing [of the activist].”

Remember, these two graphs reflect the prominence of the association between Trayvon Martin and the words “drug dealer” and “marijuana,” an association his supporters deemed undesirable. It all started with a right-leaning blogger, was perpetuated by those who countered the claim, and was eventually widely covered (by papers ranging from the Wall Street Journal to the New York Times).

Piggybacking to a related cause: Stand Your Ground Laws under attack

The authors describe how an organization with a different focus, the left-leaning Center for Media and Democracy (CMD), injected into the debate its concerns about the influence of the American Legislative Exchange Council (ALEC), a conservative lobbying organization and outspoken proponent of the “stand your ground” laws used in Zimmerman’s defense after the Trayvon Martin shooting. CMD had launched a campaign against ALEC prior to the shooting but used the controversy to strengthen it. Like-minded progressive organizations created a cascading effect as they piggybacked on CMD’s research to pressure corporations to withdraw ALEC funding. Eventually, even Paul Krugman of the New York Times wrote an op-ed (March 25) about Trayvon Martin and stand-your-ground laws, and Change.org followed suit with many petitions to dismantle them. According to the authors of the media-mapping study, on April 17 ALEC terminated its controversial task force on those laws.

Correlation between digital and traditional media coverage and reader engagement: All news sources tend to cover issues even after reader interest wanes

The study’s findings show that all media sources (traditional and digital) are roughly correlated: when one was covering the story, so were the others. This extends to news articles, TV coverage, searches, petition signatures, and clicks on links (via bit.ly) to this coverage. The conversation on Twitter appeared to peter out after a while; the authors speculate that this was either because campaigns had used Twitter early on or simply because social media may be quicker than other mediums to move from one story to the next. Overall, however, the “tail” of news coverage extended beyond actual reader engagement (sharing, clicking on links to articles, etc.), which the authors believe may indicate that readers simply lost interest even as the media continued coverage.
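To make “roughly correlated” concrete, here is a minimal sketch of the kind of comparison involved: two coverage time series and their Pearson correlation. The daily story counts are invented for illustration; the study’s data and methods are far more elaborate.

```python
# Pearson's r between two hypothetical daily-coverage series: values near 1.0
# mean the two channels rise and fall together, as the study found.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up daily story counts: TV segments vs. online articles over one week,
# both peaking mid-week and tapering off.
tv_stories  = [2, 5, 9, 40, 25, 12, 6]
web_stories = [10, 30, 45, 200, 150, 70, 33]

r = pearson(tv_stories, web_stories)  # close to 1.0: the channels move together
```

The same calculation run against, say, tweet volume versus newspaper coverage late in the story’s life would show the divergence the authors describe, with Twitter dropping off while print coverage continued.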

Conclusions:

1. Broadcast media still matters.

Broadcast media amplifies (spreads the story through coverage) and serves as a gatekeeper (what it chooses not to cover has a harder time getting out into the public debate, and how it frames what it does cover tends to stick). But activists who use other media channels and platforms (petitions, social media, blogging, leveraging like-minded organization and allies, personal networks) are now solidly influencing how the message is shaped and formed (framing).

2. Social change organizations can spin traditional media for their own purposes.

Even though broadcast media still serve as strong gatekeepers to what does/doesn’t get covered and how it is framed, smart organizations leverage existing coverage to inform their supporters, piggyback off the coverage to mobilize their allies, and spin it (reframing) to meet their own messaging goals. And, from a messaging perspective, it’s promising, as evidenced by how successfully many Trayvon Martin proponents were able to shift the media narrative (the outcome of the trial is another matter).

3. The blogosphere covers issues long after broadcast media coverage peaks.

Smart organizations know this, and court bloggers accordingly, understanding what motivates them to write and when. So understanding who is blogging (or has the potential to blog) about your issues and cultivating those relationships is key.

4. News outlets today are increasingly likely to squeeze the maximum out of their investment of time and journalists in a story.

News outlets will cover a story even after readers have disengaged. Don’t get too excited. This has not been covered in a flattering light (see McJournalism).

5. Social media can create related micro-stories from broader events.

These micro-stories then become news events in themselves and create a longer tail for the original story (the Million Hoodie March, for example).

6. Social media can side-step traditional media gatekeeping functions if you have good content.

Some social media platforms that are particularly well-suited to a specific type of content (YouTube or Facebook for video sharing, for example) quite powerfully and effectively side-step traditional media’s gatekeeper role, and thus are demonstrably able to shape public opinion. Organizations that know how to create relevant content for these and other platforms can get their message across in huge ways.

7. Social media is so much more than spreading the word.

Because it is so heavily reliant on personal interpretation (one person sharing his or her opinion about a news event, in addition to simply sharing news of the event itself), social media is a powerful force in shaping the message and framing—and the public perception—about an event.

8. Deviant discourse: Social media upends the traditional notion that mainstream media are, indeed, the gatekeepers for news content and opinion

This has its downsides. In the past, gatekeeper news organizations simply wouldn’t cover extreme views that represented a small minority of public debate. Today, if enough people talk about something, it does indeed become mainstream news (the authors point to the widespread coverage of claims about Obama’s citizenship as a case in point). The authors explain this “deviant discourse,” as they put it, brilliantly, and it’s worth quoting here:

“Our work suggests a mechanism through which social media users introduce potentially deviant frames into the mainstream: they harness ideas to a high attention story already underway and attempt to direct the attention generated by the story towards their interpretations and views.”

9. Use finding #8 (above) for good, and not for evil, okay?

(My opinion; not the authors’.)

Hope you enjoyed this post. Mad props to the geniuses at the MIT Center for Civic Media for this incredibly data-rich study. Mindblowing stuff.

Case study: creating a 50-state data visualization on elections administration

Ever wonder how well states are running their elections systems? Want to know which state rejects the highest number of absentee ballots? Or which state has the shortest voting wait times? And which state has the highest rate of disability- or illness-related voting problems?

A new interactive elections tool by The Pew Charitable Trusts (the Elections Performance Index) sheds some light on many of the issues that affect how well states administer the process of ensuring that their citizens can vote and have those votes counted. Measuring these and other indicators (17 in all, count ‘em), Pew’s elections geeks (I was a part of the team) partnered with Pitch Interactive to develop a first-of-its-kind tool to see how states fare. Today’s post is a quick take on how the project was created from a data visualization perspective.

Pew’s latest elections interactive: The Elections Performance Index

Lots of data here, folks: 50 states (and the District), two elections (the 2008 presidential and the 2010 midterm) and 17 ways to measure performance. Add to that the ability to let viewers make their own judgments. There is an overall score, for sure, but the beauty of this tool is that it allows users to slice and dice the data along some or all indicators, years and states to create custom views and rankings of the data.
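The mechanics behind that slicing can be sketched roughly as follows. This is my guess at the general approach, not Pitch’s actual algorithm, and the states, indicator names and scores below are invented:

```python
# Re-rank states using only the indicators a user selects, by averaging each
# state's per-indicator scores (hypothetical data, scaled 0-100).
scores = {
    "MN": {"turnout": 95, "wait_time": 80, "rejected_absentee": 70},
    "ME": {"turnout": 88, "wait_time": 90, "rejected_absentee": 60},
    "TX": {"turnout": 55, "wait_time": 40, "rejected_absentee": 75},
}

def rank_states(scores, selected):
    """Rank states, best first, by the mean of the user-selected indicators."""
    composite = {
        state: sum(vals[i] for i in selected) / len(selected)
        for state, vals in scores.items()
    }
    return sorted(composite, key=composite.get, reverse=True)

# Different slices of the same data produce different rankings.
all_three = rank_states(scores, ["turnout", "wait_time", "rejected_absentee"])
just_wait = rank_states(scores, ["wait_time"])
```

The real index also has to handle indicators that only exist for one election cycle and scales that run in opposite directions (a high rejection rate is bad, a high turnout is good), which is part of what made the algorithm a moving target.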

You might already know about Pitch Interactive. They’re the developers who created the remarkably cool and techy interactive that tracks government-sponsored weapons/ammunition transactions for Google’s Chrome workshop (view this in Chrome) as well as static graphics like Popular Science’s Evolution of Innovation and Wired’s 24 hours of 311 calls in New York.

The data will dictate your approach to a good visualization

When we sat down with Pitch to kick around ideas for the elections interactive, we were initially inspired by Moritz Stefaner’s very elegant Your Better Life visualization, a tool that measures 11 indicators of quality of life in the 30-plus member countries of the Organization for Economic Cooperation and Development (OECD). Take a look–it’s a beautiful representation of data.

And though, initially, we thought that our interactive might go in the same direction, a deeper dive into the data proved otherwise. Comparing 30 countries along 11 indicators is very different from comparing 50 states plus DC along 17 indicators and two election cycles. Add to that the moving target of creating an algorithm to calculate scores for different user-selected combinations of indicators, and you’ve got yourself a project.

After our interactive was live, I talked to Wesley Grubbs (founder and creative director at Pitch) about the project. I was interested in hearing about the hurdles that the data and design presented and how his creativity was challenged when working with the elections data. One of the first things he recalled was the sheer quantity of data, and the complications of measuring indicators across very different election cycles. If this sounds too wonky, bear with me. Remember, one of the cool things about this interactive is that it lets you see voter patterns (e.g., voter turnout) across two very different types of elections: mid-term elections (when many states elect their governors and members of Congress and, in many cases, hold municipal elections) and the higher-profile presidential elections. Pitting these two against one another is a bit like comparing the proverbial apples and oranges. Voting patterns are dramatically different. (The highest rate of voter turnout in 2008–a presidential election–was 78.1% in Minnesota. Compare that to the highest rate in the 2010 midterm election, 56% for Maine, and you’ll see what I mean.)
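One simple way to make such cross-cycle comparisons fairer (my own illustration, not Pew’s methodology, and the national averages below are approximate placeholders) is to measure each state against the average of its own election cycle rather than comparing raw turnout across cycles:

```python
# Compare each state's turnout to its own cycle's average, in percentage
# points, instead of comparing presidential and midterm years directly.
turnout_2008 = {"MN": 78.1, "US_AVG": 61.6}  # presidential year (illustrative avg)
turnout_2010 = {"ME": 56.0, "US_AVG": 41.0}  # midterm year (illustrative avg)

def above_average(turnout, state):
    """Percentage points above that cycle's average turnout."""
    return turnout[state] - turnout["US_AVG"]

mn_edge = above_average(turnout_2008, "MN")  # Minnesota vs. its cycle's norm
me_edge = above_average(turnout_2010, "ME")  # Maine vs. its cycle's norm

# Raw numbers (78.1% vs. 56%) look far apart; relative to their own cycles,
# Minnesota and Maine look much more comparable.
```

This is the apples-to-apples framing the interactive needed: a state is judged against elections of the same type, not against a fundamentally different turnout environment.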

Your audiences will influence your design

Another challenge early on was the tension between artistry and function. In an ideal world, the most beautiful thing is the clearest thing (an earlier post, “Should graphics be easy to understand?“, delves into this further). I remember reviewing the awesomeness behind Wes and his team’s early representations of the data. From my perspective as a designer, these were breathtakingly visual concepts that, to those who hung in there, served up beauty as well as clarity. But from a more pragmatic perspective, an analysis of our audience (policymakers and key influencers as well as members of the media and state election administration officials) revealed that comfort levels with more abstract forms of visualization were bound to be a mixed bag. Above all else, we needed to be clear and straightforward, getting to the data as quickly as possible.

Wes decided to do just that. “It’s funny,” he said. “We don’t often use bar graphs in our work. But in this case we asked, what’s the most basic way to do rankings? And we realized, it’s simple. You put things on top of one another. So what’s more basic than a bar chart?”

“We had to build trust–you can’t show sparkle balls flying across the screen to impress [your users]–you have to impress them with the data.”–Wesley Grubbs, Pitch Interactive

When I asked Wes how, at the time, he had felt about possibly letting go of some of the crazy creativity that led him to create the Google weapons/ammunitions graphic, he simply responded, “Well, yes, we do lots of cutting edge, wild and crazy stuff. In this case, however, a good developer is going to go where the data leads them. In addition, the audiences for this tool are journalists, academics, media–the range of tech-savviness is very broad. We had to build trust–you can’t show sparkle balls flying across the screen to impress them–you have to impress them with the data.”

Turn your challenges into an asset

When we brought up the oft-cited concern around vertical space (“How long do you expect people to scroll for 50 states, Wes?”, I remember asking) his approach was straightforward: “Let’s blow up the bar chart and make it an intentional use of vertical space. Let’s make the user scroll–build that into the design instead of trying to cram everything above the fold.”

I think it worked. This is a terrific example of visualization experts who, responsibly, put the data and the end users above all else. “We could have wound up with a beautiful visualization that only some of our audiences understood,” says Wes. “We opted to design something accessible to everyone.”

How did Pitch build the Elections Performance Index tool?

Primarily using D3, a JavaScript library that many developers now use for visualizations. It was not without its drawbacks, however. When I asked Wes about lessons learned, the first thing he mentioned was the importance of understanding the impact of end-user technology on different programming languages. “D3 isn’t for everyone,” he notes. “Take a look at your users. What browsers are they using? The older stuff simply won’t work with many of the best tools of today. You have to scale back expectations at the beginning. The hardest part can be convincing organizations that the cutting-edge stuff requires modern technology and their users may not be in line with that. It’s all about the end user.”

Well, as an end user and a participant in the process, I’m pleased. I hope you’ll take the tool for a spin.

The “art” of compromise: Is there room for compromise in designing data graphics?

In my last post, I discussed how expectations and perceptions of designers are as important to quality data visualizations as are more conventional resources, such as time, people and money. But there is also a flip side to this–there are times when, as designers, we may be faced with a choice to compromise on how we present data. The compromises we agree to–or reject–are as important to our field as anything else. (Kudos to me for resisting the urge to title this “drawing the line in infographics.”)

A friend related to me a recent conversation in which an art director who, when presented with a bar graph of extreme values (very high and very low), asked the designer to “fudge” the size of the smaller bars. (They were visible–not hairline–but too small to comfortably fit the values inside of them. Presumably the art director wanted to nudge them up so that the numbers would fit inside of the bars.) My initial reaction was er… not favorable. I felt like a mother bear protecting her cubs (the cubs, in this tortured analogy, are the data). I may have uttered a few choice words, even.

The ethics of compromise

But, once I calmed myself down, it occurred to me that this might be something interesting to write about. I polled a few designer and non-designer friends. What do you think, I asked. Was this a bow to art or clarity? Was it an unintentional breach of ethics or a well-intentioned attempt to make information easier to understand? Was it goal-driven or just lack of creativity? Don’t jump on the art director just yet. This isn’t about the choice that person made (that’s the subject of another post). It simply reflects the reality that, as in other professions, we’ll all be asked to make choices that, to others, may appear to be inconsequential. We need to make sure we handle these choices intentionally and carefully.

Here’s what came to mind after my conversations with other designers.

Book-binding: an invisible art

Let’s think about the book-binding trade of back in the day. The men (mostly men, anyway) who bound books hundreds of years ago were tradesmen. They had a craft which they revered. They apprenticed and, as journeymen, they traveled from place to place, learning and honing their craft to become–eventually–book-binders. This is not unlike the path that many information designers take today.

For all the painstaking zeal and meticulousness put into the binding of the book, the end result was rarely if ever examined once produced. If the thing didn’t fall apart in your hands, you were satisfied as a consumer.

I won’t bore you with the mechanics, but suffice it to say that binding a book involved a lot of work, much of which was invisible to the eventual and subsequent owners. Once purchased, the book was read, perhaps the craftsmanship briefly admired, and then it was shelved or passed on, sometimes for generations (think of the family Bible). And yet, for all the painstaking zeal and meticulousness put into the binding of the book, the end result was rarely if ever examined once produced. Again, not unlike the process of visualizing data, much of the effort and care involved–sewing pages into folios, hand-stitching the spine–remained largely unseen. If the thing didn’t fall apart in your hands, you were satisfied. End of story. Despite this invisibility, these bookbinders pursued their craft with diligence and care nonetheless. How well or how poorly they plied their trade was not immediately evident, as these old books often outlived their makers. They had no immediate incentive to be unduly diligent. And yet, I like to think that most of them did not cut corners. Why? I’d say it was self-respect and public recognition of the importance of their craft. Maybe I’m over-romanticizing books (I do collect them).

Our craft: Are we short-order cooks or visual content experts?

My point? This is an issue of the ethics of our craft. As designers, we need to ask ourselves: are we short-order cooks or visual content experts? Are we hacks or tradesmen/women? Is data visualization a craft or only a paycheck? Is data an obstacle to be overcome or a living boundary that, with each challenge, offers us the opportunity to learn, to do better, and to empower our readers by surfacing information in a manner that brings new understanding? And while, from the perspective of the client (or, in this case, the art director), it may not always be apparent that the accommodations they ask us to make are unwise, it is–nonetheless–our responsibility to do the right thing, and to bring others along. In this way, we advance the field and our professionalism as well.

And that’s the crux of this post.

Whatever your intentions, what is the effect of the small compromises that you make in being precise, transparent and correct in how you present data?

The more seasoned amongst you may shake your heads and think that these things are self-evident. But to those of you who are just starting out (be it as younger designers or managers in charge of new data viz projects), this may not be something you’ve thought much about. It may not even seem like much of a big deal to you.

Making those small compromises weighs on you, wears you down and–worse–makes the next compromise all the greater in scope and easier to bear.

What is the effect of compromise on the designer and the team?

So, what happens when a designer makes those compromises? When I asked a few designers, they all pointed to the same casualties: morale and self-esteem. Here’s the thing: making that one small edit will be invisible to everyone but you. It’s not like your readers will ask you to send them your Illustrator file so that they can measure pixels before they read further. Like the bookbinder who sewed thread onto page folios, no one but you will see the guts of your files. But making those small compromises weighs on you, wears you down and–worse–makes the next compromise all the greater in scope and easier to bear. And these things add up to the slow devolution of what was once a craftsman/woman (if I may be allowed to use such an archaic term) into a hack.

And what happens when an art director suggests those compromises? Well, you risk losing the respect of seasoned members of your team, that’s obvious. Worse, you risk creating an environment that is progressively sloppy. And while no one will catch the small compromises, they sure as hell will catch the big ones. Remember the infamous Fox piechart?

Other examples of altering data

It doesn’t stop with information designers, as I’m sure you know. One designer who Photoshops medical imagery (for example, CT scans or slides of cancer cells) told me about a doctor who, when preparing images of slides for a research publication, asked him to darken some areas to make them more visible (thus allowing the doctor to better make his case). The designer balked–these aren’t just pictures, he told the doctor–they’re data.

And if you want a more mainstream example, how about the furor over the Time cover of OJ Simpson in 1994? Or, more recently (2008), the Hillary Clinton ad which featured then Presidential candidate Obama with arguably darker skin?

What is unacceptable compromise to one might be reasonable accommodation to another.

There may not be room to make the wrong compromises, but there is always opportunity for discussion.

No one is perfect. And each of the examples that I gave leaves plenty of room for discussion. As a newspaper friend recently noted, some photographers are adamant about not retouching any photos they take–including not cutting subjects out of backgrounds. Others are not as rigid. And not all of the participants in my informal poll reacted with extreme horror at the thought of slightly lengthening bars. Some merely grimaced. But all agreed that if you’re going to tread on thin ice, you’d best be aware of it. Another friend points out a disconnect he noticed between his former employer (a newspaper) and his current one (a corporation). He’s doing the same work–designing information graphics. But whilst former journalist colleagues (having their own code of ethics) would never have asked him to fudge the appearance of data, he feels that–in his current role as a designer in the corporate world–his colleagues have a lesser understanding and appreciation of what asking this might mean.

This isn’t necessarily a bad thing–handled correctly, it can present an opportunity for education. But you have to be willing to put yourself out there–a place that not everyone (perhaps less experienced designers or as employees with less seniority) is comfortable occupying.

As designers, let us be keenly aware of how the small choices we make for ourselves can add up to large consequences for our profession. I’d love to hear more from you on this. Have you been placed in similar situations? How did you handle them?

Infographics: Does time equal quality?

Does time equal quality in good infographics? Nope, not necessarily. I’ve been giving this a lot of thought lately and, in reading recent posts by Seth Godin and Alberto Cairo, it’s interesting to see how each touches upon what I see as the pressures and attitudes that affect how well we design good information graphics.

In Mr. Godin’s case, he mentions what he calls “the attention paradox.” While he’s not specifically writing about design, his comments nonetheless aptly relate to the work designers do. The more marketers crave attention, the more willing they are to part with content that is good at reaching an audience and terrible at retaining it. Makes sense, right? In a time in which we’re increasingly consumed with tracking metrics and measuring success by the numbers, it is par for the course to get caught up in the rat race for the next big thing (big being determined by 30-second relevance and traffic for that day). Surprisingly, information graphics are no exception. And why should they be?

I recently mentioned that, because we’re all under pressure to create more and more content, “repurposing” content is seen as a good way to take advantage of the sweat equity put into other pieces (web articles, reports, data collection) and to convert that into an infographic. This pressure to produce can have real drawbacks–clients mistakenly assume that information can be quickly “designed” just because, in their estimation, the facts and the message have already been prescribed. Here, quality can suffer from lack of time. But the point I was really getting at in my post–which I unfortunately failed to articulate clearly–was the designer’s role.

When designers are treated as service desks and not content experts (“Here are the facts, here is the message, now please make this pretty. Call me when you’re done.”), you simply don’t get the best work.

Fortunately, Alberto Cairo, in “Empower your infographics, visualization, and data teams,” gets to the point. According to Mr. Cairo (and I agree), the real problem is the limited perception of the designer’s role. He mentions how, in newsrooms, graphic designers are often seen as “service desks.” This isn’t limited to newsrooms. In my own life, I occasionally get requests to design graphics “you know, like the New York Times” (yes, I really do). As Mr. Cairo points out, we all laud the New York Times and other large media outlets (one of my personal favorites is New Scientist) for their high-quality information graphics–pieces that can take months to make, with large teams of content producers and designers in place. I agree with Mr. Cairo’s perspective that this fact might lead you to erroneously conclude that great work simply requires that kind of time and staffing–and that, without more people and more time, bluntly, “You can’t.”

The solution lies, in part, in treating and using your designers as partners who help to shape content effectively.

So, what does this mean, exactly? Bring your designer into the room when you’re having editorial discussions about how to create content, before you’ve decided on what shape that content will take. Listen to your designers and expect them to offer up ideas about how to turn that into information design (be it static, motion or interactive).

Designers should read the content.

Expect your designer to read, read, read and understand. I ask my designers to read research reports before they create infographics or data visualizations. This may be a “duh” moment to some of you, but you’d be surprised how many people (including designers) don’t think of this or, worse, don’t see this as part of the designer’s role. How do you design what you don’t understand? How do you filter out the best parts of information and data without having reviewed the source?

And don’t micromanage the design. Leave them alone to create and use their expertise. Trust them, as content partners, to visualize not just the data, not just the facts, but the voice that carries the design.

I’m sure there’s more and would love to hear from you about what other recommendations you have.

Building good infographics part 2: Know your data, know your story

In the first article of this series, we discussed how good planning and team dynamics can make or break even the best design ideas for an infographic.

In this second article, you’ll learn how to bring together your data and your story into a solid sketch that you can later present.

Part 2: Get to know your data

Try to get to know your data in the beginning, but recognize that, when working with a lot of data, you’ll likely have to keep going back to the numbers as your sketch evolves. Essentially, you’ll need to ensure the numbers support your headlines and story. If you’ve already created other products which this graphic will be promoting, presumably you’re familiar with the data and how it was illustrated for these other pieces–that’s a good starting point. If you’re creating a standalone graphic, you may be staring at Excel files for the first time. Regardless, get to know the data with fresh eyes.

To start, go over the basics. At first blush, what are the most apparent patterns and trends in the data?

To start, go over the basics. At first blush, what are the most apparent patterns and trends? I write mine down on a scratch pad or white board (in red ink, if you’re wondering). For example, is widget use going up or down? Are there geographic or demographic patterns (use is up in the south, down in the north; younger people use more widgets, etc.)? I usually do this with my designer, at least for the big-picture stuff.
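That first-pass scan can be done with a few lines of code. Here is a minimal sketch in plain Python; the regions, age groups and counts are invented for illustration (in practice you’d load your own Excel export, for instance with pandas):

```python
# Hypothetical rows: (region, age_group, year, widget_users)
rows = [
    ("south", "18-34", 2011, 120), ("south", "18-34", 2012, 150),
    ("south", "35+",   2011,  80), ("south", "35+",   2012,  90),
    ("north", "18-34", 2011, 100), ("north", "18-34", 2012,  95),
    ("north", "35+",   2011,  70), ("north", "35+",   2012,  65),
]

def totals_by(rows, key_index):
    """Sum widget users grouped by one column (region, age group, or year)."""
    totals = {}
    for row in rows:
        totals[row[key_index]] = totals.get(row[key_index], 0) + row[3]
    return totals

by_year = totals_by(rows, 2)    # is overall use going up or down?
by_region = totals_by(rows, 0)  # any geographic pattern?
trend = "up" if by_year[2012] > by_year[2011] else "down"
```

With these invented numbers, use is up overall but down in the north–exactly the kind of big-picture observation worth scribbling on the whiteboard before you sketch anything.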

Then, dig a little deeper.

I like to look at data two ways: if I can, I’ll review the original “raw” data. Typically, when working with large sets of data, much of it is used to create calculations that produce the final data (the stuff that makes its way into PowerPoint, reports and graphics). But sometimes the original data can show interesting things. This raw data (if I understand it) allows me to see parts that were left out of other products (cutting-room-floor stuff) but that might help serve up a good infographic. Once I review the raw data, only then will I review graphics (if they exist) created from that data for other pieces.

For me, this order works because starting from the raw data helps keep me from forming a bias in favor of things already illustrated when, in fact, there could be a goldmine somewhere in the original data that better serves my graphic’s audience and story. Sometimes (depending on the quantity of the data) reviewing the raw data isn’t remotely possible for a layperson. Many professionals out there use statistical modeling software such as R or SAS to understand complex data sets. For the purposes of this article, I’m assuming that, like me, you’re the average layperson with a basic understanding of math and Excel.

Talk to your number crunchers, together with your designer, so that both of you understand the data. If you have access to these folks, don’t be shy about going to them and asking them to explain or simplify things you don’t understand. Often, inscrutable headings and data (likely spit out as a query by a technical person working with a database) can be rewritten and hidden, respectively, to surface what you need to know. Having a good conversation with your database admin, IT staff or researcher is important. Tell them what you want to say, and who you want to say it to. Share a sketch with them so that they can understand the output goal. Oftentimes, they can be a tremendous resource by helping you mine your data. But they can’t do that if you don’t share the big picture with them.

Part 3: Executing the concept

You’ve pitched the concept to your team and now you’ve got the green light. You’ve identified your data sources and are familiar with overall patterns and trends. You’re reasonably confident that those patterns support the key messages that your audience wants to hear. Congrats. Tank up on coffee, sharpen your pencils, turn off your e-mail notifications and get started with your designer.

Let’s begin by mapping out the actual content. This will eventually lead you to an outline that does two things:

  • Allows you to “read” the content at a high level, which mimics the way most consumers scan (they scan headlines, they scan graphics, etc.).
  • Allows you to begin exploring format–is the content and data best suited for a static graphic or an interactive?

Get the story right from the beginning by chunking out your content into essentials via a paper-and-pencil sketch. Here, you’re essentially developing a content outline (or several). I usually ask designers to sketch something in pencil rather than on the computer.

Sketches have a way of relaxing people–they level the playing field, so to speak, by presenting information in a low-key, low-tech manner that is specific enough to give you the gist of the story and data, yet not so specific that stakeholders are led astray into conversations about wordsmithing, color, fonts, etc.

In my experience, there are also designers who, once they spend time “drawing” on the computer, begin to take immediate ownership of that particular design. This can needlessly bias them against the feedback of the team. In my opinion, paper and pencil sketches allow everyone to keep an open mind and focus on the essentials–the rough findings, the order and the data. Leave the wordsmithing and the design to later–get the story right first.

Chunking out your content–how to create a proper sketch. A good sketch chunks out your text into major findings/rough headlines, and illustrates and annotates the order and flow of all of your major graphics and illustrations.

List out your major findings/sections. Start with a large piece of paper and write out the main findings that you want to show in the graphic. These will eventually become succinct headlines, thought-provoking questions, etc. Regardless of what format they ultimately take, they will organize your graphic into major sections (many people call this “chunking out text”). Your goal for these sections: if the reader reads nothing else, those headlines, strung together, should tell your story.

Say you’re selling widgets to engineers. Keep your audience in mind (the audience likely to read your graphic) and write down 3-5 findings:

  • Widgets are taking over the world.
  • Widgets are cheap to produce.
  • Widgets are available near you.

You can turn these points into headlines later. For now, they tell you that your graphic will be divided into three key sections for which you will need to have the basics:

  • headlines
  • brief explanatory or persuasive text that follows/explains each headline
  • illustrations and/or data graphics that support the headline
  • sometimes accompanying text that further highlights the main finding of each data graphic or illustration

Map out the data that supports your findings. You and your designer can use your understanding of the data to pare it down into simpler graphs that show the findings. Your designer can suggest the format, and you can provide the audience’s perspective of whether the designer’s suggestion does two important things:

  • Does the graphic and the numbers support the claim (e.g., widgets are taking over the world)?
  • Does the format (bar chart, pie chart, etc) make it very simple to understand the data?

You can make the graphics complex later, but for now simply draw them so that they show the gist of the data. This pared down graph is what goes into your sketch below each headline. Again, you don’t need to ask your designer to draw the graph with each data point, merely a simple representation of what it will ultimately show.

For example, in your sketch, your first “chunk” of text reads “Widgets are taking over the world.” Let’s say you have data showing how 14 of the 20 most developed nations are using widgets more than thingamajigs. This is where you put more effort into data review than you do into illustration.

Remember, just because you say it doesn’t mean you can show it. Review your data to make sure the numbers support the claim.

Remember earlier when I mentioned that you’ll likely need to keep going back to your data to confirm that it supports your content? This is one of those times. Go over the data very, very closely and confirm that, indeed, the data does support your message. It’s one thing to say that most G20 countries use widgets more than thingamajigs, but how *much* more? Therein lies the rub, folks.

In a sentence, you can get away with throwing around words like “most” and “more.” But in a graphic, you have to illustrate that sentence and if the visuals don’t support the claim, you’ll have a disconnect–your headline says one thing but the data show another.

Here’s an example. Say that widget use is just barely eking out a lead in each of the 14 G20 countries where widgets are used more than thingamajigs. Try drawing a bar graph of the data. It won’t look very compelling–you’ll have 14 bars showing widgets and thingamajigs almost neck-and-neck. That’s fine, but remember that your message was “Widgets are taking over the world.” It seems hardly appropriate now, doesn’t it?
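You can run this reality check numerically before anyone draws a single bar. A minimal sketch, with invented percentages and a hypothetical ten-point threshold for what counts as a decisive lead:

```python
# country: (widget_share, thingamajig_share), in percent. Invented numbers.
usage = {
    "A": (51, 49),
    "B": (52, 48),
    "C": (50.5, 49.5),
}

def widget_margins(usage):
    """Each country's widget lead over thingamajigs, in percentage points."""
    return {country: w - t for country, (w, t) in usage.items()}

margins = widget_margins(usage)

# All leads here are under 5 points, so a "taking over the world" headline
# overstates the data; "gaining ground" fits better. The 10-point cutoff
# is an arbitrary editorial threshold for this illustration.
claim_supported = all(m >= 10 for m in margins.values())
```

If `claim_supported` comes back false, revise the headline, not the bars.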

That type of examination is a phenomenal reality check to ensure that message and data support each other. Based on the above scenario, you can now revamp your claim (“Widgets gaining ground over thingamajigs.”). You can add more data and tweak your message (“Widgets gaining ground over thingamajigs–expected to double by 2020.”)

Format is everything. Next, have your designer explore the best format to show each graphic. If I’m designing something (depending on complexity), I might have my first sketch be two things–an outline of content and data findings along with concrete examples of how I’d like to design the data (e.g., a bar graph here, a piechart there, a map over there). Or, I’ll create the sketch in two passes: the first showing the content outline and using a barebones format for all graphics (e.g., I’ll use a barchart for everything and tell the team that later I’ll come up with the specific formats). If the infographic and data are simple, I’ll do both in one iteration.

Regardless, use the sketch as your opportunity to create (especially on the second iteration) graphics that are reasonable facsimiles of what readers will ultimately see. Format is everything. If you have 7 categories of percentages for one year and would like to show how those percentages changed the next year, it is probably not wise to put these data into two piecharts and ask people to compare change over time. So why draw that into the infographic? Draw it the way you’ll illustrate it (as two stacked bar charts, or as a line graph that simply shows change in percentage over time, for example).
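The reason side-by-side piecharts fail here is that the quantity the reader cares about–the change per category–never appears on the page; they must estimate it by eye. A quick sketch, with invented categories and percentages, computes that change directly, which is exactly what a paired-bar or slope chart would plot:

```python
# Seven categories of percentages for two years. Invented numbers.
year1 = {"A": 30, "B": 20, "C": 15, "D": 12, "E": 10, "F": 8, "G": 5}
year2 = {"A": 25, "B": 22, "C": 18, "D": 12, "E": 11, "F": 7, "G": 5}

def percentage_change(year1, year2):
    """Change in percentage points for each category between the two years."""
    return {cat: year2[cat] - year1[cat] for cat in year1}

deltas = percentage_change(year1, year2)
# deltas is the comparison the reader actually wants; a format that plots
# these differences beats two pies that only imply them.
```

Note that because both years' percentages sum to 100, the deltas sum to zero: growth in one category always comes at the expense of another, which is itself a finding worth annotating.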

Annotate your sketch. Next, put in brief notes about each graph or illustration. For now, think of these explanations as easy to understand notes for your team and reviewers to help them understand the point or findings of each graphic (e.g., this graphic shows how sales of widgets will double over the next 8 years). Later, you can repurpose those notes as “chatter” that you incorporate next to each graphic.

And your designer will likely turn some of those annotations into visual elements. For example, for a graph that compares the cost of widgets to thingamajigs, you can annotate the graphic with the findings: “Widgets now cost half as much to produce as thingamajigs”. Your designer might see the word “half,” ask you what the actual number is (say, 53%) and create a design element around it. You never know where these bits of information will take you (that’s the designer’s job) but try to annotate each element with what you want the reader to remember or learn. This makes it easier for your team to understand the sketch and, again, can be turned into copy or design later.

You can use this same approach for illustrations–you can choose to either write in references to them quickly (e.g., draw boxes next to each graphic that say “icon of widgets will go here” and “country flags here”). Or you can illustrate them. I prefer to illustrate concepts that I’m not sure the team will understand–it forces the issue early and visually–and generally prevents me from spending time later killing myself over the design of an element that the team ultimately rejects.

Explore formats–static, interactive or other? I mention this close to the end of this section, but in reality you should, in the back of your mind, be evaluating the emerging sketch for the possibilities it presents for interactivity. You may have decided early on (due to budget, technical or time constraints) that you’re set on a static graphic. That’s fine. But recognize that certain stories and data lend themselves better to specific formats. Your sketch will allow you to determine the best format or, if you’ve settled on a format already, how to better present the information so that it suits that format.

How do I choose the right format for my story and my data? I work with lots of 50-state data and 50-state maps, so–for me–this question comes up frequently. Here are a few examples of how the same data can be presented in both a static and an interactive format, ranging from the simple to the more complex:

Let’s say you want to show widget use in the 50 states. You want to show:

  • Which states have widgets today (this is a yes/no question).
  • How widget use across the states has increased or decreased over the past 10 years (each state has a percentage associated with it).

You can create an infographic that shows this easily:

  • Create one map for “which states have widgets today” and simply color code the yes/no values (states that use widgets are green–those that don’t are red). Done.
  • Create a second map for “Increase/Decrease in Widget Use: 2002-2012.” Take the percentages in Excel, sort them from largest to smallest, and split them up into groups (for example, into thirds). Now you have a list of top third states, middle third states and bottom third states. Assign each category a color (top third = dark green, middle third = medium green, bottom third = light green) and apply those colors to your map. There’s your infographic.
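The second map's binning step is mechanical enough to script. Here is a minimal sketch: sort the state percentages, split them into thirds, and map each third to a shade of green. The six states and their change percentages are invented for illustration (real 50-state data works the same way, with a remainder landing in the bottom group):

```python
# Hypothetical percent change in widget use per state, 2002-2012.
changes = {"TX": 12.0, "CA": 8.5, "NY": -3.0, "FL": 20.0, "OH": 1.0, "WA": 15.0}

def bin_into_thirds(values):
    """Assign each state a color by whether it falls in the top,
    middle, or bottom third when ranked by value."""
    ranked = sorted(values, key=values.get, reverse=True)
    third = len(ranked) // 3
    groups = {}
    for i, state in enumerate(ranked):
        if i < third:
            groups[state] = "dark green"    # top third
        elif i < 2 * third:
            groups[state] = "medium green"  # middle third
        else:
            groups[state] = "light green"   # bottom third
    return groups

colors = bin_into_thirds(changes)
```

Whether you do this in Excel or in code, the point is the same: the map shows three ranked groups, and the exact percentages drop out of view–which is precisely the trade-off discussed next.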

But look at your data carefully. You have actual percentages for each state which you left out–you simply lumped those numbers into groups and color-coded the states. You could sneak those percentages into each state (difficult to do depending on real estate and whether you also want to put state abbreviations on the map). Maybe you can use icons to denote specific values (e.g., the top 5 states). You can put callouts close to the map that talk about interesting states and their data. You could put a table with the data below the graphic. Or you could eschew the data altogether because it’s not important–lots of possibilities.

Whatever you do, the advantage of the above approach is that the information is sticky–people can compare two simple things on the screen at one time. If this is a priority for you, then static (in the example above) could be the way to go. It’s low-tech and concise, and easily allows users to compare two similar concepts.

But what if real estate is an issue? What if you do want to show more data (those percentages, for example)? And what if, for your audience, comparing side-by-side is less of a priority–the data is what you need to show?

Then you might be looking at an interactive that allows people to view one map in two ways (I’m oversimplifying here to make a point–there are many more possibilities). Give people a choice between two views: show me which states use widgets (the map changes to green/red states); now show me how widget use has increased or decreased over the past 10 years (the map color-codes each state, and mousing over reveals the percentages, for example). I almost hesitate to offer this example because there is so much more that you can do, but hopefully it’s helpful to see the difference, albeit oversimplified.

Here’s another thing to consider: motion graphics. What if the data you have is a chart that is complex? You can split up the chart into several charts and spend considerable time explaining (annotating) each one, and (as important) writing about how one chart is related to the next one. Or…you can produce a motion graphic. These have been increasingly used lately (the New York Times is doing lots of them) and I’m not yet a huge fan of the treatment. Too many producers are essentially creating videos with a lot of talk and graphics that flit about the screen for effect. I’m a fan of what I call “explainers”–people who walk you through a complex graphic or set of data. Check out Hans Rosling’s video to see what I mean.

Because this post is about how to plan for visualizing information, not necessarily about the merits of static versus interactive graphics, I’ll stop here. I really haven’t done much justice to the complexity of the decision-making process. That’s for a later post. If you’re interested in reading more, Column Five recently wrote a nice article on interactives.

But I do want to point out that your sketch and planning conversations are exactly the right point to start the conversation about format. If you find yourself running out of room, explaining too much, or creating very repetitive graphics that show the same data in different ways, stop and ask yourself if you’re considering the right format.

Okay, you’ve got your sketch in hand and recommendations on format. Let’s move on to the final article in this series, which will explain how to share the concept with your team, manage expectations, and execute the rest of the design process.