Lying About Our Religion, And Other Problems With Polling

Robert Wuthnow isn’t going to say it, so I will: a lot of religion polling is bullshit.

This isn’t just an academic squabble. Polling firms like Pew and Gallup offer the principal lens through which journalists, pundits, religious leaders, and politicians observe the spiritual state of the union. Polls and surveys shape debates about policy and public morality. They help define key groups—evangelicals, for example, or the nones—that crop up again and again in the media.

They are also misleading us, Wuthnow argues in his timely, obscenity-free new book, Inventing American Religion. Wuthnow, a sociologist of religion whose work has informed a generation of scholarship, examines how polling has shaped perceptions of religion in the United States over the past century. And he argues, convincingly, that pollsters and journalists need to be more honest about what these studies can, and cannot, tell us.

It’s a shame, honestly, that Wuthnow has limited himself to religion polling, and to a fairly narrow critique of these polls’ limitations. The questions at play here are much larger. What kinds of conclusions can we reasonably draw about that shifty behemoth, the American Public? And what responsibility does the media have when making generalizations about identity and belief?

Polls address a problem that would have been unfamiliar to our democratic forebears. In a democracy with hundreds of millions of people, how do you know what the public thinks and wants? How do you figure out what binds them together, besides an annual obligation to the IRS and a love of fireworks? In short: how do you know what the public is? Like many hard questions, these problems have been rendered largely invisible, in no small part because “The Public” and “The American People” are favorite fictional characters for politicians and journalists, who speak of them without a trace of precision.

So let’s indulge in a quick reality check. The Super Bowl—that national spectacle that unites us around the flickering LCD hearth—had 115 million viewers in the United States last February; in other words, nearly two-thirds of us weren’t watching it. The most-viewed political spectacle of the year, the State of the Union address, draws around 10% of the population. Barack Obama won the 2012 presidential election with about 66 million votes, meaning that only about one in five of us voted for him.

The people have spoken…kind of.

A viral post or article—the kind that fills your Facebook newsfeed and comes up in conversation at low-key parties—will be lucky to attract more than a couple million readers, out of the country’s more than 200 million literate adults.

In other words, the “public” is more scattered, niche-y, and diverse than you might think, given the way the term is used in highbrow media, or in political appeals like the State of the Union (not that you watched it). Polling is one way to get a handle on the madness. The basic principle is simple: a randomly selected sample of the population, when surveyed, provides a representative microcosm of the elusive whole. In more technical terms, pollsters direct a shrink-ray at the public, and then examine it up close.
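The shrink-ray can be made concrete with a toy simulation. The numbers below are purely illustrative (an assumed population where 89% would answer "yes" to a belief question, and a typical national sample of 1,500 people); the point is that a modest random sample recovers the population figure almost exactly:

```python
import random

random.seed(42)

# Hypothetical "true" share of the population that would answer yes.
# This figure is assumed for illustration, not taken from any actual poll.
TRUE_PROPORTION = 0.89

def survey(sample_size: int, true_p: float = TRUE_PROPORTION) -> float:
    """Draw a simple random sample and return the observed share of yes answers."""
    yes = sum(1 for _ in range(sample_size) if random.random() < true_p)
    return yes / sample_size

# A typical national poll interviews roughly 1,000-1,500 people.
estimate = survey(1500)
print(f"True proportion: {TRUE_PROPORTION:.2f}, sampled estimate: {estimate:.2f}")
```

This is the whole statistical magic: when the sample is genuinely random, 1,500 people stand in for hundreds of millions with an error of only a few percentage points. The trouble, as the rest of the book shows, starts when the sample stops being random.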

When it comes to predicting elections, this method works extraordinarily well; elections are the raison d’être, the breadwinner, and the testing ground for pollsters. Their projections are often spot-on.

George Gallup, founder of the eponymous firm, was the Nate Silver of his day. His correct prediction of the 1936 presidential election helped give him the credibility to branch out into other areas. In 1944, Gallup polls first asked Americans whether they believed in God. Since then, major polling firms have been making regular sweeps of the nation’s spiritual psyche. Census data and other federal surveys ask few, if any, questions about religious practice, faith, or moral beliefs. That leaves a big niche to fill.

But there are some key assumptions at play here, most of which break down a bit under closer analysis. What you tell a pollster about an election (“I’m voting for Marco Rubio”) and what you tell a pollster about faith (“I believe in God”) are not interchangeable statements. Elections offer discrete, limited, and registerable choices; they are giant polls, which pollsters replicate at a smaller scale, using the magic of statistics.

When two people say they’re voting for Marco Rubio, they’re voting for the same Marco Rubio. When two people say they believe in God, there’s a lot of gray space left to fill, presumably with the kind of conversation and investigation that does not translate easily into percentages and pie charts.

A whole lot of anthropological work has mapped the ways that individuals adjust their beliefs across contexts, and through time. And some sharp sociological research has mapped the extraordinary degree to which individuals lie to pollsters about their religious practices. None of this necessarily gets picked up over the phone, by a surveyor working through a scripted questionnaire. Gauging something as amorphous and context-rich as religiosity, within the framework of something as amorphous and vast as The Public, is difficult work.

It’s even harder because The Public doesn’t like to answer its phones. Some people are more difficult to reach than others, such that these representative samples may not be so representative. Wuthnow points out that response rates for surveys have been declining steadily over the decades. Today, even a high-quality Pew study is unlikely to get 25% of its randomly selected households to respond. Some recent Pew reports have reached just 9% of their intended sample. That 9% may still express beliefs that are representative of the public at large.

But other times, the evidence indicates, it does not.
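A toy model shows how nonresponse can quietly wreck a sample. Assume, hypothetically, that religiously affiliated people are twice as likely to pick up the phone as unaffiliated people (the 12% and 6% answer rates below are invented for illustration, and together yield an overall response rate near the 9% figure above):

```python
import random

random.seed(1)

# Hypothetical population of 100,000 adults, 60% religiously affiliated.
population = [True] * 60_000 + [False] * 40_000

# Assumed (not measured) answer rates: affiliated people respond 12% of
# the time, unaffiliated people 6% of the time.
respondents = [
    affiliated
    for affiliated in population
    if random.random() < (0.12 if affiliated else 0.06)
]

observed = sum(respondents) / len(respondents)
print(f"True affiliation rate: 60%, observed among respondents: {observed:.0%}")
```

With those assumptions, the survey reports affiliation around 75% instead of the true 60%: a fifteen-point error introduced before a single question about belief has even been asked.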

As a result, there’s a profound disjunct between what studies actually say, and how they’re reported as facts. Just to give an example: earlier this year, citing Pew data, CNN reported that “More than one-third of millennials now say they are unaffiliated with any faith, up 10 percentage points since 2007.”

But what CNN really meant was “When asked by a complete stranger, over the phone, to define their spiritual lives in a single phrase, one-third of those adults who responded to this poll (response rate: one in ten), and who fall within an arbitrarily defined generational frame, declined to tell the aforementioned complete stranger that they were affiliated with any specific religious group.”

These are not equivalent statements. For some reason, though, journalistic standards don’t require writers to acknowledge that distinction.

Manufacturing publics

So what do these polls and surveys do, exactly? We imagine that they measure public opinion. But we could just as easily say that polls use measures of opinion in order to manufacture publics.

Consider the rise of the category “evangelical.” In one of the book’s most interesting sections, Wuthnow talks about the emergence of evangelicals as a political body in the wake of Jimmy Carter’s nomination as the Democratic presidential candidate in 1976. After Carter described himself as an evangelical, polling firms rushed to map the extent and influence of this group. In doing so, they actually helped define it (and possibly inflate it). Evangelical leaders trying to help build political muscle for their flocks—especially Jerry Falwell and Pat Robertson—encouraged these efforts, cited the polls, and at times even commissioned them.

Today, when media outlets talk about “evangelicals,” they almost always mean white, politically conservative, born-again Protestants, unless they specify otherwise.

The problem is that theological definitions of what makes someone an evangelical vary widely—and tweaking them, Wuthnow shows us, can generate substantial changes in poll results. Nobody actually agrees on the borders of this group. Many of those definitions also map onto a much more politically and ethnically diverse set of people than white heartland social conservatives. Black Protestants and Latino Pentecostals, in particular, tend to get ignored in this definition of evangelical, even though many of them fit the bill quite nicely in strictly theological terms.

Additionally, there are other divisions—particularly between charismatic and non-charismatic churches—that often get ignored in the polling and reporting, for no clear reason.

Nevertheless, a particular vision of an evangelical bloc emerges in national discourse, in no small part thanks to the generalizing power of polls. Much the same seems to be happening with the nones today. At a more granular level, the nones—people who say that they don’t affiliate with any religious group—are a diverse lot, including atheists, agnostics, and people who say that they believe in God, but don’t feel comfortable in a specific institution.

The more closely we view them, the more fluid and diverse the category seems to be. Follow up with self-reported nones in a year, and many of them (as many as 30%) will say they’re now religiously affiliated. But when defined as a single entity, a new, largely fictional public emerges, ready to be taken as the harbinger of some secular age.

This process of lumping-and-fretting applies to all sorts of categories, not just religious ones. Generations offer a particularly egregious example. Who decided that all people born between 1982 and 2000 would be one single unit, the millennials? Why not 1992-2010, or 1985-2005, or some other range? The term gets thrown around so much, and by such reputable institutions, that you can forget that it was invented in 1991 by a pair of marketing consultants, whose methods were, at best, sketchy.

The persistence of these categories shouldn’t surprise anyone. Generalizations are appealing. They make good copy for journalists. “Polls that produced generalizations about the national population,” Wuthnow writes, “spoke to the nation’s historical awareness of itself as a distinct people.” In the case of religion, they made it possible to imagine some kind of national spiritual psyche—to invent American religion.

Are these generalizations harmful? It’s striking that one of the first great statistical generalizers of American religion, Will Herberg, was from a minority religion. In the 1950s, Herberg used Gallup data to paint a picture of a more unified American belief—one in which Catholics, Protestants, and Jews shared a common faith in God.

That seems like a lovely interfaith impulse. But there’s much in this process of generalization to make the contemporary observer queasy. In particular, polls and surveys tend to obscure minority opinions, through a process called norming. This happens often with race; white norming, Wuthnow writes, is “the phenomenon of drawing generalizations about the American public, meaning all of the American population, based on evidence that mostly reflects the sampled responses of the white majority.”

Christian norming happens, too: a poll that supposedly reflects the nation’s views on God is, often, mostly talking about Christian views. In doing so, it’s essentially conflating “Christian” with “American,” a strategy typically associated more with Ben Carson than with Gallup.

More generally, polls obscure a certain messiness in the American polity. Syncretism, idiosyncrasy, and inconsistency are regular features of individuals’ spiritual lives. Little of that gets captured when you scale people up into categories.

The complexities and weirdnesses of life

Should the polling firms themselves be blamed for these fictions? They’re just asking the questions, after all. Pew, in particular, is admirably open and precise about its methods and terms. And much of the problem comes when information that’s useful in a limited, qualified way gets spun into generalization and speculation in the media.

At the same time, polling firms do a lot to manufacture the appearance of certainty. In particular, they choose to report statistics as single numbers (for example, Pew recently found that 24% of Jehovah’s Witnesses describe themselves as born-again) rather than as ranges reflecting the margin of error around the point estimate (had it foregrounded that margin, the Pew study would have reported that 16.8%-31.2% of Jehovah’s Witnesses believe they’re born again; somehow, that sounds less authoritative).
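The arithmetic behind such a range is simple. For a sampled proportion, the standard 95% margin of error is 1.96 times the standard error, sqrt(p(1-p)/n). The subsample size below is a back-calculated assumption chosen to reproduce the ±7.2-point spread in the example; the actual number of respondents isn't given here:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sampled proportion p with n respondents.
    Assumes simple random sampling; real polls also apply design effects."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical subsample size, chosen so the math matches the range quoted
# in the text; Pew's actual n for this subgroup is not stated here.
p, n = 0.24, 135
moe = margin_of_error(p, n)
print(f"{p:.0%} ± {moe:.1%}  →  {p - moe:.1%} to {p + moe:.1%}")
```

Note how quickly the margin balloons for small subgroups: with only a hundred-odd respondents in a category like Jehovah's Witnesses, a headline figure of 24% is really a fuzzy band fifteen points wide.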

It’s also important to recognize that these surveys exist in a careful reciprocity with media organizations. Unlike academic work, polls and surveys are often commissioned by the very people who then report on them. Even if Pew or Gallup come to careful, qualified conclusions in their own reports, they have to be sensitive to the way that these results will be picked up by the very media outlets they’re intended to serve.

At times, the collusion between reporters and pollsters can verge on the sleazy. In 2013, for example, Pew published a widely reported study of American Jews. The study reported declining affiliation and rising intermarriage in the Jewish community. In the blitz of hand-wringing and fear-mongering that followed the report’s publication, the Jewish Daily Forward led the way with its extensive, frenzied coverage of the findings. (In all fairness, the Forward also published some more restrained, optimistic analysis). The Forward even used the survey’s more worrying results in a fundraising pitch. What few people mentioned was that the Forward had asked Pew to conduct the study in the first place.

In many ways, Robert Wuthnow, who brings a powerful academic reputation to this study, is the perfect person to critique this culture: he can afford to offend whomever he wants. It’s disappointing, then, that Wuthnow does not venture a more activist stance. After laying out an extended case that pollsters are marketing misleading results—a process called, in less polite company, “deception”—Wuthnow ends with a mild call for “closer and more critical scrutiny” of polls. Hasn’t he just written an entire book doing exactly that?

The final line of Inventing American Religion—“much more about religion remains to be understood”—was, to this reader, disappointingly vague.

That’s not to say that polls are useless, or that Wuthnow does not offer constructive suggestions. Polls and surveys are probably best at tracking very specific, narrow trends over time. They’re also stronger when paired with in-depth qualitative research. These kinds of studies are more expensive, and more limited in their aims. Wuthnow is right when he suggests, following a report from the American Academy of Political and Social Science, that “money for polls about religion would be better spent on fewer high-quality polls than on more frequent low-quality polls.”

The results won’t be flashier. But the first goal of research, and the goal of journalism, should not be to produce generalizations. It should be, foremost, to chronicle the complexities and weirdnesses of life. And, only then, to start looking for patterns.
