The devil’s in the data: How rape culture shapes rape stats
By— February 11, 2013
If you follow debates about sexualized violence in the United States or elsewhere, in war or in peace, then you’ve probably heard at least some of the following statistical (or quasi-statistical) claims about patterns of rape: One in three U.S. women has been sexually assaulted. Seventy-five percent of Liberian women were raped during the civil war there. Sexualized violence is declining (or increasing). Intra-military rape in the U.S. is down. Wartime rape is always a weapon of war. Unfortunately, none of these claims can actually be defended, at least not with numbers.
I have researched rape reporting and the interpretation of rape statistics for several years. I've learned a lot about the dynamics of rape reporting; in the process, I've come to see problems with rape numbers as a microcosm of our problem with rape. We ask the wrong questions. We make the wrong generalizations. We ignore complexity and uncertainty, and in doing so we privilege certain stories and erase others.
Asking the wrong questions
Rape statistics gleaned from police reports and other self-report sources are biased—usually not politically or intentionally, but in the statistical sense. In addition to dramatically undercounting rape, they overrepresent some victims and underrepresent others. We just don’t know which victims, or by how much.
One thing we do know: Despite an avalanche of media attention to every false claim of rape, the number of rape cases that go unreported dwarfs—to the point of meaninglessness—the number of false reports.
Let’s take the example of the U.S.:
A conservative estimate from a 2010 U.S. Department of Justice survey suggests there were about 270,000 rapes in the country that year. Fewer than 85,000 were reported to police. And, according to one study aimed at rooting out false reports of rape, “methodologically rigorous research” finds that 2 to 8 percent of reports are false. Even if the false-report rate were as high as 10 percent of those police reports, we’d be talking about 8,500 false reports against 185,000 rapes that were never reported at all. Now think about the fact that rape is underreported on surveys as well as in police reports, and the pointlessness of complaining about false reports begins to emerge.
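The arithmetic above is easy to check. A minimal sketch, using only the figures cited in this article (the DOJ survey estimate, the police-report count, and a deliberately generous 10 percent false-report rate):

```python
# Back-of-the-envelope check of the false-report arithmetic.
# All figures are the article's: ~270,000 rapes (DOJ 2010 survey estimate),
# fewer than 85,000 reported to police, and a 10% false-report rate
# (above the 2-8% range that rigorous research finds) applied to police reports.
total_rapes = 270_000     # DOJ 2010 survey estimate
police_reports = 85_000   # rapes reported to police that year

# Integer arithmetic keeps the counts exact.
false_reports = police_reports * 10 // 100   # 8,500 false reports
non_reports = total_rapes - police_reports   # 185,000 never reported

print(false_reports)                  # 8500
print(non_reports)                    # 185000
print(non_reports // false_reports)   # non-reports outnumber false reports ~21x
```

Even under an assumption more pessimistic than the research supports, non-reports outnumber false reports by more than twenty to one.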
Survey data are less biased than police reports and other passive reporting channels, but they’re hardly airtight. A study published in 2009 found that changes in survey question wording led to a tenfold difference in reports of past-year rape victimization (as opposed to lifetime victimization), from about two per 1,000 to about 20 per 1,000 among U.S. college women. There are about 7 million full-time female college students in this country, so that’s the difference between 14,000 rapes and 140,000. (Police data, by contrast, showed 459 forcible rape cases at U.S. universities in 2009, a number I got by downloading the FBI’s Uniform Crime Report data and summing across all universities.)
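To put that wording effect in absolute terms, here is the same calculation spelled out, using the article's figures (7 million full-time female college students; past-year rates of 2 versus 20 per 1,000 depending on question phrasing):

```python
# How much does survey question wording matter in absolute numbers?
# Rates per 1,000 come from the 2009 study cited above.
students = 7_000_000          # full-time female U.S. college students (approx.)
low_rate, high_rate = 2, 20   # past-year victimization per 1,000, by question wording

low_estimate = students * low_rate // 1000    # 14,000 rapes
high_estimate = students * high_rate // 1000  # 140,000 rapes

print(low_estimate)                   # 14000
print(high_estimate)                  # 140000
print(high_estimate - low_estimate)   # a 126,000-case gap from wording alone
```

A change in phrasing, not a change in the underlying violence, moves the estimate by 126,000 cases.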
Of course, the problem goes beyond not knowing how many reports there are and aren’t. We don't know which victims are underrepresented—and we often don't think to ask, Whose story is missing here?—which means we're prone to false generalizations.
Making the wrong generalizations
Even the best survey estimates probably represent a minimum number of rapes—so it takes a truly gigantic leap of misinterpretation to arrive at an overestimate. Harvard University Assistant Professor Dara Kay Cohen and I investigated Nicholas Kristof’s claim that 75 percent of Liberian women were raped during civil war there. Kristof, eager to show that rape was an emergency in Liberia, had mangled the evidence, generalizing from a study of sexual assault survivors to all Liberian women.
To be clear, rape was absolutely an emergency in Liberia. Depending on which survey you read, 20 to 40 percent of Liberian women report suffering some form of sexualized violence. The problem, beyond simple inaccuracy, is that Kristof's false generalization defined sexualized violence as “the problem” for Liberian women. Like many of our false generalizations about rape, that’s conveniently simple—but in painting women as rape victims, it erases bigger and more complicated stories about war, violence, displacement, and survival.
On surveys and via police, hospitals, and other sources, female victims report more frequently than male victims; the rich report more frequently than the poor. Better educated people are more likely to report, as are victims who suffer more serious physical injury from their attacks. Increasing reports of rape may represent increased rape—or they could simply represent a change in reporting dynamics. Maybe there’s a new female police officer. Maybe this year’s survey enumerators are better trained. Frustratingly, it’s almost never a good idea to make strong inferences from data on patterns of rape, whether those patterns are temporal, demographic, or geographic.
For example, in 2012, the Department of Defense invested about $15 million in its Sexual Assault Prevention and Response Office (SAPRO). In its 2010 report, SAPRO congratulated itself on a decline in reported sexualized violence between 2006 and 2010. But as others have noted, these results can’t be trusted. Leaving aside questions of political motivation, the 2010 survey had an overall response rate around 30 percent (just 15 percent among enlisted men, about 30 percent among enlisted women, and up to about 60 percent among officers of both genders). Across several studies, the proportion of women veterans reporting sexualized violence ranges from 4 percent to 71 percent, depending on precisely how sexualized violence is defined and how the question is asked. I can think of a dozen reasons for that four-year decline that aren’t “less rape.”
It’s not that we can’t say anything about patterns. But it’s worse than useless to report numbers or patterns as if they were Just True. Our immediate questions should be: Precisely where do these data come from? What reporting dynamics might be underneath the patterns we (think we) see in the data? Whose stories are missing, and why?
I've seen an uptick in overgeneralization from limited data recently, and it’s not limited to arguments about numbers or patterns. UK Foreign Secretary William Hague’s laudable initiative to fight wartime rape comes attached—less laudably—to the assertion that wartime rape has a single strategic cause. In his January 28 Huffington Post piece, Hague writes: “From Bosnia to Somalia, Sierra Leone to the DRC, and Rwanda to Libya, sexual violence has been used to terrorise and destroy communities…” and goes on to say that it is violence “used as a military tactic.” While rape and sexualized violence certainly terrorize and destroy, the assertion here—that wartime rape is the result of military strategy across conflicts, groups, and times—is just plain wrong.
Ignoring variation in the patterns and causes of wartime rape means ignoring complexity, and blindness to complexity is precisely the opposite of what is needed in places like Syria, where the available evidence suggests that rape is a mix of small-group opportunism and a somewhat more organized detention-based torture tactic, and that different armed groups are perpetrating different levels and types of assaults.
We’re not going to make a dent in the problem by ignoring armed groups that don’t rape, or groups in which dynamics at the sub-group level (rather than top-down orders) play a major role, or by viewing all wartime rape through Bosnia-colored glasses.
The way we classify wartime rape can have an impact on how we hold people accountable, too. Calling rape a “weapon of war” implies that there is a top-down strategy involved, that a commander said, “Go forth and rape.” The reality, though, is that it doesn't usually happen this way—and that commanders who quietly tolerate rape ought to be held accountable, too. Thinking of rape as strategic—rather than as a witches’ brew of strategy, military socialization, small-group dynamics, command toleration, and battle trauma—won’t aid policies aimed at prevention. Nor, in the end, will it aid accountability. We could spend years searching for a smoking gun in a conflict like Syria's, while rapists and the officers who condoned their behavior go free.
What to do
None of this is to say that we can’t learn anything about numbers, patterns, or causes of sexualized violence. In a short 2011 book I co-authored with Francoise Roth, director of the Colombian NGO Corporación Punto de Vista, and Tamy Guberek, a colleague at the U.S. nonprofit Human Rights Data Analysis Group, we made several key recommendations. The most important, in my opinion, is that researchers of sexualized violence use multiple research methods, complementing numbers with narratives and vice versa. In fact, our numbers should have narratives. Having a clear sense of the origins of our data—knowing the survey questions, thinking through the sampling strategy, considering access to reporting mechanisms, looking for holes in the net—can protect us from the worst of our interpretive excesses.
Why does this matter? Obviously, getting it right is key to prevention and accountability efforts. Ultimately, the goal is less rape, not just more awareness, and the way we approach evidence affects the policies we put in place. On a more fundamental level, though, I am convinced that getting it right is a respect issue. Looking at rape data and asking the wrong questions (or overgeneralizing, or ignoring complexity) is akin to looking at a rape survivor and doing the same. I want to be sure we do better for the women and men who have already suffered enough.