May 13, 2013

FAQ: The Geography of Hate

Dear Readers,

Thanks to everyone (well, almost everyone) for their comments and constructive critiques of our Geography of Hate map. Because these comments have come from so many different directions, we want to respond to the more common questions and misunderstandings all at once. Before commenting or emailing about the map, please keep the following in mind...

1. First, read our original post. Second, read through this FAQ. Third, read the "Details about this map" section included in the interactive map itself. We spent time on these materials specifically in order to explain our approach, and they go into some detail about the methods we used. Nearly all of the critiques of our map are already addressed in one of these places. We're happy to engage, and we're confident in our methodology (not that any approach is perfect), but please, use the skills your first teacher gave you and take the time to read.

2. If you are offended by these words, and we sincerely hope that you are, remember that they are the object of a research project. As such, we felt compelled to reproduce the words in full in order to be as clear as possible about our project. While we agree that the use of these slurs can be hurtful to some, especially the groups that they are targeted at, we believe that there is a difference between including them as the object of our study and using them as they are 'meant' to be used.

3. The map is based solely on geocoded data from Twitter, and does not reflect our personal attitudes about a given place. The map represents real tweets sent by real people, and is evidence that the feeling of anonymity provided by Twitter can manifest itself in an ugly way. If you feel that the place you live is more or less racist than somewhere else and this isn't reflected in the map, please start a conversation with your community about these issues.

4. In order to produce this map, we took the geotagged hateful tweets, aggregated them to the county level, and then normalized these counts by the overall number of tweets in each county. This means that the spatial distributions you see for the different variables are decidedly NOT showing population density. As we mentioned above, this is clearly stated in all of the previously written material accompanying the map. And because we are specifically looking at the geographic patterns of Twitter activity, it makes more sense to normalize by overall levels of Twitter activity than by population.

Were that not enough, however, the fact that there is so little activity on the map in California - home to an eighth of the entire US population, including the cities of Los Angeles, San Francisco and San Diego - should be a clue that something other than population is at work in explaining these distributions. While we share the infamous xkcd cartoon's distaste for non-normalized data, just because you thought for a second that maybe it was relevant in this case doesn't make it so. There are many possible explanations for some of the distributions that you can see, and we don't pretend to have all of them. But population just isn't one of them.
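For anyone who wants the mechanics spelled out, here is a bare-bones sketch of that normalization step. It assumes a simple table of geotagged tweets with a county identifier and a hand-coded hateful/not-hateful flag; the file and column names are illustrative, not our actual data schema.

    import pandas as pd

    # One row per geotagged tweet: the county it falls in and whether the
    # coders flagged it as a hateful use of one of the tracked words.
    tweets = pd.read_csv("geotagged_tweets.csv")  # columns: county_fips, is_hateful

    per_county = tweets.groupby("county_fips").agg(
        hateful=("is_hateful", "sum"),   # hateful tweets in the county
        total=("is_hateful", "size"),    # ALL geotagged tweets in the county
    )

    # Normalize by overall Twitter activity in the county, NOT by population.
    per_county["hate_rate"] = per_county["hateful"] / per_county["total"]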

5. This map includes ALL geotagged tweets containing these words that were determined to be negative. This is not a sample of tweets containing these words, but rather the entire population that meets our criteria. That being said, only around 1.5% of all tweets are geotagged, as geotagging requires opting in to Twitter's location services. To be sure, that subset might be biased in a multitude of ways when compared with the entire body of tweets, or even with the general population. But that does not mean that the spatial patterns we discover based on geotagged tweets should automatically be discarded - see, for example, some of our earlier posts on earthquakes and flooding.


6. 150,000 is in no way a "small" number. Yes, it is less than the total population of Earth. Yes, it is less than the number of atoms in the universe. But no, it is not a small number, especially as it is the total population of the phenomenon rather than a sample (see #5). And were one to extrapolate, considering that these 150,000 geotagged hateful tweets represent only around 1.5% of all hateful tweets, the actual number of tweets (both geotagged and not) containing such hateful words is quite a bit larger - on the order of 150,000 / 0.015, or roughly 10 million. Regardless, we think that 150,000 is a sufficiently large number to be quite depressed about the state of bigotry in our country.


7. Furthermore, given that each and every geotagged tweet containing the words listed was read and manually coded by actual human beings (if you consider undergraduates to be human beings!), rather than automatically by a piece of software, 150,000 isn't an especially small number. For students to read just these 150,000 tweets took approximately 150 hours of labor. This isn't insignificant.

8. The original list of words was derived from http://en.wikipedia.org/wiki/List_of_ethnic_slurs and http://en.wikipedia.org/wiki/List_of_LGBT_slang and included the following words:

bitch
nigger
fag*
homo*
queer
dyke
darky OR darkey OR darkie
gook*
gringo
honky OR honkey OR honkie
injun OR indian
monkey
towel head
wigger OR whigger OR wigga
wet back OR wetback
cripple
cracker
honkey
fairy
fudge packer
tranny

A * indicates that a list of lexeme variations was used, which accounts for alternate spellings and forms of a word. For example, "fag" was not just "fag," but also "fags", "faggot", "faggie", and "fagging", among other things. All geotagged tweets containing these terms were examined. All tweets in which a term was not used in a derogatory manner were discarded during coding, and as a result some words no longer reached the minimum count to be displayed on the map. For example, honky/honkey/honkie was discarded, as most of those tweets were positive references to honky-tonk music rather than slurs aimed at white people.
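To make the asterisks concrete, the matching boils down to checking each tweet against a small set of variant spellings for each root term. A hedged sketch of that step (the variant lists and names below are illustrative excerpts, not our full lexeme lists):

    import re

    # Example variant lists; the real lexeme lists were longer.
    LEXEMES = {
        "fag": ["fag", "fags", "faggot", "faggots", "faggie", "fagging"],
        "homo": ["homo", "homos"],
        "wetback": ["wetback", "wetbacks", "wet back"],
    }

    # One case-insensitive, word-boundary pattern per root term.
    PATTERNS = {
        root: re.compile(r"\b(?:" + "|".join(map(re.escape, variants)) + r")\b", re.IGNORECASE)
        for root, variants in LEXEMES.items()
    }

    def matched_roots(tweet_text):
        """Return the root terms whose variants appear in the tweet."""
        return [root for root, pattern in PATTERNS.items() if pattern.search(tweet_text)]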

In the end we were also constrained by which words could feasibly be manually coded and which could not. For instance, the 5.5 million tweets referencing "bitch" were excluded from the list. Students were paid roughly $10 per 1,000 coded tweets, and therefore including the word "bitch" alone would have cost roughly $55,000 to manually check for sentiment; tranny/tranney would have been under $200. While we're obviously interested in including a wider range of hateful terms in our analysis, our research funds, and thus the scope of this project, are extremely limited. It's not like we have billions of dollars in funding lying around. If you feel strongly, feel free to donate at http://humboldt.edu/giving and enter "The Geography of Hate Project" in your comments.
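For anyone curious how those dollar figures fall out, the arithmetic is simply the tweet count divided by 1,000 and multiplied by the coding rate; a trivial sketch using the numbers above (the smaller count is hypothetical):

    RATE_PER_1000 = 10.0  # dollars paid per 1,000 manually coded tweets

    def coding_cost(n_tweets, rate=RATE_PER_1000):
        """Approximate cost of having students read and hand-code n_tweets."""
        return n_tweets / 1000.0 * rate

    print(coding_cost(5_500_000))  # tweets containing "bitch": roughly $55,000
    print(coding_cost(15_000))     # a hypothetical smaller term: roughly $150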

9. If you are a disgruntled white male who feels that the persistence of hatred towards minority groups is a license to complain about how discrimination against you is being ignored, just stop. You can refer to all of our previous commentary on this issue from November. Though we have typically refrained from deleting asinine comments to this effect - those who choose to make these comments do more to prove themselves to be fools than we ever could - we fully reserve the right to delete any and all comments we believe to be unnecessary.

36 comments:

  1. Thanks for the information; it is insightful to get more detail on what actually went into the analysis. I am not seeing anything that addresses retweets - was that tracked in your analysis at all? Was a retweet of a hateful comment evaluated for a change in meaning? For example, a retweet that quoted an offensive tweet but derided the comment or asked for an explanation would not necessarily be offensive despite containing a quote of the original offensive tweet. Or was this just not common enough to deserve consideration?

  2. See point #7: the students read every tweet used in this analysis. A simple retweet of the offensive comment (without changes) would be rated as negative. If somebody derided the comment or changed the meaning it would be considered positive and not used for this map. Often Re-tweets would add an additional offensive word like "don't call me a nigger, faggot" which would be negative for 'fag,' but not for 'nigger.'

  3. Would you mind telling us what the national base rate of hateful tweets to all tweets is (not the average of the ratios, but rather (sum of all hateful tweets across the nation) divided by (sum of all tweets across the nation)), and what the mean and standard deviation of hateful tweets (across all counties) and all tweets (across all counties) are? It would be nice to have a little cardinal information, in order to think about the meaning of the maps in a clear way.

  4. Ironically, the public has little issue with the sample size of the Gallup polls they see on the nightly news, which typically have a +/- 3% margin of error and survey 1,500 people (http://www.gallup.com/poll/113980/Gallup-Daily-Obama-Job-Approval.aspx). For the Geography of Hate map, 150,000 tweets were sampled at a rate of 1.5% (if all geotagged tweets were captured). That means the margin of error is +/- 0.23%.

    It is fair to criticize the fact that Twitter is a biased sample of the public, or even that geotagged tweets are a biased sample of Twitter itself. Criticizing the sample size, though, isn't valid by any research standard. That Twitter reflects only Twitter is important to remember, but that is very much the way the research is framed here.

  5. I think the heat map is the wrong map at the national level. It is showing the sum of the averages of all counties within some number of miles. That means that the national map is heavily weighted towards areas with geographically small counties. Note that you can pick out the cluster of small counties in both Idaho and east of San Francisco in California.

    More seriously, it does not seem that you are accounting for the sample sizes of each county. There should be more variation in small-population counties, which means that the most red counties will almost all be small-population counties. As evidence of this, I checked the source data for the first homophobic word. The 10 counties with the smallest above-average rates all have populations greater than 170,000. That puts them in the top 12% of counties by population.

    I also think 150,000 is small in this context. Given 3,000+ counties that's 50 per county. The median county population is about one-fourth of the mean county population. So, that should be about 13 or fewer for half of the counties. That's not even considering the tweets that were not deemed negative or that you are looking at 10 different words.

    I also wonder about the effects of non-English speakers. Could you throw out tweets that are not in English somehow? Or maybe search for similar words in other languages?

  6. Hi Monica, I love your maps!

    I was wondering (and I believe PB addresses this a bit), how easy is it for a single hateful person to throw off the measurement for a small county? Did your students only use one tweet per account or were multiple tweets allowed per person? Was there anything that controlled for this? I am concerned, like PB, that the smaller counties are naturally more vulnerable to single malevolent individuals, whereas those people would be drowned out in major cities.

    Bias aside, I'm loving this map, especially when you zoom one notch further in than the default. It's interesting to me the pocket of hate that seems to exist in northern Vermont / New York State.

    I know that making a geo-map is extremely difficult. Use non-normalized data and you are making a map of US population (we've all seen that XKCD comic!). Normalize it and you are creating inherent biases toward larger or smaller areas. Try to account for this and you are introducing your own biases. Sometimes it seems like you just can't win with geo-heatmaps. And those big counties out in the West! They can make it seem like there is 1,000 square miles of pure hate, when it could really be one hateful jerk and 10,000 cows. You've done a fine job here, and even though there may still be some biases, there is legitimate information to be learned from this map.

  7. Excellent. Would be most interesting to add anti-Semitic terms to the map--another long-enduring legacy of hate.

  8. I'm fairly concerned that this has already devolved into a map of Badthink.

    Let's recall that the First Amendment EXISTS primarily to *protect* speech (and, by implication, thought) that we dislike or are uncomfortable with. It holds as true for racists and homophobes as it does for 99%ers "speaking truth to power".

    One can be a hateful, bitter person toward any ethnic, gender, social, or demographic group for whatever arbitrary reason...but if it doesn't impact one's treatment of others around them, then one is entitled to be left alone and not harassed for their beliefs.

    Replies
    1. If I may address this as someone who's not affiliated with the project...

      This really has nothing to do with the first amendment, simply by virtue of the fact that it is not a governmental project. The Constitution of the US defines and limits governmental action and power, not private action and power.

      Beyond that (and perhaps more importantly), it is also not in any way attempting to legally prevent people from saying/tweeting what they will.

      —and Twitter itself is a private company, and thereby not obligated to allow its users to say whatever they want (but they do allow almost anything, as you know). See above re: the Constitution.

      As for the statement that "...one is entitled to be left alone and not harassed for their beliefs," quite true, quite true. The map doesn't provide any identifying information about individuals, however. The people who tweeted the flagged messages were not personally contacted or harassed in any way.

      I do quite understand where you're coming from with all of this. It's good to be conscious of rights, and good to be thoughtful towards folks of all kinds.

      ...but one might easily point out that freedom of speech includes the ability to comment upon other speech. The US is not a nation of monologuists. Conversation is the real goal, and this map/project is a part of that conversation.

    2. Well said, James. Objective analysis of facts (however insular the source, i.e. the Twitter-using public) is relevant to the discussion.

      The thing I find most disturbing about this is my belief that Twitter is, for the most part, a young person's tool. These are supposed to be our enlightened upcoming generation and we're still seeing this? Just sad is all.

  9. I'm curious to know if you baselined your sample against the level of "hate tweets" from, say, last year at the same time? The original hypothesis came from a personal observation, but how can we know if the rates shown on your map are the norm or a true post-election spike?

    Thanks!

  10. Is it possible to get a list of the numbers for the counties, by word? I think this is excellent data, but I am not convinced that a map is the best way to represent it.

    Because the west is so empty, it distracts from the other real differences present in the data and makes the similarity to population densities falsely apparent.

  11. You should have called the original post "Hate Map".

    This is my first encounter with floatingsheep, but it looks like there are some interesting data science issues being discussed here, so I plan to be back even when you're not discussing hot-button topics.

  12. Hi Monica,

    Really fascinating. I wanted to ask though-- how were you able to pull location data from the tweets? I understand that twitter *sometimes* has longlat data attached to tweets for those who enable it; other tools pull general city/state info from the "about" section.

  13. As someone involved in the dog world, I can confirm that dog breeders not only use Twitter, but they commonly refer to female dogs as "bitch" - as in, "We lost a beautiful 3 y.o. bitch when she was hit by a car." So I don't blame you for excluding this word from the count - that's a lot of tweets.

  14. Very interesting! Thank you for taking this on.

    A couple of thoughts. I apologize if they have already been asked and answered. If so maybe you could point me to the answer.

    1 - Why was the word "retard" left off? When it comes to disability hate terms, "retard" is at the top of the list.
    2 - Could the focus of hate tweets coming from the East side of the US be an indicator that Twitter as a tool is used at a higher rate in that part of the country?

  15. Hi Monica, as mentioned in this MeFi thread (http://www.metafilter.com/128081/Hate-Map#4979364) I think there may be a problem with the Google Map visualization. The vis appears to aggregate by area, but counties are closer together in the Eastern half of the country, so even if the "hate hot-spots" were distributed randomly you would still expect to see that Eastern bias at the national zoom level.

    I would also be curious to see what the map looks like using some notion of statistical significance - you can test for significant enrichment of hateful tweets using a Fisher's exact test or chi-squared, and then plot some type of corrected p-value (or false discovery rate) instead of the raw enrichment ratio. That should take care of the "small county being driven by a single racist" problem.

    Interesting work, though, and I'm looking forward to reading the paper.
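    For concreteness, a minimal sketch of the per-county test suggested above, assuming a simple table of hateful and total geotagged tweet counts per county (the file and column names are hypothetical):

        import pandas as pd
        from scipy.stats import fisher_exact
        from statsmodels.stats.multitest import multipletests

        per_county = pd.read_csv("per_county_counts.csv")  # columns: county, hateful, total
        nat_hate = per_county["hateful"].sum()
        nat_total = per_county["total"].sum()

        def enrichment_p(row):
            """One-sided Fisher's exact test: is this county enriched for hateful tweets?"""
            in_cty = [row["hateful"], row["total"] - row["hateful"]]
            elsewhere = [nat_hate - row["hateful"],
                         (nat_total - nat_hate) - (row["total"] - row["hateful"])]
            _, p = fisher_exact([in_cty, elsewhere], alternative="greater")
            return p

        per_county["p"] = per_county.apply(enrichment_p, axis=1)
        # Benjamini-Hochberg correction gives a false discovery rate per county.
        per_county["fdr"] = multipletests(per_county["p"], method="fdr_bh")[1]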

    Replies
    1. Patrick's point about the size of counties seems to pretty much nail the reason why the "hate map" looks exactly like a map of 1/county size. I think this is a really interesting exercise, so I'm looking forward to version 2 of the map.

    2. An easy-to-compute notion of significance would be to divide each county's number of geocoded tweets containing a key word by the square root of the total number of geocoded tweets from that county. You could then measure how many standard deviations from the mean each county's value is. This should level the playing field for small and large counties.

      I'm assuming we are viewing all of the geo-coded tweets from a county as a random sample from all tweets (thoughts?, messages?) from that county. The number of tweets with a key word then has a binomial distribution.
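      In code, the binomial version of this idea (with hypothetical file and column names) would be roughly:

          import pandas as pd

          per_county = pd.read_csv("per_county_counts.csv")  # columns: county, keyword, total

          # Treat each county's geocoded tweets as n draws with a shared probability p
          # of containing the key word; the count is then Binomial(n, p).
          p = per_county["keyword"].sum() / per_county["total"].sum()
          expected = per_county["total"] * p
          sd = (per_county["total"] * p * (1 - p)) ** 0.5

          # How many standard deviations each county sits from its expected count.
          per_county["z"] = (per_county["keyword"] - expected) / sd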

  16. So what's the total number of geotagged tweets (not just the rude ones, I mean the total population)? I'd be interested to know what percentage of tweets contain some kind of hate speech.

  17. Sorry if this has been addressed in one of your earlier FAQs somewhere, but I'm wondering whether the tweets are read by the students under "blind" conditions. Which is to say, were the avatars, usernames, and geographic information viewable by the students who were evaluating the written content of the tweet, or were these stripped off?

    I mean, a reader's judgement-call as to whether a Tweeter's use of a slur was "truly hateful" or "just sarcastic" could possibly be influenced by subconscious bias based on what the user's avatar looks like, or where the user is Tweeting from.

  18. @Al:
    The thing I find most disturbing about this is my belief that Twitter is, for the most part, a young person's tool. These are supposed to be our enlightened upcoming generation and we're still seeing this?

    Good point, Al. But I'd also note that it's characteristic of many young people to use taboo words for their shock value -- and in 21st century American speech, "faggot" and "nigger" are much more shocking than most of the terms in George Carlin's famous list (possibly the word rhyming with "runt" can still be called highly taboo, because of its use by misogynists).

    So, if a lot of young Twitter users are throwing around hateful slurs like Mardi Gras beads, one possible reason is that "fuck" doesn't upset the grown-ups like it used to.

  19. Thanks for this fascinating map. It is disappointing that Palmyra VA, where I teach, is such a hotbed of n-word tweets. I would expect some to come from vernacular use by African American students, as I see this a lot, but Palmyra has a much lower percentage of persons of color than, say, Charlottesville and Richmond, so it is extremely disproportionate. Regarding excluding data, please note the spike in the word "dyke" just north of Earlysville (which is just north of Charlottesville). There is a perfectly logical explanation for this; there is a small town there, named "Dyke."

  20. This is a fascinating study! My question is: was there a distinction made between a white person's use of the "N" word and a black person's use of it? In essence, used by a white person it would be hateful, and used by a black person it would not be.

  21. Way to make yourself and others think you're making the world a better place.

  22. I am interested in the research, and I think it is valuable. However, as others have mentioned above, I think the methodology might be suffering slightly from the Modifiable Areal Unit Problem. Since counties are smaller east of the Rockies, it's more likely that clustering will appear there under your current methodology. One way you might try to get around this problem would be to overlay a grid and calculate the variables of interest for each cell. This would help standardize scale.
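    For example, a crude version of that grid overlay (using a fixed lat/long cell size; the file and column names are hypothetical) could be as simple as:

        import pandas as pd

        tweets = pd.read_csv("geotagged_tweets.csv")  # columns: lat, lon, is_hateful

        # Snap each tweet to a 0.5-degree grid cell instead of its county.
        cell_size = 0.5
        tweets["cell_lat"] = (tweets["lat"] // cell_size) * cell_size
        tweets["cell_lon"] = (tweets["lon"] // cell_size) * cell_size

        per_cell = tweets.groupby(["cell_lat", "cell_lon"]).agg(
            hateful=("is_hateful", "sum"),
            total=("is_hateful", "size"),
        )
        per_cell["hate_rate"] = per_cell["hateful"] / per_cell["total"]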

  23. I second what other people have said: "retard" should be included on the ableism side. Also, if the titles are "racist" and "homophobic," please use "ableist" to keep things consistent. "Disability" is not an "ism."

    I am really glad that you included ableism though, most people overlook it. PLEASE add "retard" and "retarded" though.

    Also, the current map looks like a population map (as others have said). What if you did a hate-per-capita map? Wouldn't that be more telling?

  24. I understand the realities of financial limitations regarding the end result, but in general I find it highly questionable that you excluded outright terms not deemed to be on the wrong end of the euphemism treadmill. Hate can be thrown around just as easily without "hateful" words.

    All this map really tells me is that the bigots in some parts of the nation apparently prefer "fag", "homo" and "queer", while others probably prefer "fairy", "sissy", "fudge packer" or just plain old "gay".

    Anyway, terminology-wise I'd also prefer "ablist" (and "orientationist" while I'm at it) and I agree that this really, really needs "retard" added at the very least (which unlike "cripple" and to a degree even all of the other terms listed really can't be used in a nice way other than calling out others for using it).

  25. My second thought when looking at this map (after "Whoa") was definitely "Why isn't retard on here?"

  26. The hover statistics don't seem to be working right now. I tried it in 3 different browsers (IE, Firefox, Chrome) and was unable to get anything except intermittent popups and only when it was on the default setting.

  27. Either someone in The Dalles, Oregon really doesn't like dykes, or you missed the fact that Google has a big server farm at that location and somehow a bunch of Twitter emissions go through that point as well. You might want to check it out.

  28. I'd also point out that a sizeable subset of the gay population now uses "queer" as a word to describe themselves, in the way that blacks will use "nigga" with each other.

  29. I'm not seeing a response to the sample size critique that the methodology is biased against rural areas.

    Here's a map of the 234 counties that are above average in at least 6 of the 10 words: (http://batchgeo.com/map/99189fa08dbe583c21d0312938df9023)
    That point of view certainly makes it look like the words are much more common in heavily populated counties.

    I've got a more detailed explanation here: (http://statexamples.blogspot.com/2013/05/a-map-called-geography-of-hate-seems-to.html)

    I would be interested to see a map where you divide by the square root of the number of tweets in a county instead of by the number of tweets.

    I also would be interested in data as to how many counties have 0 or 1 instance of the words.

    Replies
    1. "I'm not seeing a response to the sample size critique that the methodology is biased against rural areas."

      Don't hold your breath. Statistical rigor is not high on the list for these guys, would be my guess.

  30. Of course population has something to do with it! No, it probably isn't everything, but you can't just say 'oh, it's not a factor, trust me'! There could be MANY reasons California has few tweets. More black people live in the east and the south; that could be one. But even if that doesn't explain it, one thing is absolutely clear: all things equal, you will get more tweets in places with more population. Now, maybe cities are more or less likely to engage in this behavior. But we don't even know, because it's not adjusted for population. It's fine to leave that out for technical reasons, or any reason really. But to say it doesn't have anything to do with population and that's why it's not included - that's just bad science.

    Replies
    1. When you accuse something of being "bad science," please remember that science is an iterative process. Statistical rigor requires sampling methodologies (raw data) that can support more advanced analytic techniques. Real science does not happen like on CSI; it's not as simple as clicking a "factor out population density" button. I applaud the authors for taking a brave step that they undoubtedly knew would get some of that hate thrown their way.
