Ad verba per numeros

Tuesday, August 13, 2013, 11:48 PM
Update December 16: If you are interested in something more "academic", you may find this paper of mine of interest; it is available for free here.

Update August 15: I was wrong to say that DiGrazia et al. should have cited Morstatter et al. I warned them about that paper on April 25, but by then they must have already submitted to ASA 2013 and, therefore, their final paper could not have made reference to the work by Morstatter.

Prior warnings:

  • This is a lengthy post, I'm sorry. I've done my best to structure it properly, but it touches plenty of intermingled topics and, besides, it has been written in a rush (paraphrasing Pascal, if I had more time, I would have written a shorter post).
  • I hope none of the people mentioned here will have hard feelings because of this post; you know, it's just academia, nothing personal.
  • The tone is sometimes humorous (or sarcastic, depending on your inclination).
  • The post tackles the following topics: (1) a comment on the paper that DiGrazia et al. presented at the Annual Meeting of the American Sociological Association in New York in August 2013; (2) some personal digressions on the use of "catchy" titles in academic papers and the possible unintended outcomes of that practice; (3) more personal digressions on press releases and handling the press when covering academic research; and (4) a comment on the op-ed published by Fabio Rojas in The Washington Post, tangentially related to the paper mentioned in (1).
That said, let the post begin!

In case you don't know my work, I've earned something of a reputation as someone who tends to criticize anyone claiming to have been able to predict elections using Twitter data (i, ii, iii, iv and v). There is a long and twisted story behind that reputation but, for the moment, let's reduce my role to a cliché: my only line is "No, you cannot predict elections with Twitter".

To clarify that point a little: up to now, nobody has actually tried to predict elections using Twitter data (except for this person and these people); everybody has shown that they could have predicted elections. In other words, researchers have collected tweets related to a given election, applied some method to them and, after the election, compared their results with the actual outcome and said eureka!

Hence, one of the facets of my role in this academic "drama" is that I ask researchers to predict elections before they are held. It seems pretty simple, right? Nevertheless, virtually every paper published up to now is a post-facto prediction which, obviously, casts doubt on such results; not only because authors can inadvertently engage in data dredging, but also because authors not achieving positive results are not publishing their papers and, hence, the public perceives that predictions are possible when they may not be.

I think such a requirement is sensible for a so-called prediction, but it is mostly ignored anyway, and some authors argue that their paper is not actually dealing with predictions in order to sidestep that requisite (BTW, in a Twitter conversation this argument was used by one of the authors of the DiGrazia et al. paper, but they are not the only ones).

However, just in case someone is paying attention: You. Have. To. Predict. In. Advance. If you don't want to follow my advice, follow that of Lewis-Beck (2005): "the forecast must be made before the event. The farther in advance [...] the better".

Having clarified my position (you cannot predict) and the worst sin of virtually every paper on predicting elections from Twitter (they make postdictions, not predictions), what's the matter with the work by DiGrazia et al.? Why should I comment on it, apart from fulfilling my role as Jiminy Cricket?

There are a number of reasons for doing so.

First, it is a nice paper, with issues like any other paper; however, if those issues were solved or mitigated, it would provide evidence for the hypothesis most of us had been working on without ever questioning it; namely, that Twitter actually provides insight (in some distant way) into the position of the public regarding elections.

Second, it is a perfect example of how a team of researchers can end up, through misfiring catchy titles and unfortunate press releases, in an uncomfortable position while trying to balance what their paper really says against what they seem to have said to some journalist and, hence, what the public believes is scientific evidence. It must be said, however, that in this case most of the harm is due, IMHO, to the title of the press release (not the fault of the authors of the paper) and to one unfortunate op-ed authored by just one of the authors.

In fact, this op-ed is the third reason to discuss the paper. The op-ed promises a lot on the basis of really thin evidence, but cites parts of the paper for support anyway, thereby confusing the public by mixing personal opinions with biased excerpts of scientific evidence. From my point of view, this op-ed is the most interesting aspect of this story, since it's the first time I cannot point at a journalist for having misinterpreted a researcher; it is the researcher himself who is "twisting" his work to picture his opinions in a much more attractive light.

The paper of interest is "More Tweets, More Votes: Social Media as a Quantitative Indicator of Political Behavior" by Joseph DiGrazia, Karissa McKelvey, Johan Bollen and Fabio Rojas. As mentioned above, the paper was presented at ASA 2013 and you can find a draft of the paper (slightly different from the final version) here. For the final version, please ask the authors, since the copy I got was sent to me by someone other than the authors and I haven't found it online.

However, before proceeding with my review of the paper, I would like to explain how I came to it. Quite simple: I was curious because of the title; in fact, because of the first part of the title: "More Tweets, More Votes".

If you are a connoisseur of academia, you know that attention is everything; attention gets you citations, coworkers, program committees, etc. You know the proverb: if a paper is written and nobody reads it, does it make any impact? Hence, authors try to get the attention of potential readers from the very beginning; that is, the abstract is important but the title is key. This paper has a really catchy title; indeed, catchy and clever, since it serves as an abstract, it's memorable and it nicely fits in a tweet. This paper has got a title which is a "winner".

Because of the title, I supposed that the authors were using the simplest approach to predict elections: counting the number of tweets for a candidate, taking them as votes, computing the vote share, et voilà!

With that supposition in mind, I read the paper (around April) and found it rather different from my initial guess. Wearing my referee hat, I found it to have some flaws but also some potential.

The title, however, was a problem: it promised much more than the paper provided, and I guessed it was going to be a headline attractor (as it has been).

Please do not misunderstand me: I'm a great fan of catchy titles for academic papers. I could very well introduce myself à la Troy McClure:

Hi, I'm Dani Gayo. You may remember me from such papers as "I wanted to predict elections with Twitter and all I got was this lousy paper" or "All liaisons are dangerous when all your friends are known to us". (BTW, both are real papers by me.)

The problem with catchy titles is that the paper has to live up to the title. With this paper, you get a feeling not entirely unlike the one you have after reading "The Neverending Story": it's nice, but it does not fulfill the title.

Before you forget this post to go read the paper, I'll provide a spoiler: the authors looked for a correlation between the number of tweets for a candidate and the margin of difference between that candidate and his or her opponent. They found the correlation to be significant and positive, which means that, in general, the larger the number of tweets, the larger the difference in votes you should expect.
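To make the kind of analysis concrete, here is a minimal sketch of a Pearson correlation between tweet share and vote margin. The numbers are entirely made up for illustration; they are not the authors' data, and a high correlation here says nothing about the paper's actual results.

```python
# Pearson correlation between a candidate's share of tweets and the vote
# margin, computed from scratch. Data below is hypothetical, not from the paper.
def pearson(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up districts: (candidate's share of tweets, vote margin in points)
tweet_share = [0.35, 0.48, 0.52, 0.61, 0.70]
vote_margin = [-12.0, -3.0, 1.5, 8.0, 15.0]

r = pearson(tweet_share, vote_margin)
print(f"correlation: {r:.3f}")  # positive, by construction of the toy data
```

A significant positive r on real, aggregate data is exactly the kind of finding reported; the catch, discussed below, is that a positive correlation across many races is not a per-race prediction rule.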

Unfortunately, the truth is that you cannot readily apply that finding as a rule of thumb to compute the expected vote share or vote difference from the tweets. That's why the paper does not live up to its catchy title. Is that a sin? No way; in fact it was, at least for me, a pleasant surprise and, indeed, the final version of the paper is quite interesting. Nevertheless, the title is far sexier than the paper, but I'm repeating myself.

In the final version of the paper, the authors made a great effort to control for confounding variables such as incumbency, demographic features and so on. The good news is that, when controlling for all of those variables, the correlation is still significant; the not-so-good news (from my point of view) is that the correlation between tweets and votes is really small when compared with other factors (such as incumbency, for instance).
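"Controlling for" a confounder here means regressing the vote margin on tweet share together with the other variables, so each coefficient reflects a partial effect. The following sketch does this with ordinary least squares via the normal equations; the districts, the single incumbency control and all the numbers are hypothetical, so the coefficient sizes are illustrative only and do not reproduce the paper's estimates.

```python
# OLS with a control variable, solved from scratch via the normal
# equations X'X b = X'y (Gaussian elimination with partial pivoting).
def ols(X, y):
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    c = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for j in range(col, k):
                A[r][j] -= f * A[col][j]
            c[r] -= f * c[col]
    b = [0.0] * k                             # back substitution
    for i in reversed(range(k)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# Columns: intercept, tweet share, incumbency (1 = incumbent). Toy data.
X = [[1, 0.35, 0], [1, 0.48, 0], [1, 0.52, 1], [1, 0.61, 1], [1, 0.70, 1]]
y = [-12.0, -3.0, 1.5, 8.0, 15.0]
intercept, b_tweets, b_incumbent = ols(X, y)
print(f"tweet-share coefficient: {b_tweets:.2f}, incumbency: {b_incumbent:.2f}")
```

The point of the exercise: a coefficient that survives the controls (stays significant) can still be small relative to the controls themselves, which is exactly the not-so-good news above.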

As a researcher, I'm reasonably excited by the good news: it means that Twitter is not inane chatter; tweets tend, in general, to serve as a proxy for public opinion. The problem lies in the not-so-good news: it means it is really hard to make any sensible prediction and, let's not forget, we humans want to predict the future, not simply to know that it is predictable in theory.

Why can tweets sometimes reflect the actual vote and sometimes not? I think a plausible explanation is that the Twittersphere is far from monolithic and that, depending on the candidate, or even the topic, one tribe or another is going to dominate the Twitter space, and you will never know which tribe is responding. You can find much more on this in this somewhat devastating report from Pew Research.

Those are the main worrisome aspects of a paper that is perfectly nice but promises a lot and then fails the reader by not fulfilling its promises. Needless to say, in addition to that, I have the usual petty comments any referee would provide (please note that many of these refer to the first draft of the paper; BTW, I exchanged some comments with one of the authors, Fabio, which makes writing this post much more complicated for me from a personal point of view):

  • The paper does not completely cover the prior art which, at this moment, is still small enough to be cited in detail. The following material could provide complete references to that prior art: [1], [2].
  • Much more detail should be provided regarding the dataset, especially the way in which tweets related to a candidate were obtained (keywords used and date ranges), and how topicality was ensured.
  • Since the paper was published after the work by Morstatter et al. (2013), the authors should at least acknowledge that the representativeness of their sample is unknown (since they have not used the Firehose). Indeed, the problem of not having access to the Firehose while trying to justify the Gardenhose as a representative sample is a bummer for all of us from now on. A clarification on this rectification can be found here.
  • The authors take into account both positive and negative tweets under the argument that "all publicity is good publicity". That is a really weak argument, and there have been a number of papers showing that tweet counting is a subpar approach. In the final version of the paper, the authors mention the Pollyanna effect which, in total honesty, I do not think they have properly interpreted in support of their argumentation. In other words, there is no actual argumentation to support the idea that negative tweets are a good thing for a candidate.
  • In the first draft there are a couple of scatter plots that, in my opinion, if anything only help to prove my point (i.e., that the number of tweets does not actually mean anything for a given concrete candidate). In this regard, I'd point to this post in The Monkey Cage, which does a much better job than I could of explaining the problems with that scatter plot and the interpretation the authors made of it.
  • In addition to that, performance in electoral prediction cannot be evaluated by telling how many elections one has correctly guessed (even when discussing a highly polarized system with only two major parties like the US). It is much more important to know the vote share, especially in disputed elections and/or systems with more than two parties (e.g., most of Europe and other countries in the world). Again, it's not me who says this but Campbell (2004).
  • Finally, my usual warnings about the Twitter user base not being representative of the population, self-selection bias, spam, propaganda, lack of geolocation of tweets, etc., apply to this paper.
So, in short, the paper is nice: it's an honest approach to the problem with some new ideas. If everything in the dataset is right, and this can be replicated with other comparable datasets, and/or data from other countries, and/or data with better preprocessing (eliminating bots and spam, for instance), then their results would be of interest for researchers, since they would imply that Twitter data means something (although we have been working on that assumption all along).

Unfortunately, the results are not groundbreaking and, no, in a given race more tweets do not always imply more votes; it's much more complicated than that.

So much for the paper. Let's talk about the press release.

As I said before, the paper had a press release issued by the American Sociological Association which you can find here.

I don't know how usual or unusual it is for the ASA to issue a press release for a conference article, but it certainly helped to spread the word about the paper while subtly twisting its main conclusion. The press release was issued with the following title: "Study finds more tweets mean more votes for political candidates".

If you are not used to press releases by research organizations, I'll tell you: they are interested in what I call "sexy research". Cures for cancer or AIDS qualify as sexy research, "striking" or "bizarre" results are also sexy research, and social media is currently sexy research in computer science (famous for being unsexy, unless you work in computer graphics or robots).

Having had my fair share of interviews with journalists and press releases issued, I think I know a little about them. First, press releases (and articles about science) are usually written in a rush; they must be brief, they must be catchy, they must be to the point, and they must show that the research is useful for the public (aka the taxpayers, aka your funding source).

If you read the press release regarding this paper, you can find that it describes the main findings of the paper, but from a rather optimistic point of view. This applies not only to this press release but to any press release regarding a piece of research. It is somewhat comparable to listing a house: you don't say "small", you say "cute"; you don't say "falling apart", you say "great potential"; you don't say "old", you say "with character". All of this makeup is usually done by the people in the press office, who know that we researchers/scientists are "too shy to sell our work"; they are there to simplify our explanations, to make them more mundane, closer to the public...

If you compare the writing of the release with the quotes from the authors, you can see that the authors are much more cautious in their statements. This is also common, since we researchers do not like to make bold statements (except in catchy titles), and usually these quotations are obtained by the people in the press office by asking questions (designed to elicit a catchy claim in line with their preconception of the piece of research).

If you have gone through this you probably understand the nightmare it is, the jokes from your colleagues once the press release is issued, the rants in the comments in the online newspaper... If you haven't you are lucky (lucky but with unsexy research, sorry).

Now, let's suppose you are a journalist going through a number of press releases. What would you think of this one? "Wow, Twitter can predict elections!" By now the snowball is rolling and, therefore, the paper is covered in plenty of newspapers, most of the time without even contacting the authors; they will publish the press release almost verbatim and the mantra "Twitter can predict elections" is on track again...

At this moment, I should enter the scene and do my routine.

However, this time was different (I had hoped): (1) I had already told the authors my point of view, (2) I'm really tired of saying over and over what I see as obvious (that predicting elections is NOT that easy) and (3) there is no point in trolling thousands of journalists worldwide plus hundreds of thousands of tweeters wowing "Hey, Twitter can predict elections" (BTW, I have never trolled journalists).

Nope, I'm water, my friend, I'm flowing...

I was flowing until I reached an op-ed by Fabio Rojas, one of the co-authors of the paper, published in The Washington Post and titled "How Twitter can help predict an election".

After reading it, I simply froze. I froze because I profoundly disagree with that piece; not actually because of the content, but because of the unfortunate timing and the (mis)use of a piece of research to support personal opinions about a future that may simply not occur. I also froze because I had had a gentle exchange of e-mails with Fabio regarding their paper, and I'm neither a bully nor a troll.

Hence, Fabio, this is not personal, just business.

As I see it, the op-ed is a piece of opinion about a plausible (albeit rather distant) future use of social media. If it were simply that, I would not have any problem with it, but Fabio cites the paper on which he is a coauthor to provide support for those opinions and, in fact, although some facts are extracted from the paper, the paper as a whole does not provide any evidence to support the claims in the op-ed.

The facts taken from the paper are provided without context and cherry-picked, so the readers (the public, the taxpayers, the source of funding) are misled into thinking that performance is much better than it really is and that predictions are easy to make.

The terrible "all publicity is good publicity" mantra is used again. The problem with that is that virtually every reader can think of counterexamples against that line of argumentation and, hence, put the author (and the research he undertakes) under a poor light.

It is because of all this that I think this op-ed was unfortunate at the least, and potentially harmful for the field of computational social science in general and for electoral prediction based on user-generated content in particular.

Finally, if you have reached the end of this post, you may wonder what my opinion on this matter is. Can elections be predicted from Twitter data or not? If you aim to use the methods reported in the literature up to now: no, you cannot. However, I'm mildly optimistic, and more work is needed (especially regarding adversarial scenarios, i.e., those where candidates are aware that Twitter data can be used to measure their potential). Needless to say, "I dunno, we need more work" is not sexy and does not make headlines.

To conclude, I'd like to provide a short summary for all of the topics I covered in this post:

  1. I rather liked the paper by DiGrazia et al. It has issues, but which paper doesn't? The title is sexy, and otherwise I would not have read it, so, although problematic, it was an overall good decision.
  2. Use catchy titles with caution. Think of them attached to your resumé before using them. Think of potential consequences, especially when they are read by press offices and journalists.
  3. Be extremely cautious when writing a press release or working with people from the press office to issue one. Be sure they do not distort your work.
  4. The op-ed by Fabio was, from my point of view, an unfortunate event because of its timing and because it mixed personal opinion with cherry-picked scientific evidence. It is hard for the non-expert to tell a scientist's opinions apart from hard facts; everything is "scientists say". Hence, when writing for the general public, be even more cautious and always make a clear distinction between facts accepted by the whole community, findings in your work (which may be disputed by other authors) and your personal opinions (driven or not by your findings).
As usual you can find me at Twitter: @PFCdgayo.

