
September 30, 2010, 06:05 PM ET

Separating the Truth From the Truthy

Chronicle of Higher Education

A new Web project out of Indiana University is separating the truth from the "truthy" in political tweets.

The project—named "Truthy," after Stephen Colbert's descriptor for misinformation dressed up as fact—mines Twitter to analyze patterns in political discussions and makes the information available online. The software allows visitors to take a closer look at Twitter trends to spot data manipulation by tech-savvy special-interest groups.

"We're trying to study how information propagates online through social networks, blogging, and social media," said Filippo Menczer, associate professor of informatics and computer science at Indiana, who is leading the research. Truthy, he said, attempts to answer the question, "Can we put together our understanding of complex social networks and crowdsourcing to automatically detect the spread of misinformation?"

Through the site's database of Twitter memes—ideas passed from user to user by retweets or mentions, often signified with hashtags like #TeaParty or usernames like @LadyGaga—visitors can track how often a meme is mentioned, how many unique users mention it, and how the volume of tweets changes over time. Truthy also checks tweets against a list of words and word clusters compiled by psychologists to determine a tweet's mood. Truthy can tell, for example, whether users are hostile or supportive of #GOP at any given moment.
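The mood check described above amounts to matching tweet text against curated word lists. A minimal sketch of that idea in Python might look like the following; the word lists and scoring here are hypothetical placeholders, not the clusters the project's psychologists compiled.

    # Hypothetical word lists standing in for the psychologists' word clusters.
    HOSTILE = {"corrupt", "liar", "fraud", "disgrace"}
    SUPPORTIVE = {"proud", "support", "great", "win"}

    def mood_score(tweet_text):
        """Return > 0 for a supportive tweet, < 0 for a hostile one, 0 for neutral."""
        words = tweet_text.lower().split()
        return sum(w in SUPPORTIVE for w in words) - sum(w in HOSTILE for w in words)

    for tweet in ["Proud to support #GOP tonight", "#GOP is a fraud and a disgrace"]:
        print(tweet, "->", mood_score(tweet))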

Visitors are also asked to identify memes that appear to have been tampered with by flagging them as "truthy." This crowdsourcing tool will allow users to see where misinformation is being spread and will eventually help researchers develop algorithms to automatically identify Twitter abuse by identifying "truthy" behavior. "We're interested to see if we can get reliable data from crowdsourcing," Mr. Menczer said.
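As a rough illustration of how crowdsourced flags could feed an automatic check, one could tally "truthy" flags per meme and surface memes whose flag rate crosses a threshold. The thresholds and function names below are invented for the sketch, not values or code from the project.

    from collections import Counter

    flags = Counter()   # meme -> number of "truthy" flags from visitors
    views = Counter()   # meme -> number of times the meme's page was viewed

    def record_view(meme):
        views[meme] += 1

    def record_flag(meme):
        flags[meme] += 1

    def looks_truthy(meme, min_views=50, flag_rate=0.10):
        """Surface a meme for review once enough visitors have flagged it."""
        return views[meme] >= min_views and flags[meme] / views[meme] >= flag_rate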

The site has also introduced a new way to visualize memes as they emerge and grow. Maps of Twitter memes—like the #GOP map—show interactions between individual users (represented by black dots) as they connect via retweets (blue lines) and @replies (orange lines). As various user clusters spring up around a topic, their activities are mapped by the Truthy software. In the #GOP example, Mr. Menczer said, two distinct clusters—conservative and liberal—have emerged, with each cluster's users echoing members of their own group through retweets and talking about members of the other cluster through @replies. That is why the map shows two dark blue clusters connected by a web of orange lines.
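The retweet and @reply maps are, in effect, a directed graph with two edge types. A small sketch using the networkx library, with made-up interactions, shows the kind of structure being drawn; it is an illustration of the idea, not the Truthy visualization code.

    import networkx as nx

    # (author, target, kind) tuples; "retweet" edges are the blue lines and
    # "reply" edges the orange lines in the maps described above.
    interactions = [
        ("alice", "bob", "retweet"),
        ("carol", "bob", "retweet"),
        ("dave", "alice", "reply"),
    ]

    G = nx.DiGraph()
    for author, target, kind in interactions:
        G.add_edge(author, target, kind=kind)

    retweets = sum(1 for _, _, d in G.edges(data=True) if d["kind"] == "retweet")
    replies = G.number_of_edges() - retweets
    print(f"{G.number_of_nodes()} users, {retweets} retweet edges, {replies} reply edges")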

Truthy was inspired by a study of Twitter bombs—floods of messages sent by a small group of users to manipulate trending Twitter topics—conducted by a research team at Wellesley College earlier this year. According to researcher Panagiotis Takis Metaxas, a professor of computer science at Wellesley, the study examined the tactics of a conservative group that manipulated trending topics on Twitter on the day of a senatorial election in Massachusetts in January. The study found that the political group created nine Twitter accounts just days before the election and monitored Twitter users who were posting about the upcoming vote. On election day, the group sent @replies with disparaging comments about the Democratic candidate and a link to the group's Web site to 60,000 users before the accounts were shut down by Twitter administrators. The Twitter bomb not only spread misinformation about the candidate, Martha Coakley, but the tweets also appeared in Google search results, since the search engine was displaying real-time messages at the time. "As we were looking through the data, we suddenly realized that there was some pattern that was trying to influence Google," Mr. Metaxas said.
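The pattern the Wellesley study describes—brand-new accounts blasting @replies that all carry the same link—lends itself to a simple heuristic. The sketch below assumes a hypothetical tweet schema and thresholds; it is not the study's method.

    from collections import defaultdict
    from datetime import timedelta

    def find_bomb_candidates(tweets, max_account_age_days=7, min_replies=100):
        """tweets: dicts with 'author', 'account_created', 'sent', 'is_reply',
        and 'link' keys (an assumed schema). Returns (author, link) pairs that
        sent many @replies carrying the same link from a very young account."""
        counts = defaultdict(int)
        for t in tweets:
            age = t["sent"] - t["account_created"]
            if t["is_reply"] and t["link"] and age <= timedelta(days=max_account_age_days):
                counts[(t["author"], t["link"])] += 1
        return [key for key, n in counts.items() if n >= min_replies]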

Although he does not believe the bomb had a major impact on the election's outcome, Mr. Metaxas said the group's attempt to influence Twitter and Google search should make Internet users more wary of what they read online. "People who are not very astute to the way these things work may get confused," he said. "We need to have very good critical thinking skills to survive."

While Mr. Menczer said the Truthy format is not yet set in stone, he hopes that the Web site will stop political groups from "abusing the system and spreading misinformation."

"Ideally," he said, "this will disincentivize the abuse."

Comments

1. dr_grandma - October 01, 2010 at 08:07 am

Any way to get some more results of this study? I'd like to follow it.

2. crunchycon - October 01, 2010 at 09:59 am

It appears they are concentrating on conservatives -- all the hashtags mentioned are. In my experience following the language of politics over the last 30+ years, smear campaigns more often originate on the OTHER side of the "aisle".

3. ldoll - October 01, 2010 at 10:14 am

In my experience, smear campaigns are more likely to originate with conservatives...a poor bunch of losers if ever there were some. I give you "gun boat vets against Kerry" and the lovely smear campaign against McCain, using his service to his country AGAINST him (which I personally thought was a new low).

If conservatives don't win outright, they spend their time trying to sabotage the agenda, until they are back in power. There is a reason that the electorate is getting sick of "politics as usual"...

I'm personally mortally tired of the blazing ignorance and "spin" of modern discourse...and welcome anything that might bring out more truth and "light" into the public arena.

4. beprepared - October 01, 2010 at 10:45 am

And who will serve as the judge?

5. bevfreeman - October 01, 2010 at 02:00 pm

This is really great. I hope this initiative has a long life. Twitter has confused discourse. Also, there is no accountability because people and their qualifications are hidden behind hashtags.
Please keep us apprised.

6. rgregory - October 01, 2010 at 02:51 pm

@beprepared -
The judge of what? The engine simply measures conversational tone against known topics and plots them by who and how often the author connects with. There isn't any bias in the engine at all.

As for the conservative hashtags in the article, the original study looked at a specific instance of tech used to smear a candidate in a way that could not be responded to. The fact that the incident originated in the conservative camp accounts for the hashtags that were mentioned, but has no bearing on focus in the current mapping technique used in IU's project, nor does it seem to be an indication of political lean in the article.

7. crunchycon - October 01, 2010 at 03:12 pm

and those that originated with liberals were conveniently ignored, eh?

8. rgregory - October 01, 2010 at 03:20 pm

@crunchycon:

Not at all. Or maybe, by definition, yes. The original study looked at a specific, one time, historical incident that just happened to originate in the conservative camp. It happened in the researchers' own state at the state election level. It was newsworthy, and so they studied it. That really is all there is to it.

9. corkie - October 01, 2010 at 04:17 pm

Both sides of the political continuum are starting to smear opponents, and, at least from what I've seen locally, the Democrats seem to be in the lead this year for attack politics.
It will be interesting to see how balanced the results of this software service are compared to what we see online.

10. arrive2__net - October 02, 2010 at 02:30 am

I think research like this recognizes that any side of a controversy could try to manipulate the way information and misinformation is distributed on the internet. I think the dynamics it measures are something savvy web users have to know. The understanding of web dynamics driven by this research is something that could be used for good or ill, but both sides can get an understanding of what really happened through this type of analysis.

Although attacks like the one described in the article would be difficult to fend off, it seems like their effects would be limited where the targets (the voters) understand that advocates may be trying to manipulate them, and because election day itself is very late in the process to try to make up people's minds. However, it suggests that in close races both sides may want to have teams online, ready to react and fend off such attacks.

Bernard Schuster
Arrive2.net
