A Few Caveats for Budding Social Media Research Mavens

December 4, 2014

Behavioral scientists have seized on social media and their massive data sets as a way to quickly and cheaply figure out what people are thinking and doing. But some of those tweets and thumbs-ups can be misleading. Researchers must figure out how to make sure their forecasts and analyses actually represent the offline world.

Big Data’s overwhelming appeal

Imagine you’re interested in analyzing society to answer questions like these: How bad is the flu this year? How will people vote in an upcoming election? How do people talk about and cope with diabetes? You could interview people on the street or call them on their phones – that’s what traditional polling firms do – but it takes time and can be quite costly. A promising alternative is collecting and analyzing social media data, quickly and essentially for free.

This article by Jürgen Pfeffer and Derek Ruths originally appeared at The Conversation, a Social Science Space partner site, under the title “Studying society via social media is not so simple”

Hundreds of millions of people use social media platforms like Facebook and Twitter every day. Individually, they create traces of their activities when they tweet, like and friend each other. Collectively, these users have produced massive, real-time streams of data that offer minute-by-minute updates on social trends – where people are, what people are doing and what they are thinking about. For the last several years, researchers in academia and industry have been developing ways to utilize this flood of data in their investigations and have published thousands of papers drawing on it.

A typical Twitter study could look like the following. Imagine you’re interested in how information diffuses after a tragic event. The moment you hear about such an event – for instance, the Boston Marathon bombing – you activate software on your computer that collects, in real time, tweets containing your keywords of interest – perhaps “Boston” in this case. Since there is no Twitter archive available to researchers, you’d use Twitter’s public data interface and collect whatever data come for free. After a couple of hours or days, you stop the data collection and start the analysis.
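As a rough illustration, here is a minimal sketch of such a collector in Python, assuming the tweepy library (v4) and a valid API bearer token. The keyword, file name and credential are placeholders, and Twitter’s endpoints have changed over the years – this shows only the general pattern:

```python
# Minimal sketch of a keyword-based tweet collector, assuming the
# tweepy library (v4) and a valid bearer token. All names below are
# illustrative placeholders, not the authors' actual setup.
import tweepy

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential

class KeywordCollector(tweepy.StreamingClient):
    """Appends the text of each matching tweet to a file as it arrives."""

    def on_tweet(self, tweet):
        with open("boston_tweets.txt", "a", encoding="utf-8") as f:
            f.write(tweet.text.replace("\n", " ") + "\n")

collector = KeywordCollector(BEARER_TOKEN)
collector.add_rules(tweepy.StreamRule("boston"))  # keyword of interest
collector.filter()  # blocks; interrupt after hours or days of collection
```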

What to watch out for

Not surprisingly, this effort to measure and predict human behavior from social media data is fraught with pitfalls – both obvious and very subtle. For instance, we know that different social media platforms are preferred by different demographic groups. However, most social media studies don’t carefully account for the fact that Twitter is used mostly in cities or that most Pinterest users are upper middle-class and female. This oversight can introduce serious errors into predictions and measurements.
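One standard statistical remedy is post-stratification: reweight the sample so that each demographic group counts in proportion to its share of the offline population. A toy sketch, with every number invented purely for illustration:

```python
# Toy post-stratification sketch. Every number here is invented for
# illustration; a real study would use census and platform statistics.

sample_share = {"urban": 0.80, "rural": 0.20}      # share of users in the sample
population_share = {"urban": 0.55, "rural": 0.45}  # share in the offline population

# Observed support for some opinion within each group of the sample:
observed_support = {"urban": 0.62, "rural": 0.40}

# The naive estimate inherits the platform's skew; the reweighted
# estimate counts each group at its population share instead.
naive = sum(sample_share[g] * observed_support[g] for g in sample_share)
reweighted = sum(population_share[g] * observed_support[g] for g in sample_share)
print(f"naive: {naive:.3f}, reweighted: {reweighted:.3f}")
```

Running this prints a naive estimate of 0.576 against a reweighted estimate of 0.521 – a five-point gap produced entirely by the platform’s demographic skew.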

Many of the “individuals” that populate social media platforms are actually accounts managed by public relations companies (think Justin Bieber or Nike) or not even humans at all but automated robots. Because these accounts aren’t portraying anything that even approximates normal human behavior, studies need to remove such accounts before making predictions. However, finding robot accounts can be quite hard.
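There is no definitive detector, but even crude heuristics can screen out the most obvious automated and brand-managed accounts before analysis. A sketch with invented thresholds – these cutoffs are illustrative guesses, not validated values:

```python
# Crude heuristic screen for obviously non-personal accounts.
# The thresholds are illustrative guesses, not validated cutoffs;
# serious bot detection uses far richer features and trained models.
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float
    followers: int
    following: int
    default_profile_image: bool

def looks_automated(a: Account) -> bool:
    if a.tweets_per_day > 100:        # inhuman posting rate
        return True
    if a.default_profile_image and a.following > 5_000:
        return True                    # classic mass-follow spam pattern
    if a.followers > 1_000_000:        # celebrity or brand account
        return True
    return False

accounts = [Account(300, 50, 8_000, True), Account(4, 210, 180, False)]
kept = [a for a in accounts if not looks_automated(a)]
print(f"{len(kept)} of {len(accounts)} accounts kept for analysis")
```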

Another big issue is how the data to be studied are collected. Academic researchers need free – or at least very cheap – access to social media data to perform their studies. Few social media outlets provide this; Twitter is the notable exception. And because social media studies are often based on sampled data (researchers get roughly 1 percent of tweets from Twitter’s free interface), what’s available to researchers may not be a representative sample of the platform’s overall activity.
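A small simulation makes the danger concrete. Suppose the free sample over-represents tweets on your topic (all numbers below are invented): a truly random 1 percent sample tracks the true value, while the biased one drifts far from it:

```python
# Simulation: estimate the share of on-topic tweets from a 1% sample.
# The population mix and the bias are invented numbers, used only to
# show how a non-random sample distorts an estimate.
import random

random.seed(42)

# Synthetic "firehose" of 100,000 tweets; 30% are on-topic (coded 1).
firehose = [1] * 30_000 + [0] * 70_000
random.shuffle(firehose)

# A truly random 1% sample is unbiased on average.
random_sample = random.sample(firehose, 1_000)

# A biased 1% sample: on-topic tweets are twice as likely to be kept.
biased_sample = [t for t in firehose
                 if random.random() < (0.02 if t == 1 else 0.01)]

print(f"true share:         {sum(firehose) / len(firehose):.3f}")
print(f"random 1% sample:   {sum(random_sample) / len(random_sample):.3f}")
print(f"biased 1% sample:   {sum(biased_sample) / len(biased_sample):.3f}")
```

Here the biased sample suggests roughly 46 percent of tweets are on-topic when the true figure is 30 percent – and nothing in the sample itself reveals the distortion.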

How to do it better

In order to realize the immense potential of social media-based studies of human populations, research must tackle these kinds of issues head-on. In our recent paper in Science on caveats for social media researchers, we discuss the need to control for bias in all the ways it appears – through platform-specific population makeup, data collection and user sampling. This will require improvements both in how data are collected and in how they are processed: for example, better methods for identifying non-human accounts on social media are needed.

Ultimately, researchers must be more aware of what is being analyzed when they work with social media data. What data are actually being collected? What systems are actually being studied? What social processes are actually being observed? Through greater awareness of and attention to these questions, the research community will be better able to realize the great promise of social media-based studies.

***

Jürgen Pfeffer receives funding from the NSF and DOD. Derek Ruths receives funding from SSHRC, NSERC, NSF and Public Safety Canada; he consults for Facebook.


Jürgen Pfeffer is an assistant research professor of computation, organizations and society at the School of Computer Science at Carnegie Mellon University. Derek Ruths is an assistant professor of computer science at McGill University.
