AI could replace humans in social science research
In an article published yesterday in the prestigious journal Science, leading social science researchers examine how AI, and large language models (LLMs) in particular, could change the nature of their work.
What the researchers say: “What we wanted to explore in this article is how social science research practices can be adapted, even reinvented, to harness the power of AI,” said the lead author.
He and colleagues note that large language models trained on vast amounts of text data are increasingly capable of simulating human-like responses and behaviors. This offers novel opportunities for testing theories and hypotheses about human behavior at great scale and speed.
Traditionally, social sciences rely on a range of methods, including questionnaires, behavioral tests, observational studies, and experiments. A common goal in social science research is to obtain a generalized representation of characteristics of individuals, groups, cultures, and their dynamics. With the advent of advanced AI systems, the landscape of data collection in social sciences may shift.
“AI models can represent a vast array of human experiences and perspectives, possibly giving them a higher degree of freedom to generate diverse responses than conventional human participant methods, which can help to reduce generalizability concerns in research,” the researchers explained.
“LLMs might supplant human participants for data collection. In fact, LLMs have already demonstrated their ability to generate realistic survey responses concerning consumer behavior. Large language models will revolutionize human-based forecasting in the next 3 years. It won’t make sense for humans unassisted by AIs to venture probabilistic judgments in serious policy debates. Of course, how humans react to all of that is another matter.”
While opinions on the feasibility of this application of advanced AI systems vary, studies using simulated participants could generate novel hypotheses that could then be confirmed in human populations.
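To make the idea of simulated participants concrete, here is a minimal sketch of what such a pipeline might look like. Everything here is an assumption for illustration: `simulated_response` is a hypothetical stand-in for a real LLM call, and the personas and answer options are invented.

```python
import random
from collections import Counter

# Hypothetical stand-in for an LLM call. A real study would send the persona
# and question to a language model and parse its free-text answer; here we
# just draw a deterministic pseudo-random answer per persona.
def simulated_response(persona: str, question: str) -> str:
    rng = random.Random(f"{persona}|{question}")
    return rng.choice(["agree", "neutral", "disagree"])

def run_simulated_survey(personas, question):
    """Collect one response per simulated participant and tally the results."""
    answers = [simulated_response(p, question) for p in personas]
    return Counter(answers)

# 100 simulated participants with (hypothetical) varied background profiles.
personas = [f"participant with background profile {i}" for i in range(100)]
tally = run_simulated_survey(personas, "Do you trust AI-generated survey data?")
print(tally)
```

The point of such a sketch is hypothesis generation, as the article notes: a skewed tally across personas might suggest a pattern worth confirming with real human participants, not a finding in itself.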
But the researchers warn of possible pitfalls in this approach, including the fact that LLMs are often trained to exclude the socio-cultural biases that exist among real-life humans. This means that sociologists using AI in this way could not study those biases.
The team notes that researchers will need to establish guidelines for the governance of LLMs in research.
“Pragmatic concerns with data quality, fairness, and equity of access to the powerful AI systems will be substantial,” they said. “So, we must ensure that social science LLMs, like all scientific models, are open-source, meaning that their algorithms and, ideally, data are available to all to scrutinize, test, and modify. Only by maintaining transparency and replicability can we ensure that AI-assisted social science research truly contributes to our understanding of human experience.”
So, what? The question of how bias interacts with AI is an interesting one. I personally do not see how an AI system can be free of bias, though I agree with the researchers that an AI's biases may not reflect those of a particular community under study.
I suspect that AI bias will be one of the issues that regulators will have to grapple with.
Another point the researchers make concerns how humans will react to working with AI. It will mean, obviously, that we will need far fewer social scientists. Those who remain will suffer greater rates of burnout: a number of studies (see past issues of TR) have found that the dehumanization of AI-assisted work leads to both burnout and depression.
More from this issue of TR
Media coverage of climate change research does not inspire action
“Research on human behavior shows that fear can lead to behavioral change in individuals and groups, but only if the problem presented is accompanied by solutions.”
You might be interested in
Workplace AI revolution isn't happening yet
The UK, and other nations, risk a growing divide between organizations that have invested in new, AI-enabled digital technologies and those that haven't.
Artificial intelligence favors white men under 40
The language models we use in daily life, when searching the web, using machine translation, chatting with bots, or commanding Siri, speak the language of some groups better than others. Even a slight bias can have serious consequences in contexts where precision is key.
When humanlike chatbots miss the mark in customer service interactions
Chatbots are increasingly replacing human customer-service agents on companies' websites, social media pages, and messaging services. Designed to mimic humans, these bots often have human names, humanlike appearances, and the capability to converse like humans.
Join our tribe
Subscribe to Dr. Bob Murray’s Today’s Research, a free weekly roundup of the latest research in a wide range of scientific disciplines. Explore leadership, strategy, culture, business and social trends, and executive health.