May 8, 2024
AI may replace humans when it comes to collecting data for social science research: paper

A team of researchers from two Canadian and two American universities says artificial intelligence could replace humans when it comes to collecting data for social science research.


Researchers from the University of Waterloo, the University of Toronto, Yale University and the University of Pennsylvania published an article in the journal Science on June 15, 2023, about how AI, specifically large language models (LLMs), could affect their work.


“AI models can represent a vast array of human experiences and perspectives, possibly giving them a higher degree of freedom to generate diverse responses than conventional human participant methods, which can help to reduce generalizability concerns in research,” Igor Grossmann, professor of psychology at Waterloo and a co-author of the article, said in a news release.


Philip Tetlock, a psychology professor at UPenn and article co-author, goes so far as to say that LLMs will “revolutionize human-based forecasting” in just three years.


In their article, the authors pose the question: “How can social science research practices be adapted, even reinvented, to harness the power of foundational AI? And how can this be done while ensuring transparent and replicable research?”


The authors say the social sciences have traditionally relied on methods such as questionnaires and observational studies.


But LLMs can pore over vast amounts of text data and generate human-like responses, which the authors say presents a “novel” opportunity for researchers to test theories about human behaviour faster and on a much larger scale.


Scientists could use LLMs to test theories in a simulated environment before applying them in the real world, the article says, or gather differing perspectives on a complex policy issue and generate potential solutions.
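
To make that idea concrete, here is a minimal sketch of how a researcher might prompt an LLM to act as simulated participants with differing perspectives on a policy question. None of this comes from the article: it assumes the OpenAI Python client and the gpt-4o model, and the personas, question and prompt wording are all hypothetical.

```python
# Minimal sketch: querying an LLM for simulated-participant perspectives.
# Assumptions (not from the article): the OpenAI Python client, the
# "gpt-4o" model, and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical personas standing in for survey participants.
personas = [
    "a retired farmer in rural Saskatchewan",
    "a software engineer in downtown Toronto",
    "a public school teacher in Philadelphia",
]

# Hypothetical policy question a researcher might pose.
question = "Should your city adopt congestion pricing? Answer in two sentences."

for persona in personas:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            # The system prompt casts the model as one simulated participant.
            {"role": "system", "content": f"Answer as {persona} would."},
            {"role": "user", "content": question},
        ],
        temperature=1.0,  # some randomness, to encourage varied responses
    )
    print(f"{persona}: {response.choices[0].message.content}")
```

In a real study, a researcher would repeat such queries across many runs and personas, then compare the simulated responses against human baselines before drawing any conclusions.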


“It won’t make sense for humans unassisted by AIs to venture probabilistic judgments in serious policy debates. I put a 90 per cent chance on that,” Tetlock said. “Of course, how humans react to all of that is another matter.”


One issue the authors identify, however, is that LLMs are often trained to exclude sociocultural biases, which raises the question of whether the models accurately reflect the populations they are meant to study.


Dawn Parker, a University of Waterloo professor and article co-author, suggests that LLMs be made open source so that their algorithms, and even their training data, can be checked, tested or modified.


“Only by maintaining transparency and replicability can we ensure that AI-assisted social science research truly contributes to our understanding of human experience,” Parker said.
