Can AI Replace Scientists?

AI-based tools are already helping scientists with their work, but new research suggests that placing too much trust in AI may lead to more results but less understanding.

Researchers at Yale and Princeton Universities published a paper in ‘Nature’ that lays out the potential pitfalls of relying on AI in science, especially in light of its recent malfunctions, ethical concerns, and unpredictability.

According to Cybernews, scientists envision AI’s long-term role in academic work in several possible ways. Some see AI as an ‘Oracle’ capable of processing extensive literature, assessing source quality, and generating hypotheses. Others see AI as a data simulator that can, for example, support the study of phenomena for which data is scarce by generating additional data to augment the research.

In the social sciences, AI is seen as a potential research participant that can answer questionnaires, since GenAI tools can be trained to represent a wide range of human experiences and so provide a more accurate picture of behavior and social dynamics. Quantitative, or predictive, AI can uncover patterns in huge amounts of data that are predictive yet beyond the reach of human cognition. It could also take on tasks that previously demanded extensive human effort, such as annotating and interpreting text, images, and qualitative data.

Nevertheless, despite this glowing potential for innovation, the researchers warn that it may cause the scientific world to “produce more but understand less”: trusting AI tools to compensate for our cognitive limitations can narrow scientific focus so that certain methods and ideas dominate, limiting innovation and increasing the chance of errors.

Using AI to replace human participants in research could strip away contextual nuances that qualitative methods usually preserve. Furthermore, assembling the data to train such AI models involves human decisions, which could in turn imbue the algorithms with the values of their creators.

The researchers argue that scientific teams that are demographically and ethnically diverse are more effective problem-solvers; entrusting the entire process to AI eliminates that diversity and creates an illusion of objectivity.

That said, the researchers do not call for the complete abandonment of AI in research. “Scientists interested in using AI in their research and researchers who study AI must evaluate these risks now, while AI applications are still nascent because they will be much more difficult to address if AI tools become deeply embedded in the research pipeline,” they conclude.

This information was provided by Cybernews.