In a recent paper published in Nature Human Behaviour, researchers at the Oxford Internet Institute have warned about the alarming tendency of large language models (LLMs) used in chatbots to hallucinate. LLMs are designed to generate helpful, convincing responses, with no guarantee that those responses are accurate or grounded in fact.
The paper emphasizes that LLMs are trained on online sources, which can contain false statements, opinions, and other inaccurate information. Because they are designed as helpful, human-sounding agents, users are inclined to trust them as a human-like source of information. This can lead users to believe that responses are accurate even when they have no basis in fact or present a biased or partial version of the truth.
The researchers stress the importance of information accuracy in science and education and urge the scientific community to use LLMs as “zero-shot translators”: rather than relying on the model itself as a source of knowledge, users should provide it with the relevant data and ask it only to transform that input into a conclusion or code. This makes it far easier to verify that the output is factually correct and consistent with the provided input.
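To make the idea concrete, here is a minimal sketch of what that pattern could look like in practice. The call_llm helper and the prompt wording are illustrative placeholders standing in for whatever model interface a team actually uses; they are not taken from the paper.

```python
# Illustrative sketch of the "zero-shot translator" pattern described above.
# call_llm() is a hypothetical placeholder, not an API from the paper or
# from any particular library.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call (a local model or a hosted API)."""
    raise NotImplementedError("Wire this up to your own model interface.")


def translate_data(source_text: str, task: str) -> str:
    """Ask the model to transform only the supplied data, not to recall facts.

    Because the relevant data is included in the prompt, the output can be
    checked directly against that input rather than trusted on its own.
    """
    prompt = (
        "Using ONLY the data provided below, " + task + "\n"
        "Do not add information that is not present in the data.\n\n"
        "DATA:\n" + source_text
    )
    return call_llm(prompt)


# Example usage (contrast with asking the model an open-ended question and
# relying on whatever it happens to "know"):
# summary = translate_data(open("results_table.csv").read(),
#                          "summarise the main trend in one sentence.")
```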
While LLMs will undoubtedly assist with scientific workflows, it is crucial that the scientific community uses them responsibly and maintains clear expectations of how they can contribute. The researchers emphasize that LLMs should be used as tools rather than as replacements for human experts, so that the accuracy and reliability of scientific work is preserved.