
AI hallucinations pose a ‘serious risk’ to scientific research, warns Oxford study
Large Language Models (LLMs) are increasingly being used to power chatbots, and studies have shown that they frequently generate false information. Researchers at the Oxford Internet Institute warn that these AI hallucinations pose a direct threat to science and scientific truth. According to their paper, published in Nature Human Behaviour, LLMs are designed to produce helpful and convincing responses, with no guarantee that those responses are accurate or aligned with fact.
These models are often treated as sources of knowledge and used to answer questions or prompts, but the data they are trained on is not always accurate. The online sources they draw on can contain false statements, opinions, and other inaccuracies. The researchers caution that people tend to anthropomorphize LLMs and trust them as a human-like source of information, regardless of whether their output is accurate.
In science and education, the accuracy of information is paramount. To keep outputs factually correct and grounded in the provided input, the researchers urge the scientific community to use LLMs as “zero-shot translators”: rather than relying on the model itself as a source of knowledge, supply it with the appropriate data and ask it to transform that data into a conclusion or code, as in the sketch below.
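As a rough illustration of this “zero-shot translator” pattern, the sketch below contrasts asking a model an open-ended factual question with asking it only to transform data the caller supplies. The `query_llm` helper is a hypothetical stand-in for whatever LLM API a lab actually uses, and the prompt wording and example dataset are illustrative assumptions, not part of the Oxford paper.

```python
from textwrap import dedent

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call (e.g. an HTTP request
    to whichever model provider the lab uses). Returns the model's reply."""
    raise NotImplementedError("wire this up to your own LLM client")

# Risky pattern: treating the model as a knowledge source.
# The answer depends entirely on what the model "remembers", which may be wrong.
risky_prompt = "What was the mean reaction time reported in the 2021 study?"

# Zero-shot translator pattern: the caller supplies the data, and the model is
# asked only to transform it (here, into a one-sentence conclusion).
measurements_ms = [312, 298, 305, 321, 290]  # illustrative data from our own experiment
grounded_prompt = dedent(f"""
    Using ONLY the data below, write a one-sentence summary of the result.
    Do not add any facts that are not present in the data.

    Reaction times (ms): {measurements_ms}
    Mean (ms): {sum(measurements_ms) / len(measurements_ms):.1f}
""")

if __name__ == "__main__":
    print(grounded_prompt)               # the prompt itself carries the facts
    # print(query_llm(grounded_prompt))  # uncomment once query_llm is implemented
```

The design point is that every fact the model is allowed to state already appears in the prompt, so its output can be checked directly against the supplied input rather than against whatever the model may have absorbed from the open web.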
While the Oxford professors believe LLMs will undoubtedly assist with scientific workflows, they emphasize that the models must be used responsibly, with clear expectations about what they can contribute to scientific research. Used this way, their potential can be harnessed while the risks associated with their use are kept to a minimum.
In conclusion, LLMs have become popular because they generate helpful responses quickly and efficiently, but that convenience comes with a real risk of false information. Researchers and scientists should treat them as tools for transforming trusted input rather than as sources of truth, and keep clear expectations about what they can, and cannot, contribute to scientific work.