International researchers have declared that generative artificial intelligence models are “inherently controllable, predictable and safe”, and as a result pose no existential threat to humanity.
A research team from the University of Bath and the Technical University of Darmstadt in Germany said large language models like ChatGPT could not learn independently or acquire new skills.
These AI models were likely to generate more sophisticated language and become better at following detailed prompts, but they were “highly unlikely” to gain complex reasoning skills.
“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus,” study co-author Harish Tayyar Madabushi said.
“While it’s important to address the existing potential for the misuse of AI, such as the creation of fake news and the heightened risk of fraud, it would be premature to enact regulations based on perceived existential threats.
“Importantly, what this means for end users is that relying on large language models (LLMs) to interpret and perform complex tasks which require complex reasoning without explicit instruction is likely to be a mistake.
“Instead, users are likely to benefit from explicitly specifying what they require models to do and providing examples where possible for all but the simplest of tasks.”
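To illustrate the pattern Tayyar Madabushi describes, the sketch below assembles an explicit instruction and a few worked examples into a single prompt. This is a minimal illustration assuming a generic text-completion interface; the `build_prompt` helper and the `complete` client mentioned in the comments are hypothetical, not part of the study.

```python
# Sketch of the advice above: rather than asking an LLM to infer a task
# implicitly, spell out the instruction and supply worked examples
# (few-shot prompting). The `complete` function referenced below is a
# hypothetical stand-in for whatever LLM client is in use.

def build_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble an explicit instruction plus worked examples into one prompt."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_prompt(
    instruction="Classify the sentiment of each review as positive or negative.",
    examples=[
        ("The battery lasts for days.", "positive"),
        ("It broke within a week.", "negative"),
    ],
    query="Setup was painless and the screen is gorgeous.",
)
print(prompt)  # pass this to an LLM client, e.g. complete(prompt)
```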
The study, published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, found that LLMs had a superficial ability to follow instructions and a strong command of language, but no capacity to master new skills without explicit instruction.
The research team, led by Professor Iryna Gurevych of the Technical University of Darmstadt, ran experiments testing the ability of LLMs to complete tasks the models had never encountered before.
Through thousands of experiments, the team demonstrated that the models’ capabilities and limitations were defined by a combination of their ability to follow instructions, their memory and their linguistic proficiency.
“Our results do not mean that AI is not a threat at all,” Professor Gurevych said.
“Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all.”