Elon Musk has asked a court to settle the question of whether GPT-4 is an artificial general intelligence (AGI), as part of a lawsuit against OpenAI. The development of AGI, capable of performing a wide range of tasks just like a human, is one of the major goals of the field, but experts say the idea of a judge deciding whether GPT-4 qualifies is “impractical”.
Musk was one of the founders of OpenAI in 2015, but he left the company in February 2018 over a dispute about the firm changing from a non-profit to a capped-profit model. Despite this, he continued to support OpenAI financially, with his legal complaint claiming he donated more than $44 million to the firm between 2016 and 2020.
Since the arrival of ChatGPT, OpenAI’s flagship product, in November 2022, and the firm’s partnership with Microsoft, Musk has warned that AI development is moving too quickly – a view only exacerbated by the release of GPT-4, the latest AI model to power ChatGPT. In July 2023, he set up xAI, a competitor to OpenAI.
Now, in a lawsuit filed in a California court on 1 March, Musk, via his lawyer, has asked for a “judicial determination that GPT-4 constitutes Artificial General Intelligence and is thereby outside the scope of OpenAI’s license to Microsoft”. This is because OpenAI has pledged to license only “pre-AGI” technology. Musk also has a number of other asks, including financial compensation for his role in helping to set up OpenAI.
However, the likelihood of Musk succeeding is small – not just because of the merits of the case, but because of the difficulty of deciding when AGI has been achieved. “I think it’s impractical in the general sense, since AGI has no accepted definition and is something of a made-up term,” says Mike Cook at King’s College London.
“Whether OpenAI has achieved AGI is at best hotly debated among those who decide on scientific facts,” says Eerke Boiten at De Montfort University in Leicester, UK. “It seems unusual to me for a court to be able to establish a scientific truth.”
Such a ruling wouldn’t be legally impossible, however. “We’ve seen all sorts of ridiculous definitions come out of court decisions in the US. Would it convince anyone apart from the most out-there AGI adherents? Not at all,” says Catherine Flick at Staffordshire University, UK.
What Musk hopes to achieve with the lawsuit is unclear – New Scientist has contacted both him and OpenAI for comment, but has yet to receive a response from either.
Regardless of the reasoning behind it, the lawsuit puts OpenAI in an unenviable position. CEO Sam Altman has made it clear that the firm intends to build an AGI and has issued dire warnings that its powerful technology needs to be regulated.
“It’s in OpenAI’s interests to constantly imply their tools are getting better and closer to doing this, because it keeps attention on them, headlines flowing and so on,” says Cook. But now the firm may need to argue the opposite.
Even if the court relied on expert opinion, any judge would, at best, struggle to rule in Musk’s favour – or to unpick the differing viewpoints on a hotly disputed topic. “Most of the scientific community currently would say AGI has not been achieved,” says Boiten – “if the concept of AGI is even considered meaningful or precise enough.”