At an event following the UK's AI Safety Summit, entrepreneur Elon Musk spoke with UK prime minister Rishi Sunak about how future AIs will most likely be "a force for good" and could someday enable a "future of abundance".
That utopian narrative about a future superhuman AI – one that Musk claims would remove the need for human work and even provide meaningful companionship – shaped much of the conversation between the pair. But their focus on an "age of abundance" glossed over the current negative impacts and controversies surrounding the tech industry's race to develop large AI models – and didn't get into specifics on how governments should regulate AI and address real-world risks.
“I think we are seeing the most disruptive force in history here, where we will have for the first time something that is smarter than the smartest human,” said Musk. “There will come a point when no job is needed – you can have a job if you want for personal satisfaction, but the AI will be able to do everything.”
Theoretical versus actual AI risks
Musk also restated his longstanding position of frequently warning about the existential risks that superhuman AI could pose to humanity in the future. In March 2023, he was among the signatories of an open letter that called for a six-month pause in training AI systems more powerful than OpenAI's GPT-4 large language model.
During his conversation with Sunak, he envisioned governments focusing their regulatory powers on powerful AIs that could pose a public safety risk, and once again raised the prospect of "digital superintelligence". Similarly, Sunak referred to government efforts to implement safety testing of the most powerful AI models being deployed by companies.
“My job in government is to say, ‘hang on, there is a potential risk here, not a definite risk but a potential risk of something that could be bad,’” said Sunak. “My job is to protect the country and we can only do that if we develop that capability in our safety institute and then go in and make sure we can test the models before they are released.”
That grand narrative about a superhuman AI – often referred to as artificial general intelligence, or AGI – that "will either deliver us to paradise or will destroy us" can often overshadow the actual negative impacts of current AI technologies, says Émile Torres at Case Western Reserve University in Ohio.
“All of this hype around existential threats associated with super intelligence ultimately just distracts from the many real-world harms that [AI] companies are already causing,” says Torres.
Torres described such harms as including the environmental impacts of building energy-hungry data centres to support AI training and deployment, tech companies' exploitation of workers in the Global South to perform gruelling and sometimes traumatising data-labelling tasks that support AI development, and companies training their AI models on the original work of artists and writers such as book authors without having asked permission or paid compensation.
Elon Musk’s record on AI development
Although Sunak described Musk as a "brilliant innovator and technologist" during their conversation, Musk's involvement in AI development efforts has been more that of a wealthy backer and businessperson.
Musk initially bankrolled OpenAI – the developer of AI models such as GPT-4 that power the popular AI chatbot ChatGPT – with $50 million when the organisation first started as a nonprofit in 2015. But Musk stepped down from OpenAI's board of directors and stopped contributing funding in 2018 after his bid to lead the organisation was rejected by OpenAI co-founder Sam Altman.
Since his departure, Musk has criticised OpenAI's subsequent for-profit pivot and multibillion-dollar partnership with Microsoft, though he has not been shy about saying that OpenAI would not exist without him.
In July 2023, Musk announced that he was launching his own new AI company called xAI, with a dozen initial team members who had previously worked at companies such as DeepMind, OpenAI, Google, Microsoft and Tesla. The xAI team appears to have Musk's approval to pursue ambitious and vague goals such as "to understand the true nature of the universe".