Well, that clearly didn’t happen.
I sat down with MIT professor Max Tegmark, the founder and president of FLI, to take stock of what has happened since. Here are highlights of our conversation.
On shifting the Overton window on AI risk: Tegmark told me that in conversations with AI researchers and tech CEOs, it had become clear that there was an enormous amount of anxiety about the existential risk AI poses, but nobody felt they could talk about it openly “for fear of being ridiculed as Luddite scaremongers.” “The key goal of the letter was to mainstream the conversation, to move the Overton window so that people felt safe expressing these concerns,” he says. “Six months later, it’s clear that part was a success.”
But that’s about it: “What’s not great is that all the companies are still going full steam ahead and we still have no meaningful regulation in America. It looks like US policymakers, for all their talk, aren’t going to pass any laws this year that meaningfully rein in the most dangerous stuff.”
Why governments should step in: Tegmark is lobbying for an FDA-style agency that would enforce rules around AI, and for governments to force tech companies to pause AI development. “It’s also clear that [AI leaders like Sam Altman, Demis Hassabis, and Dario Amodei] are very concerned themselves. But they all know they can’t pause alone,” Tegmark says. Pausing alone would be “a disaster for their company, right?” he adds. “They just get outcompeted, and then that CEO will be replaced with someone who doesn’t want to pause. The only way the pause comes about is if the governments of the world step in and put in place safety standards that force everyone to pause.”
So what about Elon … ? Musk signed the letter calling for a pause, only to set up a new AI company called X.AI to build AI systems that would “understand the true nature of the universe.” (Musk is an advisor to the FLI.) “Obviously, he wants a pause just like a lot of other AI leaders. But as long as there isn’t one, he feels he has to also stay in the game.”
Why he thinks tech CEOs have the good of humanity in their hearts: “What makes me think that they really want a good future with AI, not a bad one? I’ve known them for many years. I talk with them regularly. And I can tell even in private conversations—I can sense it.”
His response to critics who say focusing on existential risk distracts from current harms: “It’s crucial that those who care a lot about current problems and those who care about imminent upcoming harms work together rather than infighting. I have zero criticism of people who focus on current harms. I think it’s great that they’re doing it. I care about those things very much. If people engage in this kind of infighting, it’s just helping Big Tech divide and conquer all those who want to really rein in Big Tech.”