Last week, OpenAI CEO Sam Altman appeared before a US Senate committee to discuss the dangers and potential of AI language models. Altman, along with many senators, called for international standards for artificial intelligence. He also urged the US to regulate the technology and set up a new agency, much like the Food and Drug Administration, to oversee AI.
For an AI policy nerd like myself, the Senate hearing was both encouraging and frustrating. Encouraging because the conversation seems to have moved past promoting wishy-washy self-regulation and on to rules that could actually hold companies accountable. Frustrating because the debate seems to have forgotten the past five-plus years of AI policy. I just published a story looking at all the existing international efforts to regulate AI technology. You can read it here.
I’m not the only one who feels this way.
“To suggest that Congress starts from zero just plays into the industry’s favorite narrative, which is that Congress is so far behind and doesn’t understand technology—how could they ever regulate us?” says Anna Lenhart, a policy fellow at the Institute for Data Democracy and Policy at George Washington University, and a former Hill staffer.
In fact, politicians in the last Congress, which ran from January 2021 to January 2023, introduced a ton of legislation around AI. Lenhart put together this neat list of all the AI regulations proposed during that time. They cover everything from risk assessments to transparency to data protection. None of them made it to the president’s desk, but given that buzzy (or, to many, scary) new generative AI tools have captured Washington’s attention, Lenhart expects some of them to be revamped and make a reappearance in one form or another.
Here are a few to keep an eye on.
Algorithmic Accountability Act
This bill was introduced by Democrats in the US House and Senate in 2022, pre-ChatGPT, to tackle the tangible harms of automated decision-making systems, such as ones that denied people pain medication or rejected their mortgage applications.