Deeper Learning
Catching harmful content in the age of AI
In the last 10 years, Big Tech has become really good at some things: language, prediction, personalization, archiving, text parsing, and data crunching. But it's still surprisingly bad at catching, labeling, and removing harmful content. One simply needs to recall the spread of conspiracy theories about elections and vaccines in the United States over the past two years to understand the real-world damage this causes. The ease of using generative AI could turbocharge the creation of more harmful online content. People are already using AI language models to create fake news websites.
But could AI help with content moderation? The newest large language models are much better at interpreting text than previous AI systems. In theory, they could be used to bolster automated content moderation. Read more from Tate Ryan-Mosley in her weekly newsletter, The Technocrat.
Bits and Bytes
Scientists used AI to find a drug that could fight drug-resistant infections
Researchers at MIT and McMaster University developed an AI algorithm that allowed them to find a new antibiotic that kills a type of bacteria responsible for many drug-resistant infections that are common in hospitals. This is an exciting development that shows how AI can accelerate and support scientific discovery. (Ztoog)
Sam Altman warns that OpenAI could quit Europe over AI rules
At an event in London last week, the CEO said OpenAI could "cease operating" in the EU if it cannot comply with the upcoming AI Act. Altman said his company found much to criticize in how the AI Act was worded, and that there were "technical limits to what's possible." This is likely an empty threat. I've heard Big Tech say this many times before about one rule or another. Most of the time, the risk of losing out on revenue in the world's second-largest trading bloc is too big, and they figure something out. The obvious caveat here is that many companies have chosen not to operate, or to have only a restrained presence, in China. But that's also a very different situation. (Time)
Predators are already exploiting AI tools to generate child sexual abuse material
The National Center for Missing and Exploited Children has warned that predators are using generative AI systems to create and share fake child sexual abuse material. With powerful generative models being rolled out with safeguards that are insufficient and easy to hack, it was only a matter of time before we saw cases like this. (Bloomberg)
Tech layoffs have ravaged AI ethics teams
This is a nice overview of the drastic cuts Meta, Amazon, Alphabet, and Twitter have all made to their teams focused on internet trust and safety as well as AI ethics. Meta, for example, ended a fact-checking project that had taken half a year to build. While companies are racing to roll out powerful AI models in their products, executives like to boast that their tech development is safe and ethical. But it's clear that Big Tech views teams dedicated to these issues as expensive and expendable. (CNBC)