Hundreds of AI experts sign a letter to regulate Deepfakes

Published February 22nd, 2024 - 07:48 GMT
Biometric face-scanning design formed of lines, triangles, and particles (Shutterstock)
Highlights
AI-generated fabricated media, or deepfakes, are a growing concern for industry experts, hundreds of whom have signed an open letter calling for laws against the harmful use of such artificially created content.

ALBAWABA - Artificial intelligence experts and business leaders, including Yoshua Bengio, a Canadian computer scientist and one of the field's pioneers, have signed an open letter advocating for greater oversight of deepfake production, citing potential threats to society, Reuters reported.

Deepfakes are artificially generated synthetic media that convincingly swap one person's likeness for another's. They rely on powerful machine learning and artificial intelligence methods to create and manipulate video and audio that can easily trick viewers into believing the content is real, prompting experts to worry about their misuse and potential for harm.

The letter, titled “Disrupting the Deepfake Supply Chain” and authored by Andrew Critch, an AI researcher at UC Berkeley, states that “deepfakes often involve sexual imagery, fraud, or political disinformation. Since AI is progressing rapidly and making deepfakes much easier to create, safeguards are needed.”

Signed by more than 700 backers, the letter cites several reasons behind the open call. One is nonconsensual pornography, a fast-growing and troubling industry that often targets minors: according to the paper, 98 percent of deepfake content online is AI-generated pornography, and 99 percent of those targeted are women, an outcome the paper describes as the product of a tech trend that facilitates gender-based violence.

Fraud and impersonation are another concern raised in the paper, with reports of deepfake fraud increasing 3,000 percent in 2023 alone; scammers need only a single photo of a person to create a fabricated video of them.

Deepfakes' effects on political elections also worry the experts, as the widespread creation of fake political content could distort public opinion and thus harm the democratic process. The letter urgently calls for change, noting that the number of deepfakes in circulation increased by 550 percent between 2019 and 2023.

One proposed solution for preventing misinformation and easing the detection of AI-generated content is digital seals. The paper writes that cameras could produce an authenticity signature similar to website certifications or login credentials, which, if widely implemented alongside open-source authentication tools, would better warn the public. The paper states that “device manufacturers, software developers, and media companies should work together and popularize these or similar content authentication methods.”

Proposed laws to deter harmful actors include the full criminalization of AI-generated sexual content involving minors, even fictional ones, as well as legal penalties for those who knowingly create or facilitate the spread of harmful deepfakes. The letter adds that such legislation, which would not have to be disproportionately harsh, could foster socially conscious companies.