Microsoft Calls for Regulations Against Deepfake AI to Combat Fraud and Abuse
Key Points:
- Microsoft urges federal law against deepfake fraud
- Brad Smith highlights risks to elections, seniors and minors
- FCC has already banned automated calls with AI voices
- Need to clearly label AI content for greater transparency
Washington, D.C. - Microsoft has urged the U.S. Congress to introduce tough regulations to counter the misuse of AI-generated deepfakes. Brad Smith, the company’s vice chair and president, stressed the need for urgent action to protect elections from manipulation, seniors from fraud, and children from abuse.
In a recent blog post, Smith said that despite the efforts of the technology industry and nonprofit organizations, current laws must evolve to effectively combat deepfake-related fraud. "One of the most important steps the United States can take is to pass a comprehensive deepfake fraud law," Smith said, stressing the importance of providing a clear legal framework for law enforcement to prosecute AI-related crimes.
Microsoft’s proposal includes the adoption of a "deepfake fraud statute" that would give law enforcement adequate legal tools to address scams and manipulation created with AI. In addition, Smith asked lawmakers to update federal and state laws on child sexual exploitation and non-consensual intimate images so that they cover AI-generated content.
Recently, the Senate passed a bill allowing victims of sexually explicit deepfakes to sue their creators for damages, a move that followed incidents in which students fabricated explicit images of classmates. Platforms such as X (formerly Twitter) have also been flooded with AI-generated fake images, including explicit depictions of celebrities such as Taylor Swift.
Microsoft itself has implemented stricter security controls in its AI products after a loophole was discovered in its Designer AI image creator that allowed the creation of explicit images of celebrities. Smith stressed that the private sector has a responsibility to innovate and implement safeguards to prevent the misuse of AI technology.
The Federal Communications Commission (FCC) has already banned the use of automated calls with AI-generated voices, but generative AI tools make it increasingly easy to create fake audio, images, and video, an ongoing challenge, especially in the run-up to the 2024 presidential election. Recently, a deepfake video imitating Vice President Kamala Harris was shared on X by Elon Musk, further highlighting the need for regulations.
Smith proposed that Congress require AI system providers to implement state-of-the-art provenance tools to label synthetic content. "This is essential to build trust in the information ecosystem and help the public better understand whether content is AI-generated or manipulated," he concluded.
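Smith’s proposal is about policy rather than any specific technical standard, but as a rough illustration of what "labeling synthetic content" can mean in practice, the sketch below embeds a machine-readable provenance tag in an image’s metadata. It is a toy example: the `ai-provenance` key and the model name are made-up placeholders, and real provenance systems, such as the C2PA Content Credentials standard that Microsoft co-founded, use cryptographically signed manifests rather than a plain text tag.

```python
# Toy sketch of provenance labeling, NOT a real standard:
# the "ai-provenance" key and model name below are hypothetical.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for an image produced by a generative model.
img = Image.new("RGB", (256, 256), "gray")

# Attach a machine-readable provenance label as a PNG text chunk.
label = {
    "generator": "example-image-model",  # hypothetical model name
    "synthetic": True,
    "created": "2024-07-30T00:00:00Z",
}
meta = PngInfo()
meta.add_text("ai-provenance", json.dumps(label))
img.save("labeled.png", pnginfo=meta)

# A downstream consumer (platform, browser, fact-checker) reads it back.
with Image.open("labeled.png") as reloaded:
    print(reloaded.text.get("ai-provenance"))
```

The key limitation, and the reason production standards rely on signed manifests, is that an unsigned metadata tag like this can be stripped or forged trivially; "state-of-the-art" provenance implies tamper-evident labels.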
Microsoft’s call for stricter regulations underscores the urgency of addressing the risks associated with deepfakes, thereby protecting individuals and society from the potential threats of this rapidly evolving technology.