Apple commits to developing safe, secure, and trustworthy AI: here are the new initiatives
Highlights:
- Apple commits to developing safe, secure, and trustworthy AI.
- Imminent launch of Apple Intelligence in core products.
- Commitment to White House for safety and transparency standards.
- Department of Commerce’s role in analyzing open-source models.
Apple commits to developing safe, secure, and trustworthy AI: Discover the new initiatives and the role of AI in the future of iOS
Apple has signed the White House’s voluntary commitment to developing safe, secure, and trustworthy AI. The company will soon launch its generative AI offering, Apple Intelligence, integrating it into its core products, putting the technology in front of Apple’s 2 billion users.
Apple joins 15 other technology companies, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, that committed to the White House’s guidelines for developing generative AI in July 2023. At the time, Apple had not yet detailed its plans for integrating AI into iOS, but at its June WWDC the company made its intentions clear: it is going all-in on generative AI, starting with a partnership that will embed ChatGPT into the iPhone. As a frequent target of federal regulators, Apple wants to signal early that it is willing to adhere to the White House’s AI rules, possibly to curry favor before any future regulatory battles over AI arise.
However, how binding are Apple’s voluntary commitments to the White House? Not very, but it’s a start. The White House calls this the “first step” toward Apple and 15 other AI companies developing safe, secure, and trustworthy AI. The second step was President Biden’s AI executive order in October, and several bills are currently moving through federal and state legislatures to better regulate AI models.
Under the commitment, AI companies promise to red-team AI models (i.e., act as adversarial attackers to stress-test a model’s safety measures) before public release and to share that information with the public. The White House’s voluntary commitment also asks AI companies to keep unreleased AI model weights confidential. Apple and the other companies agree to work on model weights in secure environments, limiting access to as few employees as possible. Lastly, the companies agree to develop content labeling systems, such as watermarking, to help users distinguish what is AI-generated from what is not.
Separately, the Department of Commerce announced that it would soon release a report on the potential benefits, risks, and implications of open-source foundation models. Open-source AI is becoming an increasingly charged regulatory battlefield. Some camps want to limit how accessible the weights of powerful AI models should be in the name of safety, but doing so could significantly constrain the AI startup and research ecosystem. The White House’s stance here could have a significant impact on the broader AI industry.
The White House also noted that federal agencies have made significant progress on the tasks set out by the October executive order. To date, they have made more than 200 AI-related hires, granted more than 80 research teams access to computational resources, and released several frameworks for developing AI.
These technological and regulatory developments mark a crucial step toward the responsible and secure integration of AI into the products and services of Apple and other tech companies. The future of AI, with the active involvement of giants like Apple, promises to be built on security, transparency, and reliability.