At the behest of the Biden administration, Apple will be part of a new consortium formed by the US government to support safe development of artificial intelligence.
On Thursday, US Commerce Secretary Gina Raimondo rolled out the new initiative, called the "US AI Safety Institute Consortium" (AISIC). It was spawned by an executive order issued in October, mandating that the US lead the way in safe AI development.
"The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence," Raimondo said in a release on Thursday. "President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That's precisely what the U.S. AI Safety Institute Consortium is set up to help us do."
"Through President Biden's landmark Executive Order, we will ensure America is at the front of the pack - and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America's competitive edge and develop AI responsibly," Raimondo added.
Apple has signed on to the effort, after avoiding involvement in similar initiatives. It joins more than 200 other companies and organizations working in the field. The list spans big tech, defense contractors, educational institutions, and energy companies.
Some key names include:
- Cisco Systems
- Hewlett Packard
- Northrop Grumman
The executive order that prompted the formation of the AISIC lays out six key principles.
- Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government
- Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy
- Protect against the risks of using AI to engineer dangerous biological materials
- Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content
- Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software
- Order the development of a National Security Memorandum that directs further actions on AI and security
The AISIC appears to be the first step toward fulfilling most of the order's requirements. It's not yet clear whether there are different tiers of membership, and what will be required of participants has not been disclosed.
The October executive order preceded a multinational effort in November to lay out safe frameworks for AI development. It's not yet clear how the two efforts dovetail, or if they do at all.