A consortium formed by OpenAI, Google, Microsoft, and artificial intelligence safety firm Anthropic says the companies will establish best practices for the AI industry, although so far they will be doing so without Apple.
Just as Apple was noticeably absent from a separate AI safety initiative announced by the White House, so its failure to join this new consortium raises questions. None of the companies involved have commented on whether Apple was even invited to either initiative, but both projects aim to be industry-wide.
"Today, Anthropic, Google, Microsoft and OpenAI are announcing the formation of the Frontier Model Forum," announced Google in a blog post, "a new industry body focused on ensuring safe and responsible development of frontier AI models."
"The Frontier Model Forum will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem," it continued, "such as through advancing technical evaluations and benchmarks, and developing a public library of solutions to support industry best practices and standards."
The consortium defines frontier AI models as "large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks."
"Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control," Brad Smith, vice-chair and president of Microsoft, said in the announcement.. "This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity."
Overall, the new project aims to establish best practices for controlling AI. But AI expert Emily Bender of the University of Washington has told Ars Technica that the new consortium is really intended as "an attempt to avoid regulation; to assert the ability to self-regulate."
Bender is "very skeptical" of AI firms self-regulating, and says that regulation should come from governments "representing the people to constrain what these corporations can do."
Back in 2016, Google and Microsoft were also founding members of a similar organization, the Partnership on AI. Notably, Apple joined as well, and remains a member today.
Comments
Meaningless. All these companies will pursue AI and the untold riches it will bring with little or no accountability.
Apple brands itself as responsible and privacy-forward. Perhaps they don’t feel the need to participate in fig-leaf events such as this, believing they’ll do the right thing because that’s what they do. Also, I think Apple may end up being a client of AI rather than a purveyor, much like they get screens from Samsung. So it’s up to the vendors to assure the safety of their wares.
From what I’ve read about this organization, it seems to be mostly “regulating” the specifics of how this software will work, rather than the more theoretical questions of how to determine when it becomes dangerous and how to prevent that. I can see Apple not being interested in that view, at least at this time.
Or maybe Apple knows how advanced their technology is. Why would you sit down at the poker table when you’ve invented a new roulette machine?