Australia could soon force the Apple App Store to remove AI apps that let users access adult or violent content without age verification.
Though Apple has already complied with Australia's social media ban for teens by updating the App Store's age-assurance tools, the iPhone maker might soon need to take further action.
Specifically, app marketplaces will likely be required to block AI apps that have not implemented age-checking measures. Regulators in Australia are targeting artificial intelligence apps that let users under 18 access content involving pornography, extreme violence, self-harm, and eating disorders.
As Reuters notes, app storefronts that do not remove offending AI apps by March 9 may face fines of up to $35 million. A representative of Australia's eSafety Commissioner said that regulators would use the full range of their powers in the event of noncompliance.
This, among other things, includes "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services."
In other words, Apple will most likely be forced to remove AI applications without proper age verification in place, as a means of preventing minors from accessing violent and otherwise objectionable content.
Additionally, eSafety expressed concerns about the prolonged use of AI chatbots by children, noting that some ten-year-olds in Australia spend up to six hours a day talking to artificial intelligence software.
Australia's regulators claimed that "AI companies are leveraging emotional manipulation, anthropomorphism, and other advanced techniques to entice, entrance, and entrench young people into excessive chatbot usage."
Not all AI apps have complied with the regulators' demands
Despite eSafety's efforts, however, not all AI applications and services have complied with Australia's age-assurance requirements. The report explains that, out of 50 AI platforms surveyed, only nine of them had implemented the necessary age-verification measures.
Another 11 of the 50 apps introduced blanket content filters instead. This approach was taken by several of the major AI chatbots available in Australia, including OpenAI's ChatGPT, Replika, and Anthropic's Claude.
Character.AI, meanwhile, has restricted open-ended chats to users 18 and over; HammerAI has outright prevented users in Australia from accessing its products and services.
The remaining AI applications offered no functional age-filtering systems, and some didn't even provide an email address where users could report potential breaches, the report claims.
Elon Musk's Grok, meanwhile, has implemented measures to prevent the generation of pornographic content involving minors, albeit only in regions where it was required to do so by law. Apple, notably, said nothing about the matter at the time.
While many of the applications mentioned in the report are available on the App Store, they don't all share the same rating. Character.AI, for instance, is rated 18+, while Grok carries a 16+ rating. The ChatGPT app, meanwhile, is rated 13+.
These age ratings seem unevenly applied and enforced, which could soon prove to be a problem.
Apple has already upgraded the age-assurance tools available to developers, but even with those updated verification tools in place, fines are not out of the question.