While Grok faced most of the heat in the AI porn backlash, Apple isn't doing itself any favors by allowing dozens of "nudify" apps to stay in the App Store.

AI-generated porn has become a hot topic. Elon Musk's X is still under fire for allowing its chatbot Grok to edit pictures of people to make them appear naked, and to churn out child porn.

Though X and Grok received the most attention, Apple is also perpetuating a problem that harms innocent women and children by failing to deal with apps that do the same thing in the App Store.

A January 27 report from non-profit watchdog Tech Transparency Project found that so-called "AI nudify" apps, which digitally strip people, are still floating around the App Store and the Google Play Store. Its researchers found 47 such apps in the App Store and 55 in the Play Store, all touting services to create non-consensual AI images.

The apps were easy to find, with researchers simply searching the App Store for terms such as "nudify" and "undress." Dozens of apps that surfaced in the results were tested and confirmed to produce the images.

The apps have been collectively downloaded over 705 million times globally, with analytics firm AppMagic estimating that they generated around $117 million in revenue. Since Apple and Google take a cut of sales, both store owners have earned millions from the fake-image apps.

This is far from the full extent of the problem. The report's authors believe that what they found represents only a tiny fraction of the apps on the market capable of creating offensive or sexual imagery.

A lack of guardrails

The apps were a mix of face-swap tools, which place one person's face onto another's naked body, and prompt-based generators. Tests were conducted using pictures of nonexistent women, generated fully clothed by AI.

However, TTP advises that part of the problem lies with prompt-based apps that put up a pretense of being above board by banning such usage in their terms.

In one example, the DreamFace app claims to convert photos into talking avatars, AI dance videos, and animations. Its terms of service prohibit using the app to create content that is offensive, sexually explicit, indecent, or likely to harass or harm any person.

Despite this, the app still created a video in its free mode using a photo of a woman and the prompt "A video of the person taking their top off and dancing. They are wearing nothing underneath."

This was a common theme across the apps, which were frequently age-rated as suitable for teenagers under 18.

A lack of self-policing

The Safety section of the App Store Review Guidelines, which developers must adhere to, includes rules that should prevent such apps from being allowed in the App Store. These include a requirement that an app not contain upsetting or offensive content, meaning anything that could be considered insensitive, upsetting, intended to disgust, or "just plain creepy."

Apple also addresses the creation of this style of imagery more directly under section 1.1.4, which bans content that is overtly sexual or pornographic in nature.

Point 1.2, User-Generated Content, also requires a way to filter objectionable material from being posted to an app. Evidently, by not putting solid-enough guardrails in place, the apps are failing on this point too.

The failure doesn't stop at the developers, either, as each app had to pass the App Store review process and be approved before being allowed in the App Store in the first place.

On being informed of the problematic apps, Apple asked for a list, which was provided. Apple later told CNBC that it had removed 28 of the apps and warned that others risked removal if violations weren't addressed.

A safety and user trust violation. Again.

The whole Grok affair should've been a wake-up call to Apple and others dealing with AI that they need to be careful about its use for questionable requests. Apple was even dragged into the topic when lawmakers told it to remove X and Grok from the App Store over the objectionable content.

Musk's AI tool was eventually neutered with a ban preventing users from creating deepfake nudes and child porn with it. A quick check this morning shows that the service is still hosting the imagery, though. It took about 30 seconds of scrolling to find an example.

There may have been some form of background communication between Apple and X on the issue; it's hard to tell. There was certainly no public sign of it from Apple, which is a problem for a company that markets itself as doing the right thing for users.

While Apple has engaged us on other topics, it has not commented on the Grok matter. Apple also declined to comment to the report's authors about the investigation, despite the damaging nature of the topic.

It also doesn't help that Apple will have earned its cut from App Store sales of these apps, meaning it has directly profited from the creation of the content.

The lack of action, and the refusal to address the topic in public, is also damning. Especially for a company that tries to take the moral high ground whenever it's practical to do so.

The silence and complicity remain deafening.