West Virginia's attorney general believes iCloud is the greatest platform ever made to distribute child porn, and the state is the first government to sue Apple after a previous class action failed.
The attorney general's office cited a 2020 text message in which a then-Apple anti-fraud lead said the company's decisions over how it handles iCloud and stores images make it "the greatest platform for distributing child porn."
In a statement to Reuters, Attorney General JB McCuskey argued that Apple's inaction is "inexcusable."
"These images are a permanent record of a child's trauma, and that child is revictimized every time the material is shared or viewed," McCuskey said in the statement.
His office is now seeking statutory and punitive damages. The suit also asks a Mason County Circuit Court judge to force Apple to take new measures to detect abusive material.
McCuskey's office called the case the first of its kind brought by a government agency over the distribution of child sexual abuse material, or CSAM, on Apple's servers.
While Apple has previously denied any wrongdoing in similar lawsuits, it has yet to publicly respond this time.
It's not clear what proof West Virginia will provide, nor how it will establish monetary damages or show that Apple broke the law. In the eyes of the law, a public-facing host, such as X with its Grok-generated CSAM, is one thing.
The legal responsibility for non-public data repositories is murkier. Apple will likely invoke Section 230 of the Communications Decency Act to try to get the suit dismissed, as it has in other cases. How that will play out, and how Section 230 applies to data repositories that aren't publicly browsable and are held by companies that don't generate the content, remains to be seen.
Apple's abandoned solution
Apple's attempts at dealing with CSAM caused controversy in 2021 when the company announced plans for an automated detection system. That system would check hashed versions of files stored in iCloud against the hashes of known CSAM material.
Files identified as CSAM would then be reported to the National Center for Missing & Exploited Children. But the plans drew the ire of privacy advocates who argued that the system could be adapted by governments to identify other kinds of material.
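The core mechanic, matching a file's hash against a list of known-bad hashes, is simple to illustrate. The Swift sketch below uses a plain SHA-256 digest and a hard-coded placeholder hash set purely so the example is self-contained; Apple's proposed system instead relied on NeuralHash, a perceptual hash designed to survive resizing and re-encoding, matched on-device against a blinded database rather than a simple set like this.

```swift
import Foundation
import CryptoKit

// Placeholder set of known flagged digests (hex-encoded SHA-256 values).
// In a real deployment this would be a large database of hashes of known
// CSAM supplied by child-safety organizations, not a hard-coded list.
let knownFlaggedHashes: Set<String> = [
    // SHA-256 of empty data, included only so the example runs end to end
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
]

/// Returns true if the file's digest matches a known flagged hash.
func matchesKnownHash(_ fileData: Data) -> Bool {
    let digest = SHA256.hash(data: fileData)
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    return knownFlaggedHashes.contains(hex)
}

print(matchesKnownHash(Data()))        // true, matches the placeholder hash
print(matchesKnownHash(Data([0x01])))  // false
```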
Apple confirmed that it had shelved its CSAM detection plans in December 2022. In August 2023, Erik Neuenschwander, Apple's director of user privacy and child safety, sought to explain the decision.
Neuenschwander argued that scanning every user's iCloud data would open the door for data thieves and other threat actors. He added that the move would be "a slippery slope of unintended consequences."
A muddled approach
Apple's current approach includes a number of features that it believes help protect against CSAM distribution. Images of sensitive material are automatically blurred on devices used by children, for example.
While welcome, features like this only work when the recipient doesn't want to see such material. They do little to deter those who are purposely sharing CSAM via Apple's servers.
At first blush, the option to enable end-to-end encryption for iCloud data might seem to offer cover to those spreading CSAM, but that isn't necessarily the case.
Apple calls the feature Advanced Data Protection for iCloud, but that's really just a fancy term for end-to-end encryption. It protects the majority of a user's iCloud data, but not all of it.
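In concrete terms, end-to-end encryption means data is encrypted on the user's device with a key that Apple never holds, so iCloud only ever stores ciphertext. Here's a minimal conceptual sketch of that idea using Swift's CryptoKit; it is an illustration only, not Apple's actual Advanced Data Protection design, which uses a far more elaborate key hierarchy.

```swift
import Foundation
import CryptoKit

// A key generated and kept on the user's device. Under end-to-end
// encryption the server never sees this key, only the ciphertext below.
let deviceOnlyKey = SymmetricKey(size: .bits256)

/// Encrypts a photo (or any blob) on-device before it is uploaded.
func encryptForUpload(_ plaintext: Data) throws -> Data {
    let sealedBox = try AES.GCM.seal(plaintext, using: deviceOnlyKey)
    return sealedBox.combined!  // nonce + ciphertext + tag, safe to store remotely
}

/// Decrypts a blob after it has been downloaded back to the device.
func decryptAfterDownload(_ ciphertext: Data) throws -> Data {
    let sealedBox = try AES.GCM.SealedBox(combined: ciphertext)
    return try AES.GCM.open(sealedBox, using: deviceOnlyKey)
}
```

Because the server only ever stores the output of encryptForUpload, it has no way to inspect the underlying photo, which is why any scanning of end-to-end encrypted content would have to happen on the device itself.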
Enabling Advanced Data Protection for iCloud encrypts data stored in iCloud Backup, Photos, Notes, and more. But there are some notable cases where data is still handled without end-to-end protection.
Apple's support documentation notes that end-to-end encryption covers data shared via iCloud Shared Photo Library, iCloud Drive shared folders, and shared Notes. But anything shared via iWork collaboration, the Shared Albums feature in Photos, or with "anyone with the link" is not covered.
In practice, that means any CSAM shared via those methods could potentially be identified by Apple. But until Apple shares more details about how it handles CSAM detection in 2026, it's unclear whether that actually happens.
Apple has so far been cagey about exactly what other steps it takes to prevent the spread of CSAM. The company reports the CSAM it does discover to authorities, but, again, little is known about how that works, and Apple is known to report far less of it than either Google or Facebook.
Apple's position in the market is that of the privacy-first option. That has so far kept the company from taking the traditional, and highly public, steps to identify CSAM.
Google, Microsoft, and other platform holders routinely scan photos and email attachments for CSAM identifiers.
Apple is likely to leverage Section 230 of the Communications Decency Act in its defense. The law offers some protection for internet companies, preventing them from being held responsible for content generated by their users, provided they make a good-faith effort to moderate public-facing user-generated content.
However, it's unclear whether the company can argue that it isn't responsible for user content while also reporting that same content to authorities.