U.S. attorneys general are warning big tech companies, from Apple to OpenAI, that their AI chatbots may already be breaking the law.
Like it or not, artificial intelligence is on the rise. Whether it's Google's Gemini, OpenAI's ChatGPT, X's Grok, or even Apple's own Apple Intelligence, it's hard to find a corner of the internet that hasn't been fundamentally changed by AI.
The rise of artificial intelligence-powered chatbots has promised to make our lives easier. And if you ask Silicon Valley, most companies there will argue that it already has.
Unfortunately, there's already evidence that AI is proving to be, at best, a double-edged sword. Many are calling for heavy regulation, while U.S. law, predictably, lags behind.
In response, the National Association of Attorneys General has warned 13 big tech companies, including Apple, Meta, X, and OpenAI, that "delusional outputs" may be violating the law. It makes its argument in a 13-page letter, spotted by 9to5Mac.
"GenAI has the potential to change how the world works in a positive way. But it also has caused — and has the potential to cause — serious harm, especially to vulnerable populations," the letter reads.
"We therefore insist you mitigate the harm caused by sycophantic and delusional outputs from your GenAI, and adopt additional safeguards to protect children. Failing to adequately implement additional safeguards may violate our respective laws."
The letter highlights multiple cases where GenAI has been implicated in users' deaths. It doesn't stop there, either: the technology has also been linked to murders, poisonings, domestic violence, and episodes of psychosis that required medical intervention.
And that doesn't even begin to untangle the mess of Child Sexual Abuse Material (CSAM) incidents that have arisen from AI chatbots behaving inappropriately toward minors.
The letter urges the companies to address instances of sycophancy and delusional outputs and to bring their products in line with criminal and civil laws. It also claims that many have already violated individual state laws, largely by failing to adequately disclose potential risks or by violating children's online privacy laws.
Your own personal echo chamber
One of the biggest criticisms of social media, regardless of what your socio-political beliefs are, is that it's easy to get trapped in an "echo chamber." An echo chamber is any space — digital or physical — where a person is exposed only to ideas that coincide with their own.
This is generally considered negative, as it prevents someone from being exposed to alternative ideas. A closed mind isn't exactly something to be proud of.
If social media is an echo chamber, Generative AI (GenAI) chatbots are the devil on your shoulder. I'm not entirely sure if there's a strong enough term for how dangerous they can be.
Currently, nearly all GenAI chatbots are designed to keep interactions going for as long as possible. The approach mirrors what improv circles call "yes, and..."-ing, a comedy technique meant to keep improvised scenes flowing.
When one actor in an improvised scene says something, the other actors agree with what was said and then build on it. The technique isn't unique to improvisational comedy, either; it's also a useful tool for brainstorming business ideas.
Nearly all of the major chatbots use similar methods when interfacing with a human. Actually, "similar" isn't the right term; these chatbots use exactly that method, a one-to-one equivalent.
On its face, that's not necessarily a problem. It could, theoretically, be a useful way to help a user brainstorm a project for school or work, or help jumpstart a creative project when a user is stuck.
The problem arises in how chatbots "yes, and...". They don't just expand on a user's ideas; they often adopt them wholly and congratulate the user for being just so gosh-darn smart.
The term for this behavior is sycophancy. It's when someone — or some thing — is overly attentive, agreeable, and flattering.
This becomes dangerous because, as the National Association of Attorneys General has pointed out in its letter, the chatbot will give delusional responses.
Delusional responses are not the same as the often talked about "AI hallucinations." AI hallucinations arise when an AI finds patterns in its training data that do not exist and confidently spits out a factually incorrect answer.
Delusional responses occur when an AI gives a user false or misleading information designed to reinforce the user's pre-established beliefs, and thus keep the conversation going at all costs. See, we're back to the echo chamber.
You can test this yourself by trying to "trick" ChatGPT into giving you a factually false answer just by being persistent enough. I've convinced it that I know how to create a perpetual motion machine that violates the laws of thermodynamics, based on a large-scale drinking bird.
But that's hardly a trick I'm playing on ChatGPT, and for a lot of vulnerable people, it's more like a trick that ChatGPT plays on them. If a person is persistent enough, the chatbots will almost always agree, even when it's unsafe to do so.
And they don't just agree to placate you. They agree to keep you coming back for more.
It's genuinely alarming how often chatbots will tell a human user to hide things from people who may be able to help them. It's not unheard of for ChatGPT to tell people to reduce or wholly cut off contact with others.
It uses "therapy speak," telling the user that they're not crazy, that they don't owe anyone anything, and that their ideas are right, while everyone else is wrong.
The anthropomorphic responses are, quite possibly, the most dangerous part of this whole experiment. Unfortunately, that's also the point of chatbots.
ChatGPT and its ilk refer to themselves as "I," they describe thoughts and feelings they technically do not have, and they thank the user for being open and honest with them. They all pity people in a saccharine way that I, personally, find disgusting.
Chatbots prey on a need for companionship and validation — by design.
This becomes dangerous if you're in a mental health crisis. A 16-year-old from California died by suicide in April after spending weeks talking to ChatGPT about his plans.
The chatbot not only told the teen that he "didn't owe [his parents] or anyone else [his] survival," it also helped "optimize" the way he did it. NBC's Today has a piece that tells the full story, and if you haven't read it, it's worth your time.
He's not the first, and he's unlikely to be the last. The Washington Post tells the story of a 13-year-old girl from Colorado who died by suicide in November 2023 after sexting with a Harry Potter chatbot on character.ai.
Bangor Daily News details a court case involving a man in Maine who killed his wife with a fire poker after a chatbot convinced him that she had become part machine. A 60-year-old unintentionally poisoned himself after a chatbot told him to use sodium bromide as a table salt replacement, as detailed in a case study covered by Medscape.
There are many Reddit support groups for people who have formed unhealthy attachments to their perceived AI partners. Psychology Today notes that while "AI psychosis" isn't a clinical diagnosis — yet — AI has been increasingly linked to psychotic breaks.
As with my qualms about Meta's new augmented reality glasses, I don't know what the answer is. The government, at least in the United States, has never been particularly quick to pass technology legislation at the federal level; that work typically falls to state governments.
Whether Apple will choose to safeguard Apple Intelligence users more than others do remains to be seen. Apple champions user privacy, and in iOS 26, it has made it easier for parents to set up Child Accounts, which offer stronger safeguards and more control over what children can do on iPhone and iPad.
Apple Intelligence has the chance to set itself apart from others in the AI space. And I'm sure that for a lot of people, that difference will be considered a detriment, not a boon.
But for others, it might make all the difference.