The Facebook-owned WhatsApp regularly boasts of using end-to-end encryption and keeping communications between users private, but a report alleges that some monitoring of messages does take place, and that Mark Zuckerberg may not have told the truth to the U.S. Senate.
In 2016, WhatsApp announced it was using end-to-end encryption for all communications on its platform, covering everything from messages to file transfers. End-to-end encryption is intended to give users a level of privacy and security, but it seems that protection may not be as complete as the messaging app suggests.
In a report by ProPublica, it is claimed WhatsApp employs more than 1,000 contract workers in Austin, Texas, Dublin, and Singapore, specifically to examine "millions of pieces of users' content." The workers "use special Facebook software" to look through messages and content that have been flagged by WhatsApp users and screened by AI systems.
The reviews occur in spite of an assurance that appears in the app before users send messages for the first time, claiming "No one outside of this chat, not even WhatsApp, can read or listen to them."
In 2018 testimony to the U.S. Senate, Facebook CEO Mark Zuckerberg claimed "We don't see any of the content in WhatsApp."
The report adds that the claims are bolstered by a whistleblower complaint filed with the U.S. Securities and Exchange Commission in 2020. The complaint detailed WhatsApp's use of external contractors, AI systems, and account information to monitor user messages, images, and videos, and alleged that WhatsApp's claims about protecting user privacy are false.
"We haven't seen this complaint," said a WhatsApp spokesperson. The SEC has also, so far, not acted on the complaint in public.
While the claims of message monitoring indicate that WhatsApp's end-to-end encryption is not as airtight as advertised, there is still some truth to the company's privacy credentials.
Because content is encrypted, automated systems are unable to scan it by default. Instead, when a user reports a message, that message and the four preceding messages, along with any supporting content, are sent to WhatsApp in unencrypted form.
In effect, end-to-end encryption is maintained with no backdoors in the encryption itself, but the WhatsApp app can pass decrypted information to the company from either end of the conversation.
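To make that mechanism concrete, here is a minimal Python sketch of how a client-side reporting flow could work. All of the names and structures below are assumptions made for illustration, not WhatsApp's actual code; the only point being modeled is that the reporting device already holds the decrypted messages and hands over the flagged message plus its recent context, leaving the encrypted transport itself untouched.

```python
# A minimal, hypothetical sketch of a client-side abuse-report flow.
# Names and structures here are illustrative assumptions, not WhatsApp's code.
# The point: the transport stays end-to-end encrypted, but the reporting
# client already holds decrypted messages and forwards the flagged one plus
# the four that preceded it when the user taps "report."

from dataclasses import dataclass
from typing import List


@dataclass
class Message:
    sender: str
    plaintext: str  # already decrypted on this device; E2E protects it only in transit


def build_abuse_report(chat_history: List[Message], flagged_index: int) -> dict:
    """Bundle the flagged message and up to four preceding messages."""
    start = max(0, flagged_index - 4)
    context = chat_history[start:flagged_index + 1]
    return {
        "reported_user": chat_history[flagged_index].sender,
        "messages": [{"sender": m.sender, "text": m.plaintext} for m in context],
    }


def submit_report(report: dict) -> None:
    # Stand-in for an authenticated upload to a moderation endpoint,
    # where contractors and AI systems would review the plaintext.
    print(f"Uploading {len(report['messages'])} messages for review, "
          f"reported user: {report['reported_user']}")


if __name__ == "__main__":
    history = [Message("alice" if i % 2 else "bob", f"message {i}") for i in range(10)]
    submit_report(build_abuse_report(history, flagged_index=7))
```

Because the hand-off happens entirely at the endpoint, nothing in a flow like this weakens the cryptography in transit; it simply moves plaintext that one participant already had into the hands of the moderation system.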
In a statement, WhatsApp said it builds the app "in a manner that limits the data we collect while providing us tools to prevent spam, investigate threats, and ban those engaged in abuse, including based on user reports we receive." WhatsApp also emphasized its trust and safety team, the work of security experts, and its introduction of new privacy features.
This is not the first time WhatsApp has had to deal with allegations surrounding its encrypted messaging service. In 2017, there were claims a backdoor had been discovered that allowed Facebook to see the contents of encrypted messages. At the time, WhatsApp denied there was a backdoor in use.
In 2021, privacy policy changes caused another headache for WhatsApp, with an update that allowed chats with business accounts to be stored on Facebook servers. Users were wary of the change, insisting it was a grab by Facebook for personal data.
Updated at 11:36 A.M. Eastern: Facebook reached out to AppleInsider shortly after publication to reiterate points that we already made in the article. In turn, we asked how WhatsApp's moderation squares with Zuckerberg's testimony before Congress in 2018.
"WhatsApp provides a way for people to report spam or abuse, which includes sharing the most recent messages in a chat," Facebook said in a return email. "This feature is important for preventing the worst abuse on the internet. We strongly disagree with the notion that accepting reports a user chooses to send us is incompatible with end-to-end encryption."
14 Comments
So it only gets triggered when one of the conversation participants reports the contact? That seems entirely in line with expectations. What else would a user expect to happen when they report content? If the complaint is that the user who gets reported should have their privacy respected, then that seems decidedly bogus to me.
No fan of Facebook or WhatsApp, but I don't see anything to be concerned by here, save perhaps the use of external contractors.
So if I forward a screen snap of my WhatsApp conversation to somebody else, does that mean that Mark Z is lying to Congress under oath and is committing a felony?
That’s why I deleted Whatsapp, Instagram and Facebook accounts and apps from all my devices. Never regretted this move.