The Facebook-owned WhatsApp regularly boasts of using end-to-end encryption and keeping communications between users private, but a report alleges that some monitoring of messages does take place, and that Mark Zuckerberg may not have told the truth to the U.S. Senate.
In 2016, WhatsApp announced it was using end-to-end encryption for all communications on its platform, covering everything from messages to file transfers. The use of end-to-end encryption is intended to offer users a level of privacy and security, but it seems that may not be entirely the case for the messaging app.
In a report by ProPublica, it is claimed WhatsApp employs more than 1,000 contract workers in Austin, Texas, Dublin, and Singapore, specifically to examine "millions of pieces of users' content." The workers "use special Facebook software" to look through messages and content that have been flagged by WhatsApp users and screened by AI systems.
The reviews occur in spite of an assurance that appears in the app before users send messages for the first time, claiming "No one outside of this chat, not even WhatsApp, can read or listen to them."
In 2018 testimony to the U.S. Senate, Facebook CEO Mark Zuckerberg claimed "We don't see any of the content in WhatsApp."
The report adds that the claims are bolstered by a whistleblower complaint filed with the U.S. Securities and Exchange Commission in 2020. The complaint detailed WhatsApp's use of external contractors, AI systems, and account information to monitor user messages, images, and videos, and alleged that WhatsApp's claims about protecting user privacy are false.
"We haven't seen this complaint," said a WhatsApp spokesperson. The SEC has also, so far, not acted on the complaint in public.
While the claims of message monitoring suggest WhatsApp's end-to-end encryption may not be as airtight as advertised, there is still some truth to its privacy credentials.
As content is encrypted, automated systems are unable to scan it by default. Instead, after a user reports a message, that message, along with the four preceding messages and any supporting content, is sent to WhatsApp in unencrypted form.
In effect, end-to-end encryption is maintained with no backdoors in the encryption itself, but WhatsApp's app is able to leak the information out at either end of the communication.
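The mechanism described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the class and method names are invented, and a trivial XOR cipher stands in for WhatsApp's actual Signal-protocol encryption. The point is that reporting requires no backdoor, because the receiving app already holds the plaintext.

```python
from collections import deque

class Client:
    """Hypothetical messaging client. Each endpoint decrypts messages
    locally, so it naturally holds recent plaintext history."""

    REPORT_WINDOW = 5  # the reported message plus the four before it

    def __init__(self):
        self.history = deque(maxlen=self.REPORT_WINDOW)

    def receive(self, ciphertext, key):
        # A toy XOR cipher stands in for the real end-to-end encryption;
        # the server relaying the ciphertext cannot read it.
        plaintext = bytes(c ^ key for c in ciphertext).decode()
        self.history.append(plaintext)
        return plaintext

    def report_latest(self):
        # On report, the app forwards recent *plaintext* to the service.
        # No key is disclosed; the endpoint simply shares what it can read.
        return list(self.history)

key = 42
client = Client()
for msg in ["hello", "spam offer 1", "spam offer 2",
            "click this link", "last chance", "abusive message"]:
    ciphertext = bytes(ord(ch) ^ key for ch in msg)
    client.receive(ciphertext, key)

# Only the five most recent messages leave the device; "hello" never does.
print(client.report_latest())
```

The sketch shows why WhatsApp can argue the encryption itself is intact: the server only ever handles ciphertext, and disclosure happens at the endpoint, at the user's request.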
In a statement, WhatsApp said it builds the app "in a manner that limits the data we collect while providing us tools to prevent spam, investigate threats, and ban those engaged in abuse, including based on user reports we receive." WhatsApp also emphasized the trust and safety team, the work of security experts, and its introduction of new privacy features.
This is not the first time WhatsApp has had to deal with allegations surrounding its encrypted messaging service. In 2017, there were claims that a backdoor had been discovered that allowed Facebook to see the contents of encrypted messages. At the time, WhatsApp denied any backdoor was in use.
Updated at 11:36 A.M. Eastern: Facebook reached out to AppleInsider shortly after publication to reiterate points that we already made in the article. In turn, we asked how WhatsApp's moderation squares with Zuckerberg's testimony before Congress in 2018.
"WhatsApp provides a way for people to report spam or abuse, which includes sharing the most recent messages in a chat," Facebook said in a return email. "This feature is important for preventing the worst abuse on the internet. We strongly disagree with the notion that accepting reports a user chooses to send us is incompatible with end-to-end encryption."