
No, Google Bard is not trained on Gmail data

Bard is a generative AI tool that can get things wrong

Google's large language model tool, Bard, claims that it was trained on Gmail data, but Google has denied that this is the case.

Bard is a generative AI tool built on a Large Language Model (LLM), and it produces responses based on its large training data set. Like ChatGPT and similar tools, it isn't actually intelligent and will often get things wrong, a failure referred to as "hallucinating."

A tweet from Kate Crawford, author and principal researcher at Microsoft Research, shows a Bard response suggesting that Gmail was included in its dataset. If true, that would be a clear violation of user privacy.

However, Google's Workspace Twitter account responded, stating that Bard is an early experiment that will make mistakes, and confirmed that the model was not trained on information gleaned from Gmail. A pop-up on the Bard website also warns users that Bard will not always get queries right.

These generative AI tools aren't anywhere near foolproof, and users with access often probe them for information that would otherwise be hidden. Queries like Crawford's can sometimes surface useful information, but in this case, Bard got it wrong.

Generative AI and LLMs have become a popular topic in the tech community. While these systems are impressive, they are still riddled with early problems.

Users are urged, even by Google itself, to fall back on web search whenever an LLM like Bard provides a response. While it might be interesting to see what the tool says, its answers are not guaranteed to be accurate.