When AI Chatbots Leak Phone Numbers: A Privacy Nightmare

Generative AI tools like Google Gemini, ChatGPT, and Claude are increasingly surfacing real phone numbers and other personally identifiable information (PII) in their responses—often with no easy way for victims to stop it. Reddit users, software engineers, and academics have all reported incidents where chatbots exposed their personal contact details to strangers. Experts warn that these privacy lapses are likely tied to PII in training data, and the problem appears to be growing. Below, we answer key questions about this disturbing trend.

What specific incidents have been reported?

In March, Daniel Abraham, a software engineer in Israel, was contacted on WhatsApp after Google's Gemini chatbot gave out his real phone number as part of incorrect customer service instructions. In April, a University of Washington PhD candidate discovered that Gemini could be prompted to reveal a colleague's personal cell number. A Redditor also described being inundated for a month with calls from strangers seeking a lawyer, a locksmith, or a product designer—apparently misdirected by Google's AI. These cases highlight a disturbing pattern: generative AI is inadvertently leaking phone numbers that individuals never intended to share publicly.

Source: www.technologyreview.com

Why are AI chatbots exposing real phone numbers?

Experts believe the root cause is that personally identifiable information (PII) has been included in the training data of large language models (LLMs). When people's phone numbers appear on public websites, social media profiles, or business directories, the AI may memorize them and later regurgitate them in responses—especially if the model is prompted in a certain way. However, the exact mechanism is still unclear. In some cases, the chatbot inadvertently combines fragments of real data to reconstruct a number; in others, it simply repeats a number it has seen. Because these models are black boxes, pinpointing the precise trigger for each leak is extremely difficult.

How often does this happen?

It is impossible to know the full scale of phone number leaks from chatbots, but experts say the problem is likely far more common than public reports suggest. DeleteMe, a company that helps individuals remove personal data from the internet, reports a 400% increase in customer queries about generative AI in the last seven months—amounting to several thousand new requests. Of those, 55% involve ChatGPT, 20% Gemini, 15% Claude, and 10% other AI tools. The surge indicates that many people are discovering their information has been exposed without their knowledge.

What types of privacy complaints do people have?

According to DeleteMe CEO Rob Shavell, customer complaints fall into two main categories. Either way, the result is real harm: unwanted calls, spam, or even harassment for the person whose number is leaked.


Can individuals prevent AI from revealing their phone numbers?

Unfortunately, there is currently no easy way to stop it. Even if you remove your phone number from public directories or manage your online footprint carefully, the LLM may have already ingested it via training data. You cannot retroactively “un-train” a model. Tech companies have not yet established robust opt-out mechanisms for personal data already in their systems. While some providers allow you to request removal of specific search results, the process is cumbersome and not tailored to generative AI. The power to enforce privacy rests largely with the companies that own the models—and they have been slow to act.

What does this mean for online privacy?

These incidents underscore a broader crisis in the age of generative AI. PII that is public can be scraped, memorized, and reproduced at scale, making traditional privacy safeguards obsolete. Researchers have warned for years that LLMs pose a grave risk to personal data, and real-world cases are now proving them right. Without new regulations, better model design, and transparent data curation, the problem will likely worsen as AI chatbots become more integrated into search and customer service. For now, individuals may have little recourse except to closely monitor what their chatbots reveal—and hope their own numbers don't appear in someone else's conversation.
