If you’re using WhatsApp, think twice before using Meta’s built-in AI chatbot. As one user recently discovered, it might just hand over your phone number to a total stranger.
Barry Smethurst, a record shop worker, was trying to get to Manchester from Saddleworth by train when he asked Meta AI for the customer support contact for TransPennine Express. The chatbot gave him a number, except it didn’t belong to the British train operating company.
Instead, Meta’s AI chatbot gave him the private number of James Gray, a WhatsApp user living nearly 170 miles away from Smethurst in Oxfordshire. When Smethurst confronted Meta AI about the number, which looked like a private phone number rather than a customer support line, the chatbot immediately backtracked, telling him it shouldn’t have saved the number, and then tried to change the subject.
When asked why it shared the private contact number of another individual, Meta AI gave a vague answer, explaining that the number was generated “based on patterns.” To make matters worse, Meta AI claimed the number was fictional and didn’t belong to anyone.
Smethurst corrected it again, stating that the number was, in fact, real and belonged to Gray. The chatbot admitted Smethurst was right and claimed the number had been mistakenly pulled from a database. On further questioning, it contradicted itself again, saying it hadn’t pulled the number from a database after all. Instead, Meta AI claimed it had generated a string of digits fitting the format of a UK phone number without referring to any real data or contacts in the process.
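To see why that explanation is still worrying, here’s a minimal Python sketch (a hypothetical illustration, not Meta’s actual code) of how producing digits that merely fit the UK phone number format can, purely by coincidence, land on someone’s real number:

import random

def generate_uk_mobile_like_number() -> str:
    # Hypothetical illustration only: UK mobile numbers are typically
    # written as "07" followed by nine more digits. Matching that
    # pattern says nothing about whether the result is unassigned or
    # happens to be a real, in-service number.
    return "07" + "".join(random.choice("0123456789") for _ in range(9))

# A made-up "real" number used only for this demonstration.
someones_real_number = "07123456789"

sample = generate_uk_mobile_like_number()
print(sample)
print("Coincidental match with a real number:", sample == someones_real_number)

The point is simply that “fitting the format” and “belonging to nobody” are not the same thing, which is exactly what tripped the chatbot up here.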

Smethurst tried to get Meta AI to admit wrongdoing, but the conversation went in circles with the chatbot contradicting itself and trying to change the topic. In the end, Smethurst told Meta AI:
Just giving a random number to someone is an insane thing for an AI to do.
Speaking to The Guardian, Gray said he hadn’t received calls from strangers mistaking him for a train operator’s customer support line. However, he was concerned about his privacy, asking, “If it’s generating my number could it generate my bank details?”
A Meta spokesperson told The Guardian that in this case, Gray’s phone number was already publicly available on his website and shared the first five digits with the TransPennine Express customer service number. They also added that “Meta AI is trained on a combination of licensed and publicly available datasets, not on the phone numbers people use to register for WhatsApp or their private conversations.”
How Can You Protect Your Data From AI Bots?
If some of your information is on the internet, it has likely already been scraped by one AI bot or another. The best you can do at this point is limit your use of AI services and avoid sharing personal or sensitive information when interacting with AI chatbots.

If you’re using Meta AI, there are settings you should change immediately to protect your data. However, apart from limiting what information you expose to an AI bot or the public internet going forward, there’s not much you can do.
Developers are working on addressing hallucinations in AI models. Until then, the possibility of an AI chatbot handing your details to a total stranger in a misguided attempt to be helpful can’t be ruled out.