Snap's My AI chatbot is under the microscope in the UK, as concerns rise about its potential privacy risks for kids. The Information Commissioner's Office (ICO), the UK's authority on data privacy, has issued an initial warning to Snap. This move puts pressure on the tech company to prove that its chatbot doesn't endanger the privacy of its younger users.

Snap has over 21 million monthly users in the UK alone, many of them teenagers

The ICO's early findings suggest that Snap may have overlooked critical privacy risks for children when launching My AI. While this initial notice isn't a final judgement, it does mean that Snap could face enforcement action, including a possible ban on My AI in the UK, if it doesn't make changes.


Snap defended itself, stating that My AI underwent thorough legal and privacy checks before its public launch. The stakes are high for Snap, which has 21 million monthly users in the UK alone, many of whom are teenagers aged between 13 and 17. My AI’s launch marked the first time a generative AI chatbot was added to a major messaging app in the country.

But it’s not just data privacy that has people worried. Parents are also concerned about the ethical implications of their kids interacting with AI. Some find it difficult to teach their children the difference between talking to a human and a machine, especially when the chatbot is designed to simulate human-like conversations.

This issue isn’t new for the ICO, which earlier fined TikTok £12.7 million for mishandling kids’ personal information. It’s a stark reminder that regulatory bodies are increasingly stepping in to safeguard children as technology advances.

