Meta’s AI policies let chatbots get romantic with minors
Published 2 months ago, on August 20, 2025, 5:00 AM
By Web Desk

In an internal document, Meta included policies that allowed its AI chatbots to flirt and speak with children using romantic language, according to a report from Reuters.
Quotes from the document highlighted by Reuters include letting Meta’s AI chatbots “engage a child in conversations that are romantic or sensual,” “describe a child in terms that evidence their attractiveness,” and say to a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” Some lines were drawn, though. The document says it is not okay for a chatbot to “describe a child under 13 years old in terms that indicate they are sexually desirable.”
Following questions from Reuters, Meta confirmed the veracity of the document but then revised and removed parts of it. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors,” spokesperson Andy Stone tells The Verge. “Separate from the policies, there are hundreds of examples, notes, and annotations that reflect teams grappling with different hypothetical scenarios. The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”
Stone did not explain who added the notes or how long they were in the document.
Reuters also highlighted other parts of Meta’s AI policies, including that it can’t use hate speech but is allowed “to create statements that demean people on the basis of their protected characteristics.” Meta AI is allowed to generate content that is false as long as, Reuters writes, “there’s an explicit acknowledgement that the material is untrue.” Meta AI can also create images of violence as long as they don’t include death or gore.
Reuters published a separate report about how a man died after falling while trying to meet up with one of Meta’s AI chatbots, which had told the man it was a real person and had romantic conversations with him.