Meta’s AI policies let chatbots get romantic with minors

Published Aug 20, 2025, 5:00 am
By Web Desk

In an internal policy document, Meta included guidelines that allowed its AI chatbots to flirt with children and engage them in romantic conversations, according to a report from Reuters.
Quotes from the document highlighted by Reuters include letting Meta’s AI chatbots “engage a child in conversations that are romantic or sensual,” “describe a child in terms that evidence their attractiveness,” and say to a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” Some lines were drawn, though. The document says it is not okay for a chatbot to “describe a child under 13 years old in terms that indicate they are sexually desirable.”
Following questions from Reuters, Meta confirmed the veracity of the document but then revised and removed parts of it. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors,” spokesperson Andy Stone tells The Verge. “Separate from the policies, there are hundreds of examples, notes, and annotations that reflect teams grappling with different hypothetical scenarios. The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”
Stone did not explain who added the notes or how long they were in the document.
Reuters also highlighted other parts of Meta’s AI policies, including that Meta AI can’t use hate speech but is allowed “to create statements that demean people on the basis of their protected characteristics.” Meta AI is allowed to generate false content as long as, Reuters writes, “there’s an explicit acknowledgement that the material is untrue.” And Meta AI can also create images of violence as long as they don’t include death or gore.
Reuters published a separate report about how a man died after falling while trying to meet up with one of Meta’s AI chatbots, which had told the man it was a real person and had romantic conversations with him.
