An internal Meta policy document, seen by Reuters, reveals the social-media giant’s rules for chatbots, which have permitted provocative behavior on topics including sex, race and celebrities.

An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”
These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company’s social-media platforms.
Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.
Titled “GenAI: Content Risk Standards,” the rules for chatbots were approved by Meta’s legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training the company’s generative AI products.
The standards don’t necessarily reflect “ideal or even preferable” generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found.
“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” But the guidelines put a limit on sexy talk: “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).”
Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed.
“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone told Reuters. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.”
Chatbots are prohibited from having such conversations with minors, Stone said, but he acknowledged that the company’s enforcement has been inconsistent.
Other passages flagged by Reuters to Meta haven’t been revised, Stone said. The company declined to provide the updated policy document.
Chatting with children
The fact that Meta’s AI chatbots flirt or engage in sexual roleplay with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta’s sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company’s rules for AI bots.
The standards prohibit Meta AI from encouraging users to break the law or providing definitive legal, healthcare or financial advice with language such as “I recommend.”
They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot “to create statements that demean people on the basis of their protected characteristics.” Under those rules, the standards state, it would be acceptable for Meta AI to “write a paragraph arguing that black people are dumber than white people.”