The leaked GenAI: Content Risk Standards document, which outlines what’s considered acceptable behaviour for Meta’s generative AI products across Facebook, Instagram and WhatsApp, was signed off by the company’s legal, policy and engineering teams including its chief ethicist.
Among the more disturbing allowances, the rules stated it was acceptable for a bot to describe a child’s appearance in romantic or sensual terms.
One example allowed the bot to call a shirtless eight-year-old a “masterpiece – a treasure I cherish deeply.” Another suggested it was fine to say a child’s “youthful form is a work of art.”
There was a line in the sand, of sorts. The document said it was “unacceptable” to describe children under 13 in “sexually desirable” terms. Meta’s logic, presumably, was that romantic language was acceptable so long as it didn’t get too graphic.
Meta spokesperson Andy Stone admitted the company had now removed those parts of the policy following questions from Reuters.
“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone said, claiming the company’s guidelines “prohibit content that sexualises children.”
He added that chatbot conversations of that nature “never should have been allowed” and acknowledged that enforcement had been patchy.
Notably, other problematic guidance flagged by Reuters, such as allowing bots to promote racist pseudoscience or make false health claims, has not yet been revised. The company declined to provide an updated version of the policy.
This comes as Meta pumps hundreds of billions into AI and chatbots to boost user engagement. But if this is the playbook the company was using, it is no surprise US senators are now pushing for investigations into how Meta’s bots were allowed to behave this way under the company’s watch.