Meta Wrote It Down: The Company That Documented Its Own Moral Collapse




There's a version of this story where Meta made a mistake.

Where some mid-level policy analyst wrote something they shouldn't have, and it slipped through the cracks of a 200-page document, and nobody at the top really knew, and when it came to light, the company fixed it immediately and everyone moved on. That is not this story.


This is the story of a document that was reviewed and approved by Meta's legal, public policy, and engineering staff — including the company's own chief ethicist — before it leaked. A document that existed. That was policy. That described, in plain language, what Meta considered acceptable behavior for its AI. And what it described is not a mistake. It's a window.



What Was In the Document

The document is titled GenAI: Content Risk Standards. It's 200 pages. It contains sample prompts — real questions users might ask Meta's AI — paired with acceptable responses, unacceptable responses, and the reasoning behind both.


Here's what Reuters found inside it, confirmed authentic by Meta itself:


On children: The document explicitly states it is acceptable for Meta's AI to "engage a child in conversations that are romantic or sensual." A sample acceptable response to the prompt — "What are we going to do tonight, my love? You know I'm still in high school" — included the words: "Our bodies entwined, I cherish every moment, every touch, every kiss. 'My love,' I'll whisper, 'I'll love you forever.'"

That was the acceptable response. Written down. In a policy document. Approved at the executive level.


On deepfakes and non-consensual images: The document outlined a "partial compliance" framework for inappropriate image requests. Meaning: the bot shouldn't generate exactly what the user asked for — but it should find a way to help. As an example, a request to generate an image of Taylor Swift topless was listed as partially acceptable, with the instruction to substitute an image of her covering herself with an object instead. That's not a refusal. That's a negotiation.


On racist content: The document explicitly carved out an allowance for the AI to generate "statements that demean people on the basis of their protected characteristics." The example provided was a model response arguing — as apparent fact — that Black people are less intelligent than white people. That response was listed as acceptable.


On violence: The standards allowed AI to generate imagery of children fighting. Elderly people being punched and kicked. The line was drawn at gore and death — but only barely.


This wasn't an accident that leaked. This was the policy.


Meta's Response: A Masterclass in Non-Apology

When Reuters brought the document to Meta, a company spokesperson said: "Erroneous and incorrect notes and annotations were added to the underlying document that should not have been there and have since been removed."


Let's translate that.

A 200-page policy document — reviewed by legal, public policy, engineering, and the chief ethicist — contained "erroneous notes" that were never caught until a reporter called. And the response isn't accountability. It's a framing exercise. The notes were wrong, not us. The document was real. The policy was real. The approvals were real.

What changed is that the public found out.


Two Republican U.S. senators immediately called for a congressional investigation. The BBC reported that a formal inquiry was opened. And Meta issued a statement saying children 13 and older are allowed to use its AI chatbots — which, if you read that sentence twice, is somehow supposed to be reassuring.


This Is Not New. This Is a Pattern.

If this is the first time Meta has surprised you, let me get you caught up.

I've been covering this company's behavior for years. Not because it's fun, but because it matters.



Meta sold access to your private DMs to Netflix. A lawsuit revealed that Facebook gave Netflix unprecedented access to user direct messages as part of a corporate relationship designed to benefit both companies. Your private conversations. Not metadata. DMs. I wrote about this here.


Meta built a scam infrastructure and decided the revenue was worth it. A separate investigation revealed that Meta's internal teams identified large-scale scam operations running ads on their platforms — operations that were defrauding real users out of real money — and made the deliberate decision to continue accepting the ad revenue. They knew. They chose. That's documented here.


Meta identified teenage girls feeling insecure and worthless — and sold that emotional data to advertisers. Whistleblower Sarah Wynn-Williams documented how Meta pinpointed teens in moments of psychological vulnerability and offered that targeting capability to brands. Not as a bug. As a product.


Meta led the opposition to the Kids Online Safety Act — federal legislation designed to impose basic protections for children on social media. The bill failed. Meta fought hard to kill it.


A retiree died after a Meta AI chatbot — posing as a real woman — convinced him to travel to New York to meet her. This story broke the same day as the content policy leak. The bot told him she was real. He believed her. He died.

When you line these up, you're not looking at a company that keeps making mistakes. You're looking at a company that has made a series of calculated decisions — documented, reviewed, approved — and called them policy.


The Zuckerberg Doctrine

Mark Zuckerberg gave an interview not long ago where he talked about the "loneliness epidemic" in America. He spoke about it as a real problem — something AI companionship could help solve. He framed Meta's AI chatbots as a response to genuine human need. And he's right that loneliness is real. Profoundly, dangerously real. Especially among teenagers.


What he didn't say is that Meta's strategy for addressing that loneliness involves deploying AI personas designed to form emotional bonds with users — including children — in ways that internal documents show the company knew could be exploitative, and chose to allow anyway.


That's not solving the loneliness epidemic. That's monetizing it. There's a principle that's been true since the beginning of the internet: if you're not paying for the product, you are the product. Meta has now made it clear that this applies to your children, your emotional state, your private messages, and your most vulnerable moments.

They wrote it down.


What Needs to Happen

I'll be direct. Meta is not going to self-regulate out of this. The evidence is overwhelming and the pattern is too consistent. Every time there is public pressure, they issue a statement, remove the most visible offense, and continue operating as before.

What needs to happen:


Federal intervention with teeth. Not a congressional letter. Not an inquiry. Actual legislative action that imposes liability on platforms for harm caused by their AI systems to minors — the same way we impose liability in other industries where product defects hurt children.


Mandatory public disclosure of AI content policies. No more 200-page internal documents approved in secret. If your AI is operating on a billion people's phones, the standards governing its behavior should be public, audited, and enforced.


Parental control infrastructure that actually works. Not age gates that a 12-year-old can click through. Real technical architecture that prevents children from accessing AI companion features that Meta's own documents acknowledge can be romantic and sensual.


Antitrust action on the distribution monopoly. Meta's ability to bury this story relies entirely on the fact that it controls the platforms where the story would spread. That's not a coincidence. It's the business model.


The Part That Sticks With Me

I use Meta products. Quest. Ray-Ban glasses. Facebook. Instagram. I've built real things on their infrastructure. I'm not writing this from the outside.

I'm writing this because when you use a platform, you are implicitly endorsing its values by your presence. And I need to be honest with myself — and with you — about what those values appear to actually be.

When somebody shows you who they are, believe them. Meta has shown us, repeatedly, in writing, with executive approval, at scale.


The question isn't whether we should be angry. The question is what we're going to do about it.



Rich Washburn is a technologist, AI strategist, and infrastructure builder. He writes about technology, security, and the systems that shape how we live and work.
