I just found out about something that sounds straight out of 1984 but actually happened recently at a school in the UK. Literally, an AI censored Orwell.



In Manchester, a high school decided to use artificial intelligence to review its library. The AI produced a list of 193 books to remove, with a reason for each one. And guess which book made the list: George Orwell's 1984, flagged for "themes of torture, violence, and sexual coercion." The irony is hard to miss: 1984 is precisely about a government that controls everything, rewrites history, and decides what citizens are allowed to read. And here was an AI doing exactly that.

The school librarian thought it was absurd and refused to implement the suggestions. But the administration disagreed. They accused him of introducing inappropriate books and reported him to local authorities. The stress was so intense that he took sick leave and eventually resigned. And worst of all: authorities concluded that he had indeed violated child safety procedures. Those who resisted the AI lost their jobs. Those who accepted it without question faced no consequences.

It later emerged that the school knew perfectly well that everything came from the AI. In internal documents, they wrote that although the categories were generated by AI, "we believe this classification is approximately accurate." Approximately. An institution delegated a fundamental decision to an algorithm it didn’t even understand, and an administrator approved it without reviewing anything.

The same week, something completely different happened on Wikipedia. While a school let AI decide what its students could read, the world’s largest online encyclopedia made the opposite decision: banning AI from writing or rewriting content. The vote was 44 in favor, 2 against. The reason was an AI account called TomWikiAssist that had been automatically creating articles.

The mathematical problem is this: an AI can write an article in seconds, but a human volunteer needs hours to verify the facts, check the sources, and polish the writing. Wikipedia simply doesn't have enough editors to review everything an AI could produce. But there's something deeper: Wikipedia is one of the main data sources for training AI models. AI learns from Wikipedia, then writes new articles for Wikipedia, which in turn train the next generation of AI. If AI-generated text contaminates that data, a feedback loop forms in which each pass amplifies the contamination. Bad data poisons the AI, and the AI poisons the data.
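You can see how that feedback loop plays out with a toy simulation. This is purely an illustrative sketch: the corpus size, the share of AI-written content per generation, and the junk rate are all assumptions I picked for the example, not real Wikipedia figures.

```python
import random

def simulate_contamination(generations=10, corpus_size=1000,
                           ai_share=0.5, junk_rate=0.05, seed=0):
    """Toy model of the training loop: each generation, an AI trained on
    the current corpus rewrites a share of it. Most rewrites copy existing
    items, but some degrade into junk. Returns the fraction of surviving
    human-written items after each generation.
    All parameters are illustrative assumptions, not measured data."""
    rng = random.Random(seed)
    corpus = ["human"] * corpus_size          # start fully human-written
    history = []
    for _ in range(generations):
        n_ai = int(corpus_size * ai_share)
        # AI output: sampled from what it was trained on, minus some quality.
        ai_items = ["junk" if rng.random() < junk_rate else rng.choice(corpus)
                    for _ in range(n_ai)]
        rng.shuffle(corpus)                   # replace a random subset
        corpus = corpus[n_ai:] + ai_items
        history.append(corpus.count("human") / corpus_size)
    return history

history = simulate_contamination()
print("human share per generation:", [round(h, 2) for h in history])
```

Even with a tiny 5% junk rate, the human-written share only ever goes down: junk that enters the corpus gets resampled by the next generation, so the damage compounds instead of washing out.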

That same week, OpenAI also backtracked. It indefinitely shelved ChatGPT's "adults-only" mode, a feature Sam Altman had personally announced months earlier. The internal health committee voted unanimously against it. Their concerns were clear: users would develop unhealthy emotional attachments to the AI, and minors would find ways around age verification. Someone put it more bluntly: without improvements, this could become a "sexy suicide coach." The age verification system has an error rate above 10%. With 800 million weekly active users, that means tens of millions of people misclassified.
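The arithmetic behind that "tens of millions" is worth making explicit, using the two figures the article cites:

```python
weekly_users = 800_000_000   # weekly active users cited above
error_rate = 0.10            # "over 10%" error rate cited above

misclassified = int(weekly_users * error_rate)
print(f"{misclassified:,} users misclassified per week")  # 80,000,000
```

At minimum, 80 million people per week end up on the wrong side of the age gate, in one direction or the other.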

It wasn’t just that. They also removed Sora, the AI video tool, and the instant payment feature of ChatGPT. Altman said they were focusing on the core business and removing "secondary tasks." But OpenAI is preparing to go public. A company striving to list on the stock market and aggressively removing controversial features probably isn’t being entirely honest with itself.

Look, five months earlier, Altman had said users should be treated like adults. Five months later, it turned out that even his own company didn't know what its AI could or couldn't do. Not even the people who build AI have that answer.

Seeing these three events together, the problem becomes obvious: the speed at which AI produces content is no longer on the same scale as the speed at which humans can review it. That Manchester administrator wasn't acting out of trust in the AI; he simply didn't want to pay for the time it would take a librarian to assess 193 books. It's an economic problem. The cost of generation is nearly zero. The cost of review is borne entirely by humans.
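A back-of-the-envelope estimate makes the asymmetry concrete. The 193 books come from the article; the hours-per-book figure is my own assumption for illustration, not a number anyone involved has reported:

```python
books = 193            # books the AI flagged (from the article)
hours_per_book = 6     # assumed average time to read and assess one book
workday_hours = 8      # assumed length of a working day

review_hours = books * hours_per_book
workdays = review_hours / workday_hours
print(f"{review_hours} librarian-hours, roughly {workdays:.0f} workdays, "
      f"versus seconds of AI generation time")
```

Whatever numbers you plug in, the conclusion is the same: months of human labor on one side of the ledger, seconds of compute on the other.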

So each institution reacted in the most drastic way available: Wikipedia banned it outright; OpenAI shut down entire product lines. None of these were well-thought-out solutions; they were emergency measures to patch holes. And this is becoming the norm.

AI models update every few months. There is no coherent international framework for what AI can and cannot touch. Each institution draws its own internal lines, and those lines are contradictory and uncoordinated. Meanwhile, the gap only widens. AI keeps accelerating; review capacity doesn't grow. At some point this will blow up into something much worse than censoring Orwell. And when it does, it will probably be too late to draw the line.