ROGUE INTELLIGENCE
A Review of Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, by Karen Hao
If you’re like me, the San Francisco-based artificial intelligence company OpenAI has already come to your attention in fragments, like a TV show viewed through random episodes. Maybe you’ve heard the name of OpenAI’s co-founder Sam Altman and were aware of his collapsed partnership with Elon Musk. Perhaps you knew the company had been founded, with a staggering billion dollars in funding, to achieve Artificial General Intelligence—AI smarter than everyone, at everything—and to do so without ‘misalignment,’ that pesky unknown variable that could mean human extinction by a rogue intelligence. It’s possible you heard of Altman’s brief ouster in late 2023, an overthrow attempt that, at the time, looked similar to one of those failed coups, like Russia in 1991 or Turkey in 2016, that end up strengthening their target. In Karen Hao’s astonishing new book, Empire of AI, all is made clear. If you were waiting for a primer on what could be the most consequential company in history, wait no more.
By covid time, Microsoft had replaced Musk, and three in-house factions seemingly held each other in check. Research made miracles, Applied sold miracles, and Safety made sure the miracles didn’t destabilize humanity. These factions were themselves public-facing cover for a more basic division within the company: those who pump the brakes (‘Doomers’) versus those who floor the gas (‘Boomers’). Viewed in hindsight, it becomes sickeningly clear that the Boomers have always held home-field advantage. Any delay in product rollouts meant losing ground to competitors. Every fast rollout could justify itself by providing user feedback, crucial to building the next version.
Two months into the pandemic, OpenAI released the third edition of its Generative Pre-trained Transformer. GPT-3 gave the company its first runaway hit, a product so powerful its engineers watched the rollout light up the globe, time zone by time zone. It also marked a colossal backslide in scientific transparency and data quality. As proof of concept, GPT-3 marked AI’s version of the Manhattan Project’s Trinity test, the astonishing success that birthed a potentially existential arms race. OpenAI had long feared “waking up Google,” with its vastly larger infrastructure. GPT-3 woke up everyone. It took ChatGPT to ratify the breakneck speed with which every competitor would have to work. Robust scientific integrity does not always thrive at such velocity. GPT-4’s release in early 2023, Hao argues, marked an “industrywide shift from peer-reviewed to PR-reviewed.”
The sheer scale of tech-bro douchebag futurism is a thing to marvel at. AI remains plagued by ‘hallucinations,’ failures worsened by the Mad Cow tendency of learning machines to ingest their own outputs (author Jathan Sadowski calls this ‘Habsburg AI,’ after the European dynasty felled by its own inbreeding). Turns out tech titans can also get high on their own supply. To listen to Sam Altman speak publicly is to hear a man who does not receive pushback. His use of the word ‘like’ in nearly every sentence diverts attention from the garbled nonsense of the sentences themselves, endless torrents of buzzwords that would have almost any other speaker diagnosed with bipolar mania. When asked about the unrivaled environmental costs of achieving Artificial General Intelligence, the answer is the same one he gives for curing cancer and solving global inequality: AGI will fix all. And Altman’s answers tend to become industry answers.
If you’re squeamish about the destruction of everything, as I apparently am, the environmental details hurt. The Earthbound machinery of ‘the Cloud’ is an open secret, but it takes Hao to break down exactly how much extra stress the pursuit of AGI has placed on the world’s resources in just the last decade. The ravenousness of artificial intelligence, in all its guises, dwarfs the hungers of the Internet. The same fresh water humans need for survival is what AI data centers need to cool server arrays measured, quite literally, in hundreds of football fields. Because these data centers need to run 24/7, they get priority over human needs. During recent hurricanes in Florida and Texas, these centers kept working even as nearby hospitals evacuated patients. These hubs burn electricity at rates that feel satirical. One planned campus will require as much energy as New York City.
The human costs are somehow more wrenching. People excrete vast datasets, ‘data dumps,’ which are ‘scraped’ from the web and used to train machine learning. The impression is of a hazy wrong done to humanity collectively, an attack on intellectual property without specific victims. In fact, the victimization is outsourced to the global south. Hao travels to Kenya and Venezuela and Chile and meets with remote workers far down the food chain, people trying to eke out a baseline sustenance working in data annotation, or RLHF (Reinforcement Learning from Human Feedback), or even old-fashioned content moderation, the low-pay, high-horror grunt work undergirding all of social media. In this telling, AI is the 21st-century version of the cotton gin, another technological leap that made life vastly worse for capitalism’s lowest rungs. This isn’t some leftist screed from an obscure blog. Hao has written for the Wall Street Journal. Her critique is civil, exhaustive, and devastating. At one point she gets granular about the microbial ecosystem of Chile’s Atacama Desert, imperiled by an AI infrastructure ravenous for copper and land and fresh water. The author correctly calls this new thing by its proper name: imperialism.
Four months before his brief ouster, Altman took his employees to watch the film Oppenheimer. From day one, he had been comparing his company to the Manhattan Project, even in meetings with new hires. The regret of Oppenheimer’s post-bomb life didn’t seem to make an impact on Altman, although he frequently mentioned their shared birthday. OpenAI rented out a San Francisco theater for this showing, so employees could watch Oppenheimer without having to suffer the rabble. The company takes pains to insulate its staff from the city they inhabit. Employees are urged to take rideshares to and from the office instead of navigating past the tent cities lining the sidewalks. Hao pounces on “the utter contradiction of declaring the problem of creating and managing beneficial AGI possible, but San Francisco’s housing crisis too tough to tackle.”
One scene in Oppenheimer seems to directly address Sam Altman down in the audience. Germany’s surrender, two months before the Trinity test, abruptly voids the Manhattan Project’s entire premise: to get nukes before Hitler. Los Alamos staff voice moral qualms about continuing. It falls to Robert Oppenheimer (Cillian Murphy) to argue that the bomb will still save American soldiers, a meager motive compared to saving civilization from a nuclear Nazi Germany. In other words, nukes weren’t yet inevitable. They only became inevitable after someone actually set one off. Humanity sped past one final off-ramp.
Inevitability is part of OpenAI’s DNA. Its premise, ‘the Mission,’ has taken many forms in OpenAI’s improbable decade. There has only ever been one through line: inevitability. And AI is quite obviously inevitable in 2025. But was AI inevitable ten years ago? The author describes a recent meeting with a Chinese AI researcher who demolishes this myth. Up until ChatGPT, there was never any scenario in which China was a direct competitor to OpenAI. No Chinese researcher, no matter how talented, could ever have marshalled a billion dollars for an unproven technology. Here is where Hao’s sledgehammer swings with full force.
It was specifically OpenAI, with its billionaire origins, unique ideological bent, and Altman’s singular drive, network and fundraising talent that created a ripe combination for its particular vision to emerge and take over... In other words, everything OpenAI did was the opposite of inevitable. The explosive global costs of its massive deep learning models, and the perilous race it sparked across the industry to scale such models to planetary limits, could only have ever arisen from the one place it actually did.
So who is this guy? Altman only gets fuzzier the closer we pull in. His schmoozing skills front a cipher. For many within the company, this unknowability slowly curdled into something a bit more sinister, a range of incidents that, when compiled, added up to a perplexing pattern of serial dishonesty and manipulation, one that consistently signaled a weakening of Safety. The November 2023 ouster attempt seemed a last-ditch effort to return the company to its original ‘Doomer’ position. When Hao refers to the ‘misalignment’ within OpenAI caused by its CEO’s own behavior, it becomes clear: Sam Altman is the rogue intelligence. The rest of us are at the mercy of whatever lurks in the black box behind his smile.



