Sam Altman is an enigmatic personality out to capture the world
I first met Sam Altman in the summer of 2019, days after Microsoft agreed to invest $1 billion in his 3-year-old startup, OpenAI. At his suggestion, we had dinner at a small, decidedly modern restaurant not far from his home in San Francisco.
Halfway through the meal, he held up his iPhone so I could see the contract he had spent the past several months negotiating with one of the world’s largest tech companies. It said Microsoft’s billion-dollar investment would help OpenAI build what was called artificial general intelligence, or AGI, a machine that could do anything the human brain could do.
Later, as Altman sipped a sweet wine in lieu of dessert, he compared his company to the Manhattan Project. As if he were chatting about tomorrow’s weather forecast, he said the US effort to build an atomic bomb during World War II had been a “project on the scale of OpenAI — the level of ambition we aspire to.”
He believed AGI would bring the world prosperity and wealth like no one had ever seen. He also worried that the technologies his company was building could cause serious harm — spreading disinformation, undercutting the job market. Or even destroying the world as we know it.
“I try to be upfront,” he said. “Am I doing something good? Or really bad?”
In 2019, this sounded like science fiction.
In 2023, people are beginning to wonder if Altman was more prescient than they realised.
Now that OpenAI has released an online chatbot called ChatGPT, anyone with an internet connection is a click away from technology that will answer burning questions about organic chemistry, write a 2,000-word term paper on Marcel Proust and his madeleine, or even generate a computer program that drops digital snowflakes across a laptop screen — all with a skill that seems human.
As people realise that this technology is also a way of spreading falsehoods or even persuading people to do things they should not do, some critics are accusing Altman of reckless behaviour.
This past week, more than a thousand AI experts and tech leaders called on OpenAI and other companies to pause their work on systems such as ChatGPT, saying they present “profound risks to society and humanity.”
And yet, when people act as if Altman has nearly realised his long-held vision, he pushes back.
“The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term,” he told me on a recent afternoon. There is time, he said, to better understand how these systems will ultimately change the world.
Many industry leaders, AI researchers and pundits see ChatGPT as a fundamental technological shift, as significant as the creation of the web browser or the iPhone. But few can agree on the future of this technology.
Some believe it will deliver a utopia where everyone has all the time and money ever needed. Others believe it could destroy humanity. Still others spend much of their time arguing that the technology is never as powerful as everyone says it is, insisting that neither nirvana nor doomsday is as close as it might seem.
Altman, a slim, boyish-looking, 37-year-old entrepreneur and investor from the suburbs of St. Louis, sits calmly in the middle of it all. As CEO of OpenAI, he somehow embodies each of these seemingly contradictory views, hoping to balance the myriad possibilities as he moves this strange, powerful, flawed technology into the future.
That means he is often criticised from all directions. But those closest to him believe this is as it should be. “If you’re equally upsetting both extreme sides, then you’re doing something right,” said OpenAI’s president, Greg Brockman.
To spend time with Altman is to understand that Silicon Valley will push this technology forward even though it is not quite sure what the implications will be. At one point during our dinner in 2019, he paraphrased Robert Oppenheimer, leader of the Manhattan Project, who believed the atomic bomb was an inevitability of scientific progress. “Technology happens because it is possible,” he said. (Altman pointed out that, as fate would have it, he and Oppenheimer share a birthday.)
The warning, sent with the driving directions, was “Watch out for cows.”
Altman’s weekend home is a ranch in Napa, California, where farmhands grow wine grapes and raise cattle.
During the week, Altman and his partner, Oliver Mulherin, an Australian software engineer, share a house on Russian Hill in the heart of San Francisco. But as Friday arrives, they move to the ranch, a quiet spot among the rocky, grass-covered hills. Their 25-year-old house has been remodeled to look both folksy and contemporary. The Cor-Ten steel that covers the outside walls is rusted to perfection.
As you approach the property, the cows roam across both the green fields and gravel roads.
Altman is a man who lives with contradictions, even at his getaway home: a vegetarian who raises beef cattle. He says his partner likes them.
On a recent afternoon walk at the ranch, we stopped to rest at the edge of a small lake. Looking out over the water, we discussed, once again, the future of AI.
His message had not changed much since 2019. But his words were even bolder.
He said his company was building technology that would “solve some of our most pressing problems, really increase the standard of life and also figure out much better uses for human will and creativity.”
He is very much a product of the Silicon Valley that grew so swiftly and so gleefully in the mid-2010s. As president of Y Combinator, a Silicon Valley startup accelerator and seed investor, from 2014 to 2019, he advised an endless stream of new companies — and was shrewd enough to personally invest in several that became household names, including Airbnb, Reddit and Stripe. He takes pride in recognising when a technology is about to reach exponential growth — and then riding that curve into the future.
His longtime mentor, Paul Graham, founder of Y Combinator, explained Altman’s motivation like this: “Why is he working on something that won’t make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does. The other is that he likes power.”
Graham, who worked alongside Altman for a decade, says: “He has a natural ability to talk people into things. If it isn’t inborn, it was at least fully developed before he was 20. I first met Sam when he was 19, and I remember thinking at the time: ‘So this is what Bill Gates must have been like.’”
Altman is not a coder or an engineer or an AI researcher. He is the person who sets the agenda, puts the teams together and strikes the deals. As the president of Y Combinator, he expanded the firm with near abandon, starting a new investment fund and a new research lab and stretching the number of companies advised by the firm into the hundreds each year.
He also began working on several projects outside the investment firm, including OpenAI, which he founded as a nonprofit in 2015 alongside a group that included Elon Musk. By Altman’s own admission, Y Combinator grew increasingly concerned he was spreading himself too thin.
He resolved to refocus his attention on a project that would, as he put it, have a real impact on the world. He considered politics, but settled on AI.
Altman believed, according to his younger brother Max, that he was one of the few people who could meaningfully change the world through AI research, as opposed to the many people who could do so through politics.
In 2019, just as OpenAI’s research was taking off, Altman grabbed the reins, stepping down as president of Y Combinator to concentrate on a company with fewer than 100 employees that was unsure how it would pay its bills.
Within a year, he had restructured OpenAI, attaching a for-profit arm to the original nonprofit. That way, he could pursue the money the company would need to build a machine that could do anything the human brain could do.
In March, Altman tweeted out a selfie, bathed by a pale-orange flash, that showed him smiling between a blond woman giving a peace sign and a bearded guy wearing a fedora.
The woman was Canadian singer Grimes, Musk’s former partner, and the hat guy was Eliezer Yudkowsky, a self-described AI researcher who believes, perhaps more than anyone, that AI could one day destroy humanity.
The selfie — snapped by Altman at a party his company was hosting — shows how close he is to this way of thinking. But he has his own views on the dangers of AI.
Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, another lab intent on building artificial general intelligence.
He also helped spawn the vast online community of rationalists and effective altruists who are convinced that AI is an existential risk. This surprisingly influential group is represented by researchers inside many of the top AI labs, including OpenAI. They don’t see this as hypocrisy: Many of them believe that because they understand the dangers more clearly than anyone else, they are in the best position to build this technology.
Altman argues that rather than developing and testing the technology entirely behind closed doors before releasing it in full, it is safer to gradually share it so everyone can better understand risks and how to handle them.
He told me that it would be a “very slow takeoff.”
When I asked Altman if a machine that could do anything the human brain could do would eventually drive the price of human labor to zero, he demurred. He said he could not imagine a world where human intelligence was useless.
If he’s wrong, he thinks he can make it up to humanity.
He rebuilt OpenAI as what he called a capped-profit company. This allowed him to pursue billions of dollars in financing by promising a profit to investors such as Microsoft. But these profits are capped, and anything earned beyond the cap will be pumped back into the OpenAI nonprofit that was founded back in 2015.
His grand idea is that OpenAI will capture much of the world’s wealth through the creation of AGI and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.
If AGI does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.
But as he once told me: “I feel like the AGI can help with that.”
This article originally appeared in The New York Times