One hopes that artificial intelligence someday will overcome human stupidity. But it will never get the chance if we destroy ourselves first.
Since returning from this year’s World Economic Forum meeting in Davos, I have been asked repeatedly for my biggest takeaways. Among the most widely discussed issues this year was artificial intelligence – especially generative AI (GenAI). With the recent adoption of large language models (like the one powering ChatGPT), there is much hope – and hype – about what AI could do for productivity and economic growth in the future.
To address this question, we must bear in mind that our world is dominated far more by human stupidity than by AI. The proliferation of megathreats – each an element in the broader “polycrisis” – confirms that our politics are too dysfunctional, and our policies too misguided, to address even the most serious and obvious risks to our future. These include climate change, which will have huge economic costs; failed states, which will make waves of climate refugees even larger; and recurrent, virulent pandemics that could be even more economically damaging than Covid-19.
Making matters worse, dangerous geopolitical rivalries are evolving into new cold wars – such as between the United States and China – and into potentially explosive hot wars, like those in Ukraine and the Middle East. Around the world, rising income and wealth inequality, partly driven by hyper-globalization and labor-saving technologies, have triggered a backlash against liberal democracy, creating opportunities for populist, autocratic, and violent political movements.
Unsustainable levels of private and public debt threaten to precipitate debt and financial crises, and we may yet see a return of inflation and stagflationary negative aggregate supply shocks. The broader trend globally is toward protectionism, de-globalization, de-coupling, and de-dollarization.
Moreover, the same brave new AI technologies that could contribute to growth and human welfare also have great destructive potential. They are already being used to push disinformation, deepfakes, and election manipulation into hyperdrive, and they are fueling fears of permanent technological unemployment and even starker inequality. The rise of autonomous weapons and AI-augmented cyber-warfare is equally ominous.
Blinded by the dazzle of AI, Davos attendees did not focus on most of these megathreats. This came as no surprise. The WEF zeitgeist is, in my experience, a counter-indicator of where the world is really heading. Policymakers and business leaders are there to flog their books and spew platitudes. They represent the conventional wisdom, which is often based on a rearview-mirror view of global and macroeconomic developments.
Hence, when I warned, at the WEF’s 2006 meeting, that a global financial crisis was coming, I was dismissed as a doomster. And when I predicted, in 2007, that many eurozone member states would soon face sovereign debt problems, I was verbally browbeaten by Italy’s finance minister. In 2016, when everyone asked me if the Chinese stock-market crash augured a hard landing that would cause a repeat of the global financial crisis, I argued – correctly – that China would have a bumpy but managed landing. Between 2019 and 2021, the faddish topic at Davos was the crypto bubble that went bust starting in 2022. Then the focus shifted to clean and green hydrogen, another fad that is already fading.
When it comes to AI, there is a very good chance that the technology will indeed change the world in the coming decades. But the WEF’s focus on GenAI already seems misplaced, considering that the AI technologies and industries of the future will go far beyond these models. Consider, for example, the ongoing revolution in robotics and automation, which will soon lead to the development of robots with human-like features that can learn and multitask the way we do. Or consider what AI will do for biotech, medicine, and ultimately human health and lifespans. No less intriguing are the developments in quantum computing, which will eventually merge with AI to produce advanced cryptography and cybersecurity applications.
The same long-term perspective also should be applied to climate debates. It is becoming increasingly likely that the problem will not be resolved with renewable energy – which is growing too slowly to make a significant difference – or with expensive technologies like carbon capture and sequestration and green hydrogen. Instead, we may see a fusion energy revolution, provided that a commercial reactor can be built in the next 15 years. This abundant source of cheap, clean energy, combined with inexpensive desalination and agro-tech, would allow us to feed the ten billion people who will be living on the planet by the end of this century.
Similarly, the revolution in financial services will not be centered around decentralized blockchains or cryptocurrencies. Rather, it will feature the kind of AI-enabled centralized fintech that is already improving payment systems, lending and credit allocation, insurance underwriting, and asset management. Materials science will lead to a revolution in new components, 3D-printed manufacturing, nanotechnologies, and synthetic biology. Space exploration and exploitation will help us save the planet and find ways to create extra-planetary modes of living.
These and many other technologies could change the world for the better, but only if we can manage their negative side effects, and only if they are used to resolve all the megathreats we face. One hopes that artificial intelligence someday will overcome human stupidity. But it will never get the chance if we destroy ourselves first. — Project Syndicate
Nouriel Roubini is Professor Emeritus of Economics at New York University’s Stern School of Business and Chief Economist and Co-Founder of Atlas Capital Team. He is the author of Megathreats: Ten Dangerous Trends That Imperil Our Future, and How to Survive Them (Little, Brown and Company, 2022).