Regulation should focus on how AI is used, not whether it can continue to develop. Once the technology is further along, it will become clearer how it should be regulated.
The world has been dazzled by sudden major advances in artificial intelligence. But now some prominent and well-placed people are responding with misguided demands to pull the emergency brake.
An open letter calling “on all AI labs to immediately pause for at least six months the training of AI systems” has received thousands of signatures, including those of tech icons like Elon Musk and Steve Wozniak, many CEOs, and prominent scholars. Geoffrey Hinton, one of the pioneers of the “deep learning” methods behind the recent advances, was recently asked by CBS News about AI “wiping out humanity.” And, as always, many commentators fear that AI will eliminate the need for human workers. A 2022 Ipsos survey found that only around one-third of Americans think AI-based products and services offer more benefits than drawbacks.
Those calling for a pause emphasize that “generative AI” is different from anything that has come before. OpenAI’s ChatGPT is so advanced that it can convincingly converse with a human, draft essays better than many undergraduates, and write and debug computer code. The Financial Times recently found that ChatGPT (along with Bard, Google’s own experimental chatbot) can tell a joke at least passably well, write an advertising slogan, make stock picks, and imagine a conversation between Xi Jinping and Vladimir Putin.

It is understandable that a new technology with such seemingly vast powers would raise concerns. But much of the distress is misplaced. AI’s current detractors tend to understate the pace of technological change that advanced economies have already been living through. In 1970, US employment was roughly evenly divided across occupations, with low-skill, medium-skill, and high-skill jobs accounting for, respectively, 31%, 38%, and 30% of total hours worked. A half-century later, middle-skill employment has fallen by an astonishing 15 percentage points.

This change was largely the result of technological advances that allowed robots and software to perform tasks previously carried out by manufacturing workers and clerks. The hollowing out of the middle class is one of the most important economic developments in living memory. It has transformed life in the Rust Belt and in offices across the country, with profound effects on American society and politics.
Even the newest technology is not as new as it seems. Chatbots and virtual assistants were commonplace before ChatGPT captured headlines. While your bank’s online customer-service assistant and your phone’s autocomplete function cannot pass the Turing test, both use natural-language processing to try to converse with you, just like ChatGPT. My kids try to talk to our Amazon Alexa the same way they try to talk to human beings.
Those who are worried enough about AI to advocate slamming on the brakes are likely overstating the speed with which it will transform the economy. As impressive as it is, ChatGPT gets a lot wrong. When I entered the query, “Please let me know a few articles Michael Strain has written about economics,” it came back with five articles. All seemed plausible, but I didn’t write any of them. For hospitals, law firms, newspapers, think tanks, universities, government agencies, and many other institutions, such errors will never be acceptable.
The speed of the transformation will also be limited by barriers within businesses. Attorneys tell me that they don’t want their firms using these technologies because they cannot risk releasing confidential information online. The same will be true, for example, of hospitals and patient data. AI providers will create enterprise solutions. But if an AI solution cannot be trained on data from other firms in the same industry, will it be as powerful and useful as optimists suggest? And, as a general matter, it takes longer than people think for businesses to find ways to put new technologies to productive use.

The open letter calls for a six-month pause to allow policymakers and regulators to catch up. But regulators are always playing catch-up, and if the biggest concerns about AI are valid, a six-month pause would be of little help. Moreover, if those concerns are indeed overblown, pausing could do lasting damage by undermining US competitiveness or ceding the field to less responsible actors. The letter argues that if a pause “cannot be enacted quickly, governments should step in and institute a moratorium.” Good luck getting China to comply with that.

Of course, there are times when governments would want to halt a technology’s development. But this is not one of them. Regulation should focus on how AI is used, not whether it can continue to develop. Once the technology is further along, it will become clearer how it should be regulated. Is there a chance that AI will “wipe out” humanity? A tiny one, I suppose. But generative AI would hardly be the first technology to imply that risk.

If there is one thing the doomsayers get right, it is that generative AI has the potential to affect large swaths of the economy – like electricity and the steam engine before it.
I would not be surprised if AI eventually became as important as the smartphone or the web browser, with all that that entails for workers, consumers, and existing business models.

The right response to economic disruption is not to stop the clock. Rather, policymakers should focus on finding ways to increase participation in economic life. Can earnings subsidies be better used to make work more financially attractive for people without college degrees? Can community colleges and training programs build skills that allow workers to use AI to increase their own productivity? What policies and institutions are standing in the way of greater economic participation?
We must remember that creative destruction does not only destroy. It also creates, often in powerful and unexpected ways. Our future with AI will have storm clouds. But overall, it will be bright.
- Michael R. Strain is Director of Economic Policy Studies at the American Enterprise Institute. - Project Syndicate