Big Tech is bad. Big AI will be worse

History has shown that the concentration of AI development in the hands of two powerful companies will lead to the technology being deployed in ways that hurt humanity

By Daron Acemoglu and Simon Johnson

Published: Wed 14 Jun 2023, 11:01 PM

Last updated: Wed 14 Jun 2023, 11:02 PM

Tech giants Microsoft and Alphabet/Google have seized a large lead in shaping our potentially AI-dominated future. This is not good news. History has shown us that when the distribution of information is left in the hands of a few, the result is political and economic oppression. Without intervention, this history will repeat itself.

In just a few months, ChatGPT, a form of generative artificial intelligence into which Microsoft plans to invest $10 billion, broke speed records in becoming a household name. And last month, Sundar Pichai, CEO of Alphabet/Google, unveiled a suite of AI tools for email, spreadsheets and drafting all manner of text. While there is some discussion as to whether Meta’s recent decision to give away its AI computer code will accelerate its progress, the reality is that all competitors to Alphabet and Microsoft remain far behind.

The fact that these companies are attempting to outpace each other, in the absence of externally imposed safeguards, should give the rest of us even more cause for concern, given the potential for AI to do great harm to jobs, privacy and cybersecurity. Arms races without restrictions generally do not end well.

History has repeatedly demonstrated that control over information is central to who has power and what they can do with it. At the beginning of writing in ancient Mesopotamia, most scribes were the sons of elite families, primarily because education was expensive. In medieval Europe, the clergy and nobility were much more likely to be literate than ordinary people, and they used this advantage to reinforce their social standing and legitimacy.

Literacy rates rose alongside industrialization, although those who decided what the newspapers printed and what people were allowed to say on the radio, and then on television, were hugely powerful. But with the rise of scientific knowledge and the spread of telecommunications came a time of multiple sources of information and many rival ways to process facts and reason out implications. Access to facts about the outside world weakened and ultimately helped to destroy Soviet control over Poland, Hungary, East Germany and the rest of its former sphere of influence.

Starting in the 1990s, the internet offered even lower-cost ways to express opinions. But over time the channels of communication concentrated into a few hands, including those of Facebook, whose algorithm exacerbated political polarization and in some well-documented cases also fanned the flames of ethnic hatred. In authoritarian regimes such as China, the same technologies have turned into tools of totalitarian control.

With the emergence of AI, we are about to regress even further. Some of this has to do with the nature of the technology. Instead of assessing multiple sources, people are increasingly relying on the nascent technology to provide a singular, supposedly definitive answer. There is no easy way to access the footnotes or links that let users explore the underlying sources.

This technology is in the hands of two companies that are philosophically rooted in the notion of “machine intelligence,” which emphasises the ability of computers to outperform humans in specific activities. DeepMind, a company now owned by Google, is proud of developing algorithms that can beat human experts at games such as chess and Go.

This philosophy was naturally amplified by a recent (bad) economic idea that the singular objective of corporations should be to maximise short-term shareholder wealth. Together, these ideas are cementing the notion that the most productive applications of AI replace humankind. Doing away with grocery store clerks in favour of self-checkout kiosks does very little for the productivity of those who remain employed, for example, while also annoying many customers. But it makes it possible to fire workers and tilt the balance of power further in favour of management.

We believe the AI revolution could even usher in the dark prophecies envisioned by Karl Marx over a century ago. The German philosopher was convinced that capitalism naturally led to monopoly ownership over the “means of production” and that oligarchs would use their economic clout to run the political system and keep workers poor.

Fortunately, Marx was wrong about the 19th-century industrial age that he inhabited. Industries emerged much faster than he expected, and new firms disrupted the economic power structure. Countervailing social powers developed in the form of trade unions and genuine political representation for a broad swath of society. And governments developed the ability to regulate industrial excesses. The result was greater competition, higher wages and more robust democracies.

Today, those countervailing forces either don’t exist or are greatly weakened. Generative AI requires even deeper pockets than textile factories and steel mills. As a result, most of its obvious opportunities have already fallen into the hands of Microsoft, with its market capitalization of $2.4 trillion, and Alphabet, worth $1.6 trillion.

At the same time, powers like trade unions have been weakened by 40 years of deregulation ideology (Ronald Reagan, Margaret Thatcher, two Bushes and even Bill Clinton). For the same reason, the US government’s ability to regulate anything larger than a kitten has withered. Extreme polarization, fear of killing the golden (donor) goose or undermining national security means that most members of Congress would still rather look away.

To prevent data monopolies from ruining our lives, we need to mobilise effective countervailing power — and fast.

Congress needs to assert individual ownership rights over underlying data that is relied on to build AI systems. If Big AI wants to use our data, we want something in return to address problems that communities define and to raise the true productivity of workers. Rather than machine intelligence, what we need is “machine usefulness,” which emphasises the ability of computers to augment human capabilities. This would be a much more fruitful direction for increasing productivity. By empowering workers and reinforcing human decision making in the production process, it also would strengthen social forces that can stand up to big tech companies. It would also require a greater diversity of approaches to new technology, thus making another dent in the monopoly of Big AI.

We also need regulation that protects privacy and pushes back against surveillance capitalism, the pervasive use of technology to monitor what we do, including whether we comply with “acceptable” behaviour as defined by employers or by police interpretation of the law, compliance that AI can now assess in real time. There is a real danger that AI will be used to manipulate our choices and distort our lives.

Finally, we need a graduated system for corporate taxes, so that tax rates are higher for companies when they make more profit in dollar terms. Because smaller firms would each earn less profit and therefore face lower rates, such a tax system would put shareholder pressure on tech titans to break themselves up. More competition would help by creating a diversity of ideas and more opportunities to develop a pro-human direction for digital technologies.

If these companies prefer to remain in one piece, the elevated tax on their profits can finance public goods, particularly education, that will help people cope with new technology and support a more pro-human direction for technology, work and democracy.

Our future should not be left in the hands of two powerful companies that build ever larger global empires based on using our collective data without scruple and without compensation.

This article originally appeared in The New York Times.
