
It’s time to tear up big tech’s get-out-of-jail-free card

SEVEN YEARS AGO, I SHOWED THAT FACEBOOK SOLD DISCRIMINATORY ADS. ITS SYSTEM IS STILL BROKEN

Published: Tue 21 Feb 2023, 7:13 PM

Updated: Tue 21 Feb 2023, 7:23 PM

By Julia Angwin


(Mark Pernice for The New York Times)

I still remember the shock I felt when I was able to buy a Facebook ad aimed only at white house hunters — something the Fair Housing Act was designed to prevent — in just minutes. But even more shocking is that it took six years after my test for Meta, Facebook’s parent company, to comply with the act. As of today, the company still has not fully fixed its discriminatory ad system.

A major reason for the delay: Section 230, the notorious snippet of law embedded in the 1996 Telecommunications Act, which Meta and others have successfully used to protect themselves from a broad swath of legal claims.

The law, written when the number of websites could be counted in the thousands, was designed to shield early internet companies from libel lawsuits when their users inevitably slandered one another on bulletin boards and in chat rooms. But as the internet grew to billions of websites and services essential to daily life, courts and corporations expanded Section 230 into an all-purpose legal shield, one that operates much like the qualified immunity doctrine that often protects police officers from liability even for violence and killing.

As a journalist who has been covering the harms inflicted by technology for decades, I have watched how tech companies wield Section 230 to protect themselves against a wide array of allegations, including facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking — behaviour that they would have likely been held liable for in an offline context.

This week the Supreme Court will hear arguments in a case that could limit tech companies’ use of the legal shield of Section 230. The family of a victim of a Daesh terrorist shooting in Paris argued that Google’s algorithms should be held responsible for promoting Daesh videos. Google says it is protected by Section 230.

Big tech companies argue that any limit on the broad immunity they enjoy could break the internet and crush free speech, while advocates for reform argue that broad immunity gives tech companies an incentive to underinvest in harm reduction.

But there is a way to keep internet content freewheeling while revoking tech’s get-out-of-jail-free card: drawing a distinction between speech and conduct.

In this scenario, companies could continue to have immunity for the defamation cases that Congress intended, but they would be liable for illegal conduct that their technology enables.

Courts have already been heading in this direction, rejecting Section 230 immunity in a case in which Snapchat was sued over its design of a speed filter that encouraged three teenage boys to drive at extreme speed in the hope of receiving a virtual reward. They crashed into a tree and died.

In its Supreme Court brief, the Biden administration argues in favour of drawing a line between the benign algorithmic sorting that enables popular products like Google search and algorithmic manipulation that can violate the law, such as recommending terrorism-related content. “When an online service provider substantially adds or otherwise contributes to a third party’s information,” it may be held liable, the government argues.

I have seen firsthand how Section 230 enables tech companies to do little to address the harms their technologies can enable. In 2016 the civil rights attorney Rachel Goodman called to tell me that she had been trying unsuccessfully to warn Facebook that advertisers could use its ad targeting algorithms to violate the Fair Housing Act.

With Facebook’s automated ad targeting system, Ms. Goodman told me, advertisers could buy ads that were shown only to audiences Facebook had identified as white without anyone being the wiser. To test her claim, my colleague Terry Parris Jr. and I decided to buy an ad. We logged onto Facebook’s ad portal and selected an audience of people interested in buying a house.

We were then offered a drop-down menu with a choice of audiences to exclude from seeing our ad. We chose to exclude three “ethnic affinity” groups: African Americans, Asian Americans and Hispanics. After 15 minutes, our ad was approved.

We immediately deleted our test. We had just witnessed the face of 21st-century discrimination: silent attributes hidden in code. There was no need for a “whites only” label in the ad. Hardly anyone but white people would ever see the ad.

Facebook responded to public pressure by adding language to its fine print notifying advertisers that they were responsible for complying with civil rights laws. It said it would build an algorithm to stop advertisers from exploiting racial categories in housing, employment and credit ads. (The company’s algorithm didn’t address other protected categories in civil rights law, such as age and gender.)

After our article was published, several lawsuits were filed against Facebook alleging violations of the Fair Housing Act. Facebook responded with claims of immunity under Section 230. Its view was that the advertisers alone were liable for any illegality. Historically, courts had agreed. In 2008, for instance, a federal court of appeals ruled that Craigslist was not liable for discriminatory housing ads posted on its website.

Less than a year after Facebook started using a new algorithm, I was able to buy another housing ad targeted at white audiences. Facebook blamed this on a “technical failure” of its new algorithmic system. Soon after, I found dozens of companies using Facebook’s ad targeting system to exclude older people from seeing employment ads. Facebook argued that targeting job ads by age was acceptable when “used responsibly,” despite a federal law prohibiting employers from indicating an age preference in advertising.

In 2019, three years after I purchased that first discriminatory housing ad, Facebook reached a settlement to resolve several legal cases brought by individual job seekers and civil rights groups and agreed to set up a separate portal for housing, employment and credit ads, where the use of race, gender, age and other protected categories would be prohibited. The Equal Employment Opportunity Commission also reached settlements with several advertisers that had targeted employment ads by age.

But later that year, researchers at Northeastern University found that the new portal’s algorithm continued to distribute ads in a biased manner: “Ads for supermarket jobs were shown primarily to women, while ads for jobs in the lumber industry were presented mostly to men.”

This is the problem with automated systems: they can discriminate even when discriminatory variables are removed from their inputs, because the remaining data often lets them make surprisingly accurate inferences about the very attributes that were removed.
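The mechanism described above, sometimes called proxy discrimination, can be illustrated with a toy simulation on entirely synthetic, hypothetical data. A model here never sees the protected attribute, only a correlated stand-in (think of a zip-code grouping), yet it still reproduces the historical disparity. This is a minimal sketch, not a depiction of Facebook's actual systems:

```python
import random
from collections import defaultdict

random.seed(0)

# Synthetic records: protected attribute A, a proxy ("zip_group") that is
# 90% correlated with A, and a historical outcome biased against A=True.
data = []
for _ in range(10_000):
    a = random.random() < 0.5                              # protected attribute
    zip_group = a if random.random() < 0.9 else not a      # correlated proxy
    shown_ad = random.random() < (0.2 if a else 0.8)       # biased history
    data.append((a, zip_group, shown_ad))

# "Train" a trivial model that never sees A: predict the majority
# historical outcome within each zip_group.
counts = defaultdict(lambda: [0, 0])          # zip_group -> [not shown, shown]
for a, z, shown in data:
    counts[z][shown] += 1
predict = {z: counts[z][1] > counts[z][0] for z in counts}

# Measure the model's predicted ad-shown rate by protected group.
rate = {True: [0, 0], False: [0, 0]}          # A -> [total, predicted shown]
for a, z, _ in data:
    rate[a][0] += 1
    rate[a][1] += predict[z]
for a in (False, True):
    print(f"group A={a}: predicted ad-shown rate = {rate[a][1] / rate[a][0]:.2f}")
```

Even though the protected attribute is dropped before "training," the model's predictions skew sharply by group, because the proxy carries almost all of the same information.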

Meanwhile, reporters at the nonprofit newsroom I founded, The Markup, identified credit card advertisements targeted by age. Facebook said “enforcement is never perfect” and that it would remove the ads we identified. Because of Section 230, Meta kept winning in the courts.

Last year, Meta agreed to yet another settlement, this time with the U.S. Department of Justice. The company agreed to pay a fine of more than $115,000 and to build a new algorithm — just for housing ads — that would distribute such ads in a nondiscriminatory manner. But the settlement didn’t fix any inherent bias embedded in credit, insurance or employment ad distribution algorithms.

And so here we are, seven years after my first purchase, and Meta still hasn’t fully fixed its discriminatory ad system, even as its revenues have quadrupled. As Judge Frank Easterbrook wrote in 2003, Section 230 makes internet providers “indifferent to the content of information they host or transmit” and encourages them to “do nothing.”

Drawing a distinction between speech and conduct seems like a reasonable step toward forcing big tech to do something when algorithms can be proved to be illegally violating civil rights, product safety, antiterrorism and other important laws. Otherwise, without liability, the price of doing nothing will always outweigh the cost of doing something.

This article originally appeared in The New York Times.


