
How to lose friends and alienate people with AI

Inbox management can be mind-numbing. You might have wondered, couldn’t a robot do this?

Published: Sun 7 May 2023, 8:26 PM

Updated: Mon 8 May 2023, 12:22 AM

By Emma Goldberg


Richard Borge/The New York Times


Five hours is enough time to watch a New York Mets game. It is enough time to listen to the Spice Girls’ “Spice” album (40 minutes), Paul Simon’s “Paul Simon” album (42 minutes) and Gustav Mahler’s Symphony No. 3 (his longest). It is enough time to roast a chicken, text your friends that you’ve roasted a chicken and prepare for an impromptu dinner party.

Or you could spend it checking your email. Five hours is about how long many workers spend on email each day, plus another 90 minutes on the messaging platform Slack.

It’s a weird thing, workplace chatter such as email and Slack: It’s sometimes the most delightful and human part of the workday. It can also be mind-numbing to manage your inbox — to the extent you might wonder: Couldn’t a robot do this?

In late April, I decided to see what it would be like to let artificial intelligence into my life. I resolved to do an experiment. For one week, I would write all my work communication — emails, Slack messages, pitches, follow-ups with sources — through ChatGPT, the AI language model from the research lab OpenAI. I didn’t tell colleagues until the end of the week (except in a few instances of personal weakness). I downloaded a Chrome extension that drafted email responses directly into my inbox. But most of the time, I ended up writing detailed prompts into ChatGPT, asking it to be either witty or formal depending on the situation.

What resulted was a roller coaster, emotionally and in terms of the amount of content I was generating. I started the week inundating my teammates (sorry) to see how they would react. At a certain point, I lost patience with the bot and developed a newfound appreciation for phone calls.

My bot, unsurprisingly, couldn’t match the emotional tone of any online conversation. And because of hybrid work, I spent a lot of the week having online conversations.

The impulse to chat with teammates all day isn’t wrong. Most people know the thrill (and also, usefulness) of office friendships from psychologists, economists, TV sitcoms and our own lives; my colleague sends me photos of her baby in increasingly chic onesies every few days, and nothing makes me happier. But the amount of time workers feel they must devote to digital communication is undoubtedly excessive, and for some, easy to justify handing over to AI.

The release of generative AI tools has raised all sorts of enormous and thorny questions about work. There are fears about what jobs will be replaced by AI in 10 years. Paralegals? Personal assistants? Movie and television writers are on strike, and one issue they’re fighting for is limiting the use of AI by the studios. There are also fears about the toxic and untruthful information AI can spread in an online ecosystem already rife with misinformation.

The question driving my experiment was far narrower: Will we miss our old ways of working if AI takes over the drudgery of communication? And would my colleagues even know, or would they be Chatfished?

My experiment started on a Monday morning with a friendly Slack message from an editor in Seoul, South Korea, who had sent me the link to a study analyzing humour across more than 2,000 TED and TEDx Talks. “Pity the researchers,” the editor wrote to me. I asked ChatGPT to say something clever in reply, and the robot wrote: “I mean, I love a good TED Talk as much as the next person, but that’s just cruel and unusual punishment!”

While not at all resembling a sentence I would type, this seemed inoffensive. I hit send.

I had begun the experiment feeling that it was important to be generous in spirit toward my robot co-conspirator. By Tuesday morning, though, I found that my to-do list was straining the limits of my robot’s pseudo-human wit. It so happened that my colleagues on the Business desk were planning a party. Renee, one of the party planners, asked me if I could help draft the invitation.

“Maybe with your journalistic voice, you can write a nicer sentence than I just have,” Renee wrote to me on Slack.

I couldn’t tell her that my use of “journalistic voice” was a sore subject that week. I asked ChatGPT to craft a funny sentence about refreshments. “I am thrilled to announce that our upcoming party will feature an array of delicious cheese plates,” the robot wrote. “Just to spice things up a bit (pun intended), we may even have some with a business-themed twist!”

Renee was unimpressed and wrote to me: “OK, wait, let me get the ChatGPT to make a sentence.”

Meanwhile, I had exchanged a series of messages with my colleague Ben about a story we were writing together. In a moment of anxiety, I called him to let him know it was ChatGPT writing the Slack messages, not me, and he admitted that he had wondered whether I was annoyed at him. “I thought I’d broken you!” he said.

When we got off the phone, Ben messaged me: “Robot-Emma is very polite, but in a way I’m slightly concerned might hide her intention to murder me in my sleep.”

“I want to assure you that you can sleep peacefully knowing that your safety and security are not at risk,” my bot replied. “Take care and sleep well.”

Given the amount of time I spend online talking to colleagues — about the news, story ideas, occasionally “Love Is Blind” — it was disconcerting to strip those communications of any personality.

But it’s not at all far-fetched. Microsoft this year introduced a product, Microsoft 365 Copilot, that could handle all the tasks I asked ChatGPT to do and far more. I recently saw it in action when Jon Friedman, a corporate vice president at Microsoft, showed me how Copilot could read emails he had received, summarize them and then draft possible replies. Copilot can take notes during meetings, analyze spreadsheet data and identify problems that might arise in a project.

I asked Friedman if Copilot could mimic his sense of humor. He told me that the product wasn’t quite there yet, although it could make valiant comedic attempts. (He has asked it, for example, for pickleball jokes, and it delivered: “Why did the pickleball player refuse to play doubles? They couldn’t dill with the extra pressure!”)

Of course, he continued, Copilot’s purpose is loftier than mediocre comedy. “Most of humanity spends way too much time consumed with what we call the drudgery of work, getting through our inbox,” Friedman said. “These things just sap our creativity and our energy.”

Friedman recently asked Copilot to draft a memo, using his notes, recommending one of his employees for a promotion. The recommendation worked. He estimated that two hours’ worth of work was completed in six minutes.

To some, the time savings aren’t worth the peculiarity of outsourcing relationships.

“In the future, you’re going to get an email and someone will be like, ‘Did you even read it?’ And you’ll be like, ‘No,’ and then they’ll be like, ‘Well, I didn’t write the response to you,’” said Matt Buechele, 33, a comedy writer who also makes TikToks about office communications. “It’ll be robots going back and forth to each other, circling back.”

Buechele, in the middle of our phone interview, asked me unprompted about the email I had sent to him. “Your email style is very professional,” he said.

I confessed that ChatGPT had written the message to him requesting an interview.

“I was sort of like, ‘This is going to be the most awkward conversation of my life,’” he said.

This confirmed a fear I had been developing that my sources had started to think I was a jerk. One source, for example, had written me an effusive email thanking me for an article I had written and inviting me to visit his office when I was next in Los Angeles.

ChatGPT’s response was muted, verging on rude: “I appreciate your willingness to collaborate.”

I found myself mourning my past exclamation-point-studded internet existence. I know people think exclamation points are tacky. Writer Elmore Leonard advised measuring out “two or three per 100,000 words of prose.” Respectfully, I disagree. I often use two or three per two or three words of prose. I’m an apologist for digital enthusiasm. ChatGPT, it turns out, is more reserved.

For all the irritation I developed toward my robot overlord, I found that some of my colleagues were impressed by my newly polished digital persona, including my teammate Jordyn, who consulted me Wednesday for advice on an article pitch.

“I have a story idea I’d love to chat with you about,” Jordyn wrote to me. “It’s not urgent!!”

“I’m always up for a good story, urgent or not!” my robot replied. “Especially if it’s a juicy one with plot twists and unexpected turns.”

After a few minutes of back-and-forth, I was desperate to talk with Jordyn in person. I was losing patience with the bot’s cloying tone. I missed my own stupid jokes and my (comparatively) normal voice.

More alarmingly, ChatGPT is prone to hallucinating — putting words and ideas together that don’t actually make sense. While writing a note to a source about the timing for an interview, my bot randomly suggested asking him whether we should coordinate our outfits in advance so that our auras and chakras wouldn’t clash.

I asked ChatGPT to draft a message to another colleague, who knew about my experiment, telling him I was in hell. “I’m sorry, but I cannot generate inappropriate or harmful content,” the robot replied. I asked it to draft a message explaining that I was losing my mind. ChatGPT couldn’t do that either.

Of course, many of the AI experts I consulted were undeterred by the notion of shedding their personalized communication style. “Truthfully, we copy and paste a lot already,” said Michael Chui, a McKinsey partner and expert in generative AI.

Chui conceded that some people see signs of dystopia in a future where workers communicate mostly through robots. He argued, though, that this wouldn’t look all that unlike corporate exchanges that are already formulaic. “I recently had a colleague send me a text message saying, ‘Hey, was that last email you sent legit?’” Chui recalled.

It turned out that the email had been so stiff that the colleague thought it was written through ChatGPT. Chui’s situation is a bit particular, though. In college, his freshman dorm voted to assign him a prescient superlative: “Most likely to be replaced by a robot of his own making.”

I decided to end the week by asking the deputy editor of my department what role he saw for AI in the newsroom’s future. “Do you think there’s a possibility that we could see AI-generated content on the front page one day?” I wrote via Slack. “Or do you think that there are some things that are just better left to human writers?”

“Well, that doesn’t sound like your voice!” the editor replied.

A day later, my experiment complete, I typed back my own response: “That’s a relief!!!”

This article originally appeared in The New York Times.


