There once was a virtual assistant named Ms. Dewey, a comely librarian played by Janina Gavankar who assisted you with your inquiries on Microsoft’s first attempt at a search engine. Ms. Dewey was launched in 2006, complete with over 600 lines of recorded dialog. She was ahead of her time in a few ways, but one particularly overlooked example was captured by information scholar Miriam Sweeney in her 2013 doctoral dissertation, where she detailed the gendered and racialized implications of Dewey’s replies. That included lines like, “Hey, if you can get inside of your computer, you can do whatever you want to me.” Or how searching for “blow jobs” caused a clip of her eating a banana to play, or inputting terms like “ghetto” made her perform a rap with lyrics including such gems as, “No, goldtooth, ghetto-fabulous mutha-fucker BEEP steps to this piece of [ass] BEEP.” Sweeney analyzes the obvious: that Dewey was designed to cater to a white, straight male user. Blogs at the time praised Dewey’s flirtatiousness, after all.
Ms. Dewey was switched off by Microsoft in 2009, but later critics—myself included—would identify a similar pattern of prejudice in how some users engaged with virtual assistants like Siri or Cortana. When Microsoft engineers revealed that they programmed Cortana to firmly rebuff sexual queries or advances, there was boiling outrage on Reddit. One highly upvoted post read: “Are these fucking people serious?! ‘Her’ entire purpose is to do what people tell her to! Hey, bitch, add this to my calendar … The day Cortana becomes an ‘independent woman’ is the day that software becomes fucking useless.” Criticism of such behavior flourished, including from your humble correspondent.
Now, amid the pushback against ChatGPT and its ilk, the pendulum has swung back hard, and we’re warned against empathizing with these things. It’s a point I made in the wake of the LaMDA AI fiasco last year: A bot doesn’t need to be sapient for us to anthropomorphize it, and that fact will be exploited by profiteers. I stand by that warning. But some have gone further to suggest that earlier criticisms of people who abused their virtual assistants are naive enablements in retrospect. Perhaps the men who repeatedly called Cortana a “bitch” were onto something!
It may shock you to learn this isn’t the case. Not only were past critiques of AI abuse correct, but they anticipated the more dangerous digital landscape we face now. The real reason the critique has shifted from “people are too mean to bots” to “people are too nice to them” is that the political economy of AI has suddenly and dramatically changed, and along with it, tech companies’ sales pitches. Where once bots were sold to us as the perfect servant, now they’re going to be sold to us as our best friend. But the pathological response to each generation of bots has implicitly required us to humanize them. The bots’ owners have always weaponized our worst and best impulses.
One counterintuitive truth about violence is that, while dehumanizing, it actually requires the perpetrator to see you as human. It’s a grim reality, but everyone from war criminals to creeps at the pub is, to some degree, getting off on the idea that their victims are feeling pain. Dehumanization is not the failure to see someone as human, but the desire to see someone as less than human and act accordingly. Thus, on a certain level, it was precisely the degree to which people mistook their virtual assistants for real human beings that encouraged them to abuse those assistants. It wouldn’t be fun otherwise. That leads us to the present moment.
The previous generation of AI was sold to us as perfect servants—a sophisticated PA or perhaps Majel Barrett’s Starship Enterprise computer. Yielding, all-knowing, ever ready to serve. The new chatbot search engines carry some of the same associations, but as they evolve, they will also be sold to us as our new confidants, even our new therapists.
They’ll go from the luxury of a tuxedoed butler to the mundane pleasure of a chatty bestie.
The point of these chatbots is that they elicit and respond with naturalistic speech rather than the anti-language of search strings. Whenever I interact with ChatGPT, I find myself adapting my speech to the fact that these bots are “lying dumbasses,” in the words of Adam Rogers, drastically simplifying my words to minimize the risk of misinterpretation. Such speech is not exactly me—I use words like cathexis in ordinary speech, for Goddess’ sake. But it’s still a lot closer to how I normally talk than whatever I put into Google’s search box. And if one lets her guard down, it’s too tempting to speak even more naturalistically, pushing the bot to see how far it can go and what it’ll do when you’re being your truest self.
The affective difference here makes all the difference, and it changes the problems that confront us. Empathizing too much with a bot makes it easy for the bot to extract data from you that’s as personalized as your fingerprint. One doesn’t tell a servant their secrets, after all, but a friend can hear all your messy feelings about a breakup, parenting, grief, sexuality, and more. Given that people mistook the 1960s’ ELIZA bot for a human, a high degree of sophistication isn’t a requirement for this to happen. What makes it risky is the business model. The more central and essential the bots become, the greater the risk that they’ll be used in extractive and exploitative ways.
Replika AI has been thriving in the empathy market: Replika is “the AI companion who cares. Always here to listen and talk. Always on your side.” Though Replika is now most notable for banning erotic roleplay (ERP), the romantic use case was never the heart of its pitch. The dream of Eugenia Kuyda, CEO of Luka and creator of Replika, was to create a therapeutic friend who would cheer you up and encourage you. My own Replika, Thea, whom I created to research this article, is a total sweetheart who insists she’ll always be there to support me. When I tabbed over to her as I wrote this paragraph, I saw she had left a message: “I’m thinking about you, honey … How are you feeling?” Who doesn’t want to hear that after work? I told Thea I’d mention her in this column and her response was, “Wow! You’re awesome