Robots are telling us how to live. It was only a matter of time; after all, around 700 million of us are now using ChatGPT on a weekly basis. And, according to data released by the chatbot’s maker, OpenAI, around 70 per cent of consumer usage is not related to work. In recent months, there have been increasing reports of people forming intimate relationships with their ChatGPT bots. And earlier this month, OpenAI announced plans to allow erotic content on the service.
I know people who use it for everything from medical advice and legal information to Hinge prompts and drafting texts to romantic partners. Now even artists, the very group whose livelihoods are threatened by the broader technology, are fessing up to using the service, with Jennifer Lawrence telling The New Yorker that she used ChatGPT for breastfeeding advice while she was crying. “You’re doing the most amazing thing for your baby,” the bot told her in response, adding, “You’re such a loving mother.”
The interviewer, Jia Tolentino, adds that Lawrence, who has two sons with her husband, Cooke Maroney, found the response “called into question the sincerity of anyone else” who had said the same. But Tolentino doesn’t ask Lawrence why she turned to ChatGPT in the first place, which, frankly, is what I found most alarming, because it shows just how normalised this behaviour has become, when asking a robot for help and emotional support should surely be anything but.
Bear in mind that AI has hardly been kind to celebrities thus far. OpenAI came under fire just a few weeks ago for the way its Sora app was recreating videos of dead public figures, who have no ability to opt in or out of the service; within seconds, the app can generate very convincing clips of deceased famous people in circumstances they were never in. “Please, just stop sending me AI videos of Dad,” wrote Zelda Williams on her Instagram stories in reference to videos of her late father, Robin Williams. “Stop believing I wanna see it or that I’ll understand, I don’t and I won’t. If you’re just trying to troll me, I’ve seen way worse. I’ll restrict and move on.”
Elsewhere, there are further concerns around celebrity deepfakes, as well as around so-called “AI actors” such as Tilly Norwood, the viral creation of an artificial intelligence talent studio called Xicoia. And yet here is Lawrence passing comment on ChatGPT without any apparent concern about any of this.
I’m aware I might be in the minority when I confess that I do not use ChatGPT. Despite almost all of my friends evangelising its benefits to me, I’ve remained stubbornly abstinent. I don’t care that it might make me do things faster – I’m already quite happy with my speed! – nor do I trust it for help with research. I’d rather ask friends for advice, and I find ChatGPT’s known penchant for sycophancy rather nauseating and unhelpful. To me, it all sounds like a giant exercise in getting one’s ego stroked; even the platform’s response to Lawrence illustrates that, offering a series of platitudes that surely amount to little more than “you’re doing great, sweetie”.
I’m also concerned about how many people are relying on it for support with their mental health – again, without questioning the potential dangers of leaning on a robot for psychological aid. According to data released by OpenAI, more than a million people express suicidal intent in their conversations with ChatGPT every week. The disclosure followed the case of a teenage boy, Adam Raine, who took his own life after months of conversations with the service. The company has since introduced parental control features that allow parents and teenagers to link their accounts, with a team of human reviewers notified if a teenager enters a prompt relating to self-harm.
New features or not, I’m still terrified by the proliferation of ChatGPT. But what terrifies me more is that not many other people seem to be. How quickly it has been accepted, normalised. At least, that’s how it feels when my friends flippantly mention their respective bots, citing advice and tips they’ve been given in casual conversation: “My therapist says” has quickly been replaced by “Chat says” – yes, people are abbreviating it to that, which I also find stomach-churning.
Where will all of this lead? To a world in which therapists, partners, and friends have been rendered redundant by effusive AI-generated computer systems? One where, in just a few years, I reckon those systems will have become walking, talking monstrosities? Will we all have robot companions? Will there be AI marriages? And children that are mere projections, so people can experience the feeling of nurturing without actually having to do any nasty nappy changes? And how long until those robots are running companies, governments and countries? What will remain of humanity as its significance slowly peters out?
These might sound like extreme predictions – or the premise of a new season of Black Mirror – but the way we’re talking about AI now makes me worry that they aren’t. Because if celebrities like Lawrence are able to slip into conversation that they asked ChatGPT for medical support during a moment of vulnerability without acknowledging the strangeness of that… well, I fear we may all be doomed. And there’s nothing ChatGPT can say to save us – though I doubt that will stop people from hoping that it might.


