No, Siri, I Won’t Marry You: Why Our Love for AI Doesn’t Make Them Sentient
It’s time to talk about our AI obsession and the real risks of anthropomorphizing our digital pals (looking at you, ChatGPT!)
In a recent article, “AI isn’t close to becoming sentient — the real danger lies in how easily we’re prone to anthropomorphize it,” Nir Eisikovits argues that our tendency to project human qualities onto machines creates real risks of psychological entanglement with AI technology. And honestly, it’s hard not to agree.
Let’s face it: we’ve all had that moment when we asked Siri or Alexa a ridiculous question just to see how they’d respond. But does that mean they’re about to rise up and overthrow humanity à la Terminator 2? No, not even close.
Eisikovits makes it clear that AI systems like ChatGPT and other large language models are sophisticated sentence-completion applications, not sentient beings plotting our demise. They’re not HAL 9000 from “2001: A Space Odyssey” planning to murder us in cold blood; they’re more like a really smart autocorrect on steroids.
However, the author points out that the real problem isn’t the machines themselves, but our propensity to anthropomorphize them, to project human features onto our technologies. Remember Theodore Twombly from the movie Her? Yeah, let’s not be…