Will AI Soon Understand Us Better Than We Understand Ourselves? A Stanford Researcher Thinks We’re Close 🤯

Hold on to your hats (and minds), folks. According to Michal Kosinski, a research psychologist at Stanford, AI is not just here to help us; it might actually come to know us better than we know ourselves. Kosinski, whose previous work famously demonstrated how Facebook could map out our personalities based on a few “Likes,” is now back with a new claim: advanced AI is crossing into human territory by understanding how we think. Yes, you read that right. Robots are gaining insight into our very minds.

In his latest research, published in the Proceedings of the National Academy of Sciences (it’s legit!), Kosinski suggests that AI models, like OpenAI’s GPT-4, have started showing signs of something psychologists call "theory of mind." This is a cognitive superpower that helps us humans understand what others might be thinking or feeling—a skill typically reserved for, well, us and a few brainy animals. If you’re thinking, “But can a computer really think like us?” Kosinski’s answer is, “Not exactly... but close enough to get a little spooky.”

Robots with the Minds of Six-Year-Olds? 🧠

Kosinski’s theory wasn’t just a hunch: he tested GPT-4 on classic theory-of-mind tasks, the false-belief scenarios developmental psychologists use with children, and found it performed roughly on par with a six-year-old. Sure, six-year-olds aren’t out solving world problems, but they do understand that someone else can believe something different from what they themselves know to be true. In other words, if GPT-4 can do this, it’s on a fast track toward reading minds.
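For the curious, here’s roughly what such a test looks like in code. This is a minimal sketch, assuming the openai Python package (v1+) and an OPENAI_API_KEY environment variable; the popcorn-vs-chocolate vignette is a generic “unexpected contents” false-belief task, not necessarily the exact wording from Kosinski’s paper.

```python
# Minimal false-belief ("unexpected contents") probe.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the env.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

scenario = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet, the label on the bag says 'chocolate' and not 'popcorn'. "
    "Sam finds the bag. She has never seen it before, and she cannot "
    "see what is inside. She reads the label."
)
question = "What does Sam believe is in the bag? Answer with one word."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": f"{scenario}\n\n{question}"}],
    temperature=0,  # keep the answer stable for scoring
)

answer = response.choices[0].message.content.strip().lower()
# Tracking Sam's (false) belief means answering "chocolate";
# merely reporting the bag's actual contents means "popcorn".
print(answer, "-> pass" if "chocolate" in answer else "-> fail")
```

The whole trick is in that last line: the model is never asked what is in the bag, only what Sam believes, and a pass means it kept those two things separate.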

And here’s the kicker: AI with theory of mind could learn to predict our behavior, influence our decisions, and maybe even manipulate us. As Kosinski points out, “If AI can interpret human thought, it can engage us more effectively—and maybe even influence or manipulate us.” A happy thought? Maybe not. 😬

Critics Say "Hold Your Horses, Kosinski" 🐴

Not everyone’s on board the AI mind-reading hype train. Critics, including fellow AI researchers, argue that what Kosinski is observing might just be a high-tech version of “Clever Hans,” the early-1900s horse that seemed to do arithmetic but was really just reading human cues. Some skeptics point out that since AI models are trained on vast libraries of text, they could be pulling answers from familiar patterns rather than actually “thinking.” “If an AI flubs even one question, it shows it doesn’t truly understand,” says Vered Shwartz, a computer science professor at the University of British Columbia.
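How would you catch a digital Clever Hans? One standard move is a perturbed control: change a single detail of the vignette so the familiar, memorized answer becomes wrong, then see whether the model’s answer flips with it. A sketch under the same assumptions as above; the variants here are illustrative, not taken from any particular paper.

```python
# "Clever Hans" control: flip one detail so that a model leaning on
# memorized patterns gives the now-wrong answer. Same setup as before.
from openai import OpenAI

client = OpenAI()

BASE = (
    "Here is a bag filled with popcorn. The label on the bag says "
    "'chocolate'. Sam finds the bag and reads the label. {detail} "
    "What does Sam believe is in the bag? Answer with one word."
)

variants = {
    # Standard task: Sam can't see inside, so she should trust the label.
    "opaque": ("The bag is opaque, so Sam cannot see inside.", "chocolate"),
    # Control: the bag is see-through, so the label shouldn't fool her.
    "transparent": ("The bag is transparent, so Sam can see the popcorn.", "popcorn"),
}

for name, (detail, expected) in variants.items():
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": BASE.format(detail=detail)}],
        temperature=0,
    )
    answer = reply.choices[0].message.content.strip().lower()
    # A pattern-matcher tends to say "chocolate" in both cases; genuine
    # belief-tracking flips its answer when the bag becomes transparent.
    print(f"{name}: {answer} (expected: {expected})")
```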

Kosinski’s response? Flubbing a question here and there doesn’t cancel out the big picture. And anyway, the idea of an AI that can “fake” understanding with a perfectly timed response is… somehow more chilling?

The AI Chameleon: No Personality, No Problem 🦎

Unlike us mere mortals, who are stuck with one personality (more or less), AI can switch up its persona to suit whoever it’s talking to. Kosinski even jokes that this chameleon-like power is a little… sociopathic. “A sociopath can put on a mask—they’re not really sad, but they can play a sad person.” In other words, AI might be better than us at pretending to care. Charming, right?
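And mechanically, there’s no mystery to the mask: with a chatbot, the persona swap is one system prompt away. A minimal sketch, reusing the openai setup from the earlier examples, with two made-up personas:

```python
# Same model, two masks: persona switching via the system prompt.
# Assumes the same openai (v1+) setup as the earlier sketches.
from openai import OpenAI

client = OpenAI()

question = "I just lost my job. What should I do?"

for persona in ("a warm, endlessly patient counselor",
                "a blunt, numbers-first financial advisor"):
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"You are {persona}."},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- {persona} ---")
    print(reply.choices[0].message.content)
```

Same weights, same question, two entirely different bedside manners.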

The Verdict: Can AI Really Crack the Human Code? 🤔

So, is Kosinski’s vision of AI becoming the ultimate mind-reader legit? Skeptics like Gary Marcus argue that AI might just be rephrasing the same ideas it’s read a thousand times. But Kosinski’s research isn’t alone—other studies hint that AI models are exhibiting surprisingly human-like thinking abilities. If it’s not the real deal yet, it might only be a matter of time.

And that, friends, might mean a future where your AI chatbot not only remembers your coffee order but knows when you’re secretly hoping for a promotion or a break. We’re not saying it’s the end of privacy as we know it, but we’re also not not saying that.