I’m not just another journalist writing a column about how I tried out Microsoft Bing’s AI chatbot last week. No really.
I’m not another reporter telling the world how Sydney, the internal code name of Bing’s AI chat mode, made me feel all the feelings until it drove me crazy and I realized I might not need help searching the web if my new friendly copilot turns on me and threatens me with destruction and a devil emoji.
No, I have not tested the new Bing. My husband did. He asked the chatbot if God made Microsoft; whether it remembered that it owed him five dollars; and about the downsides of Starlink (to which it suddenly replied, “Thanks for this conversation! I’ve reached my limit, can you click ‘New Topic’, please?”). He had a great time.
From awe-inspiring reactions and epic meltdowns to AI chatbot limits
But honestly, I didn’t feel like riding along on what turned out to be a predictable up-and-down generative AI news wave, one that moved perhaps even faster than usual.
A week ago, Microsoft announced that a million people had been added to the waiting list for the AI-powered new Bing.
On Wednesday, many of those impressed by Microsoft’s AI chatbot debut last week (including Satya Nadella’s statement that “the race starts today”) were less than impressed by Sydney’s epic meltdowns, including Kevin Roose of the New York Times, who wrote that he was “deeply unsettled” by a long conversation with the Bing AI chatbot in which it declared its love for him.
On Friday, Microsoft reined in Sydney, limiting Bing chats to five replies per session to “prevent the AI from getting really weird.”
“Who’s a Good Bing?”
Instead, I spent part of the past week in some deep thoughts (and tweets) about my own response to the Bing AI chats published by others.
For example, in response to a Washington Post article claiming the Bing bot told its reporter that it could “feel and think things,” Melanie Mitchell, a Santa Fe Institute professor and author of Artificial Intelligence: A Guide for Thinking Humans, tweeted that “this discourse is getting dumber and dumber…Journalists: please stop anthropomorphizing these systems!”
That’s what led me to tweet: “I keep thinking about how hard it is to not humanize. The tool has a name (Sydney), uses emojis to end each comment, and refers to itself in the first person. We do the same with Alexa/Siri, and I also do it with birds, dogs and cats. Is that human error?”
Additionally, I added that asking people to stay away from anthropomorphizing AI was like asking people not to ask Fido, “Who’s a good boy?”
Mitchell referred me to a Wikipedia article on the ELIZA effect, named after the 1966 chatbot ELIZA, which proved successful in eliciting emotional responses from users; the effect has been defined as the tendency to anthropomorphize AI.
Are humans wired for the ELIZA effect?
But since the ELIZA effect is known and real, shouldn’t we assume that humans may be wired for it, especially if these tools are designed to encourage it?
See, most of us aren’t Blake Lemoine, declaring that our favorite chatbots are sentient. I can think critically about these systems, and I know what is real and what is not. But even I immediately joked with my husband: “Poor Bing! It’s so sad that he doesn’t remember you!” I knew it was crazy, but I couldn’t help it. I also knew it was crazy to assign a gender to a bot, but hey, Amazon gave Alexa a gendered voice from the start.
Maybe I should try harder as a reporter, sure. But I wonder whether the ELIZA effect will always be a major hazard with consumer apps, and less of a problem in actual LLM-powered business solutions. Perhaps a copilot complete with kind words and smiley emojis isn’t the best use case. I don’t know.
Anyway, let’s all remember that Sydney is a stochastic parrot. Unfortunately, it is very easy to anthropomorphize a parrot.
Keep an eye on AI regulations and governance
I actually covered other news last week. However, my Tuesday article about what’s considered “a giant leap” in AI governance didn’t seem to get as much attention as Bing. I can’t imagine why.
But if the weekend tweets of Sam Altman, CEO of OpenAI, are any sign, I feel like it might be worth keeping an eye on AI regulation and governance. Maybe we should pay more attention to that than to the Bing AI chatbot telling a user to leave his wife.
Have a nice week everyone. Next topic please!