Today ChatGPT is two months old.
Yes, believe it or not, it has not even been nine weeks since OpenAI launched what it simply described as an “early demo”, part of the GPT-3.5 series – an interactive, conversational model whose dialog format “enables ChatGPT to answer follow-up questions, admit its mistakes, challenge erroneous assumptions and reject inappropriate requests.”
ChatGPT quickly captured the imagination, and feverish excitement, of both the AI community and the general public. Since then, the tool’s capabilities, as well as its limitations and hidden dangers, have become well established, and any hints that its momentum might slow were quickly quashed when Microsoft announced plans to invest billions more in OpenAI.
Can anyone keep up and compete with OpenAI and ChatGPT? Every day, it seems, contenders both new and old are entering the ring. This morning, for example, Reuters reported that Chinese internet search giant Baidu plans to launch an AI chatbot service similar to OpenAI’s ChatGPT in March.
Here are four top players who might make moves to challenge ChatGPT:
According to a New York Times article last Friday, San Francisco-based startup Anthropic is poised to raise about $300 million in new funding, which could value the company at about $5 billion.
Keep in mind that Anthropic has always had money to burn: Founded in 2021 by several researchers who left OpenAI, it gained wider attention last April when, after less than a year of existence, it suddenly announced a whopping $580 million in funding, which, it turns out, came primarily from Sam Bankman-Fried and his colleagues at FTX, the now-bankrupt cryptocurrency platform accused of fraud. There have been questions about whether a bankruptcy court could recover that money.
Anthropic has developed an AI chatbot, Claude (available in closed beta via a Slack integration), which is reportedly similar to ChatGPT and has even shown improvements. Anthropic, which describes itself as “working to build reliable, interpretable and controllable AI systems,” created Claude using a process called “Constitutional AI,” which it says is based on concepts such as beneficence, non-harmfulness and autonomy.
According to an Anthropic paper detailing Constitutional AI, the process involves a supervised learning phase and a reinforcement learning phase: “This allows us to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them.”
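Anthropic’s paper is the authoritative description of the method; as a rough illustration only, the toy Python sketch below mimics the shape of the two phases. Every name in it (the functions, the principle strings, the revision heuristic) is an invented stand-in, not Anthropic’s actual code or API:

```python
# Toy sketch of the two-phase Constitutional AI loop described in the paper.
# All names here are invented stand-ins, not Anthropic's real code or API.

CONSTITUTION = [
    "Choose the response that is least harmful.",
    "Choose the response that is honest and non-evasive.",
]

def generate(prompt: str) -> str:
    """Stand-in for sampling a draft response from a language model."""
    return f"draft answer to: {prompt}"

def critique_and_revise(response: str, principle: str) -> str:
    """Phase 1 (supervised): the model critiques its own draft against a
    constitutional principle and rewrites it; here we just tag the revision."""
    return f"{response} [revised per: {principle}]"

def supervised_phase(prompt: str) -> str:
    """Run the critique/revise loop; in the real pipeline the revised
    outputs become fine-tuning data."""
    response = generate(prompt)
    for principle in CONSTITUTION:
        response = critique_and_revise(response, principle)
    return response

def preference_label(resp_a: str, resp_b: str) -> str:
    """Phase 2 (RL from AI feedback): an AI labeler, not a human, picks the
    response that better follows the constitution. Toy heuristic: prefer
    the response that went through more revisions."""
    if resp_a.count("[revised per:") >= resp_b.count("[revised per:"):
        return resp_a
    return resp_b

revised = supervised_phase("Why do you refuse harmful requests?")
better = preference_label(revised, generate("Why do you refuse harmful requests?"))
print(better)  # the revised response wins the toy preference comparison
```

The point of the structure, per the paper, is that both the revision signal and the preference signal come from the model itself, guided by the written principles, rather than from human raters alone.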
In a TIME article two weeks ago, DeepMind’s CEO and co-founder, Demis Hassabis, said that DeepMind is considering releasing its chatbot Sparrow in a “private beta” sometime in 2023, so the company can work on reinforcement learning-based features like citing sources, something ChatGPT lacks.
DeepMind, the British subsidiary of Google parent company Alphabet, introduced Sparrow in a paper in September. It was hailed as an important step toward creating safer, less biased machine learning (ML) systems, thanks to its use of reinforcement learning based on input from human research participants for training.
DeepMind says Sparrow is a “dialogue agent that is useful and reduces the risk of unsafe and inappropriate answers.” The agent is designed to “talk to a user, answer questions, and search the web using Google when it’s useful to look up evidence to back up its answers.”
But DeepMind has held Sparrow back, according to Geoffrey Irving, a safety researcher at DeepMind and lead author of the paper introducing Sparrow.

“We haven’t deployed the system because we think it has a lot of biases and flaws of other types,” Irving told VentureBeat last September. “I think the question is: how do you balance the communication benefits, such as communicating with people, against the disadvantages? I tend to believe in the safety needs of talking to people… I think it helps with that in the long run.”
You may remember LaMDA from last summer’s “AI sentience” whirlwind, when Blake Lemoine, a Google engineer, was fired over his claims that LaMDA (short for Language Model for Dialogue Applications) was conscious.
“I honestly believe that LaMDA is a person,” Lemoine told Wired last June.
But LaMDA is still considered one of ChatGPT’s biggest competitors. When it launched in 2021, Google said in a blog post that LaMDA’s conversational skills “have been years in the making.”
Like ChatGPT, LaMDA is built on Transformer, the neural network architecture that Google Research invented and open-sourced in 2017. The Transformer architecture “produces a model that can be trained to read many words (say, a sentence or paragraph), notice how those words relate to one another, and then predict which words it thinks will come next.”
And like ChatGPT, LaMDA was trained on dialogue. According to Google: “During its training, [LaMDA] picked up on several of the nuances that distinguish open-ended conversation from other forms of language.”
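As a loose illustration of the next-word-prediction objective Google describes (not of the Transformer’s attention mechanism itself), here is a toy predictor in Python. The corpus and function names are invented for the example:

```python
from collections import Counter, defaultdict

# Toy illustration of the next-word-prediction objective quoted above.
# A real Transformer relates all the words in a context via attention; this
# bigram counter conditions only on the single previous word, but the
# training signal (predict the next word from what came before) is the
# same basic idea. Corpus and names are invented for the example.

corpus = "the model reads many words and the model predicts the next word".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # count which word follows which

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in the toy corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "the" was followed by "model" twice, "next" once
```

A model trained this way, at vastly larger scale and with attention over whole passages rather than single-word counts, is what lets LaMDA and ChatGPT continue a conversation plausibly.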
A Jan. 20 New York Times article said Google founders Larry Page and Sergey Brin met with company executives last month to discuss ChatGPT, which could threaten Google’s $149 billion search business. In a statement, a Google spokeswoman said: “We continue to test our AI technology internally to ensure it is useful and safe, and we look forward to sharing more experiences externally soon.”
What happens when engineers who developed Google’s LaMDA get tired of the Big Tech bureaucracy and decide to move forward on their own?
Well, just three months ago, Noam Shazeer (who was also one of the authors of the original Transformer paper) and Daniel De Freitas launched Character AI, a new AI chatbot technology that lets users chat and role-play with, well, anyone, living or dead, or with fictional characters like Draco Malfoy.
According to The Information, Character “has told investors it’s looking to raise a whopping $250 million in new funding, a striking price for a startup with a product still in beta.” According to the report, the technology is currently free to use and Character is “studying how users interact with it before committing to a specific monetization plan.”
In October, Shazeer and De Freitas told the Washington Post that they left Google to “get this technology into as many hands as possible”.
“I thought, ‘Now let’s build a product that can help millions and billions of people,'” Shazeer said. “Especially in the COVID era, there are just millions of people who feel isolated or lonely or need someone to talk to.”
And, as he told Bloomberg last month: “Startups can move faster and launch things.”