
Why we should be careful how we talk about large language models


For decades, we’ve personified our devices and applications with verbs like “thinks,” “knows,” and “believes.” And in most cases, such anthropomorphic descriptions are harmless.

But we are entering an era where we need to be careful how we talk about software, artificial intelligence (AI), and especially large language models (LLMs), which have become impressively sophisticated at mimicking human behavior while remaining fundamentally different from the human mind.

It is a serious mistake to mindlessly apply to artificial intelligence systems the same intuitions we use when interacting with each other, warns Murray Shanahan, professor of Cognitive Robotics at Imperial College London and a research scientist at DeepMind, in a new paper titled “Talking About Large Language Models.” To make the most of the remarkable capabilities these systems possess, we need to be aware of how they work and avoid ascribing to them capacities they lack.


Humans vs. LLMs

“It’s amazing how human-like LLM-based systems can be, and they keep getting better. After interacting with them for a while, it’s all too easy to come to see them as entities with minds like ours,” Shanahan told VentureBeat. “But they’re actually more of an alien form of intelligence, and we don’t fully understand them yet. So we have to be careful when we integrate them into human affairs.”

Human language use is an aspect of collective behavior. We acquire language through our interactions with our community and the world we share with them.

“As a baby, your parents and caregivers provided ongoing commentary in natural language as they pointed to things, put or took things in your hands, moved things within your field of vision, played with things together, and so on,” Shanahan said. “LLMs are trained in a very different way, without ever inhabiting our world.”

LLMs are mathematical models that represent the statistical distribution of tokens in a corpus of human-generated text (tokens can be words, parts of words, characters, or punctuation). They generate text in response to a prompt or question, but not in the same way a human would.

Shanahan boils down interacting with an LLM to this: “Here’s a snippet of text. Tell me how this excerpt could continue. According to your model of human language statistics, which words are likely to come next?”
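That next-token framing can be made concrete with an intentionally tiny sketch. This is not how a real LLM works (real models use neural networks over billions of parameters, not bigram counts), but it shows the same underlying operation: estimate which token is statistically most likely to follow, given text seen in training.

```python
from collections import Counter, defaultdict

# Toy illustration only: estimate next-token statistics from a tiny
# hand-written corpus, then ask which token most likely continues a prompt.
corpus = (
    "the country south of rwanda is burundi . "
    "the country north of burundi is rwanda . "
    "the country south of france is spain ."
).split()

# Count bigram frequencies: how often each token follows the previous one.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(token):
    """Return the statistically most likely next token after `token`."""
    return following[token].most_common(1)[0][0]

print(most_likely_next("country"))  # "south" (2 occurrences vs. 1 for "north")
```

The model has no notion of geography; it only reflects the frequencies of its training text, which is exactly the distinction Shanahan draws.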

When trained on a large enough number of examples, the LLM can produce correct answers at an impressive rate. Nevertheless, the difference between humans and LLMs is extremely important. For people, different fragments of language can have different relations to the truth. We can tell the difference between fact and fiction, such as Neil Armstrong’s journey to the moon and Frodo Baggins’ return to the Shire. For an LLM that generates statistically probable sequences of words, these differences are invisible.

“This is one of the reasons it’s a good idea for users to repeatedly remind themselves of what LLMs really do,” Shanahan writes. This reminder can help developers avoid the “misleading use of philosophically loaded words to describe the capabilities of LLMs, words like ‘belief’, ‘knowledge’, ‘understanding’, ‘self’ or even ‘consciousness’.”

The fading barriers

When talking about phones, calculators, cars and the like, it usually doesn’t hurt to use anthropomorphic language (for example, “My watch doesn’t realize it’s daylight saving time”). We know that these terms are useful shorthand for complex processes. However, Shanahan warns, in the case of LLMs, such is their power that things can get blurry.

For example, a lot of research has been done on prompt-engineering techniques that can improve the performance of LLMs on complicated tasks. Sometimes adding a simple phrase to the prompt, such as “Let’s think step by step,” can improve the LLM’s ability to perform reasoning and planning tasks. Such results can reinforce “the temptation to see [LLMs] as having human-like characteristics,” warns Shanahan.
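The technique itself is mundane: the cue phrase is simply appended to the prompt text before it is sent to the model. A minimal sketch (the function name and prompt format are illustrative assumptions, not any particular library’s API):

```python
def build_cot_prompt(question):
    """Wrap a question in a prompt ending with a chain-of-thought cue.

    The appended phrase gives the model no new information; it just makes
    token sequences that spell out intermediate steps more statistically
    likely continuations of the prompt."""
    return f"Q: {question}\nA: Let's think step by step."

print(build_cot_prompt("Which country is south of Rwanda?"))
```

That the trick works by shifting the statistics of likely continuations, rather than by “reminding” the model to reason, is Shanahan’s point: the improvement is real, but it is not evidence of a human-like thought process.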

But again, we have to consider the differences between reasoning in humans and what LLMs do. For example, if we ask a friend, “Which country is south of Rwanda?” and they reply, “I think it’s Burundi,” we know they understand our intent, our background knowledge and our interests. They also know we have the capacity and resources to verify their answer, such as looking at a map, googling the term, or asking other people.

However, when you ask the same question to an LLM, that rich context is missing. In many cases, some context is supplied behind the scenes by adding material to the prompt, such as framing it in a script-like format that the AI was exposed to during training. This increases the likelihood that the LLM will generate the correct answer. But the AI ‘knows’ nothing about Rwanda, Burundi or their relationship to each other.

“Knowing that the word ‘Burundi’ is likely to follow the words ‘The country south of Rwanda’ is not the same as knowing that Burundi is south of Rwanda,” writes Shanahan.
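That distinction can be caricatured in code. The sketch below is deliberately crude and is not how LLMs are implemented; it only contrasts a lookup over surface strings (which “answers” only the exact phrasing it has stored) with an explicit relational fact (which answers any query that uses the fact itself):

```python
# A table of memorized surface strings vs. an explicit relational fact.
string_completions = {
    "The country south of Rwanda is": "Burundi",
}

south_of = {"Rwanda": "Burundi"}  # the fact, stored as a relation

def complete(prompt):
    """Look up a continuation for an exact surface string, if any."""
    return string_completions.get(prompt)

print(complete("The country south of Rwanda is"))  # Burundi
print(complete("Rwanda's southern neighbour is"))  # None: phrasing unseen
print(south_of["Rwanda"])                          # Burundi, from the fact
```

Real LLMs generalize far better than a string table, but the moral survives: producing the right continuation of a phrase is not the same as holding the fact the phrase expresses.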

Careful use of LLMs in real-world applications

As LLMs continue to make progress, as developers we need to be careful about how we build applications on them. And as users, we need to be careful about how we feel about our interactions with them. Reshaping how we think about LLMs and AI in general can have a major impact on the security and robustness of their applications.

The expansion of LLMs may require a shift in how we use familiar psychological terms, such as “believes” and “thinks,” or perhaps the introduction of new words, Shanahan said.

“It may take a long period of interacting and living with these new types of artifacts before we learn how best to talk about them,” Shanahan writes. “Meanwhile, we must try to resist the siren call of anthropomorphism.”

