CoAuthor: Stanford experiments with collaborative writing between humans and AI

This article is an existential crisis. It is written by a professional writer who writes about artificial intelligence that helps writers write. I have great doubts about this. Is that good? I mean, shouldn’t people be writing their own content? And does this mean the writing is on the wall for an entire profession? Will there be no more writers? We all need to ask ourselves what our role will be in this brave new world.

The text in italics above and below was written by a large language model. While professional writers may not need to fear for their careers just yet, at least judging by the example above, the model seems to understand the subject well and to convey the existential dread its fellow writers (myself included) feel.

Meet “CoAuthor.” It is an interface, a dataset and an experiment in one. CoAuthor comes from Mina Lee, a doctoral student in computer science at Stanford University; her advisor Percy Liang, an associate professor of computer science at Stanford and director of the Center for Research on Foundation Models, part of the Stanford Institute for Human-Centered Artificial Intelligence; and her collaborator Qian Yang, an assistant professor at Cornell University.

“We believe that language models have enormous potential to help our writing process. People are already finding these models useful and incorporating them into their workflows. For example, there are several books and award-winning essays co-written with such models,” Lee says.

Through her experiments, Lee has come to believe that language models are most useful and powerful when they improve people’s writing, rather than replace it.

“We see a language model as an ’employee’ in the writing process that can increase human productivity and creativity, allowing us to write more expressively and faster,” she says.

Intangible qualities

AI that helps people write is not new. Google’s predictive search is a simple example, as are the algorithms for next-word text suggestions on a smartphone. Other apps help you compose an email or even write code. So why not create an AI that helps people write well?

Writing computer code or texting a friend is different from writing a catchy poem or a useful essay. Those pieces require creative writers who come up with combinations of words that are original, interesting and thought-provoking. It’s hard to imagine a machine writing like, say, Cormac McCarthy. But maybe what’s been missing is the right artificial intelligence tool.

CoAuthor is based on GPT-3, one of OpenAI’s recent large language models, trained on a huge collection of text already written on the internet. It would be quite a claim to say that a model built on existing text can create something original, but Lee and her collaborators wanted to see how it could encourage writers to deviate from their routines – to go outside their comfort zones (e.g., the vocabularies they use every day) – and write something they would not otherwise have written. They also wanted to understand the impact such collaborations have on a writer’s personal sense of accomplishment and ownership.

“We want to see if AI can help people achieve the elusive qualities of great writing,” Lee says.

Machines are good at searching, retrieving and spotting connections. People are good at recognizing creativity. If you think this article is well written, it’s because of the human author, not in spite of him.

AI/Human Collaboration

The goal, Lee says, wasn’t to build a system that could make people write better and faster. Instead, the aim was to explore the potential of recent large language models to aid in the writing process and see where they succeed and fail. The researchers built CoAuthor as an interface that records keystroke-level writing sessions, collected a large interaction dataset as writers worked with GPT-3, and analyzed how human writers and the AI work together.

Illustration: flowchart of how a writer works with CoAuthor (image via Stanford)

The researchers engaged more than 60 people to write more than 1,440 stories and essays, each with the help of CoAuthor. When the writer starts typing, he or she can press the “tab” key and the system presents five suggestions generated by GPT-3. The writer can then accept a suggestion based on his or her own sensibilities, modify it, or ignore the suggestions altogether.
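
To make the mechanics concrete, here is a minimal sketch of that interaction loop in Python. The function names and signatures are illustrative assumptions, not CoAuthor’s actual code; query_gpt3 stands in for whatever completion endpoint the interface calls.

```python
# Illustrative sketch of the tab-to-suggest loop described above; names and
# signatures are assumptions, not CoAuthor's actual implementation.
from typing import Callable, List, Optional

def suggestion_round(text_so_far: str,
                     query_gpt3: Callable[[str, int], List[str]],
                     choose: Callable[[List[str]], Optional[int]],
                     n_suggestions: int = 5) -> str:
    """Run one tab-triggered round: fetch suggestions, let the writer decide."""
    suggestions = query_gpt3(text_so_far, n_suggestions)   # five GPT-3 continuations
    picked = choose(suggestions)       # index of the accepted suggestion, or None to dismiss
    if picked is None:
        return text_so_far             # writer ignores the suggestions and keeps typing
    return text_so_far + suggestions[picked]   # accepted text is inserted and can still be edited
```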

As a dataset, CoAuthor tracks all interactions between writers and the model, including insertion and deletion of text, as well as cursor movement and suggestion selection. With this rich interaction data, researchers can analyze when a writer asks for suggestions, how often the writer accepts them, which suggestions are accepted, how they are edited, and how they influence subsequent writing.
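
For a sense of what such a log might contain, here is a sketch of one session as a list of events. The field names and event labels are assumptions chosen for illustration; the released dataset’s actual schema may differ.

```python
# Hypothetical schema for keystroke-level logging of a CoAuthor-style session;
# field names and event labels are illustrative, not the dataset's exact format.
from dataclasses import dataclass

@dataclass
class WritingEvent:
    timestamp_ms: int       # milliseconds since the session started
    source: str             # "writer" or "model"
    name: str               # e.g. "text-insert", "text-delete", "cursor-move",
                            # "suggestion-open", "suggestion-accept", "suggestion-reject"
    text_delta: str = ""    # text inserted or removed, if any
    cursor_pos: int = 0     # caret position after the event

session = [
    WritingEvent(1200, "writer", "text-insert", "The house at the end of the road", 32),
    WritingEvent(5400, "writer", "suggestion-open"),
    WritingEvent(7100, "writer", "suggestion-accept", " had been empty for years.", 58),
]
```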

As an analytical tool, CoAuthor can determine how “useful” accepted suggestions are to the human writer or, conversely, treat rejected suggestions as indicators of the writer’s taste that could be used to improve the suggestions of future language models.
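
Building on the hypothetical event schema sketched above, the simplest analysis of that kind is a per-session acceptance rate:

```python
# One analysis a researcher could run over such logs: the share of suggestion
# requests that ended in an acceptance. Uses the WritingEvent records above.
def acceptance_rate(session) -> float:
    opened = sum(1 for e in session if e.name == "suggestion-open")
    accepted = sum(1 for e in session if e.name == "suggestion-accept")
    return accepted / opened if opened else 0.0

print(f"acceptance rate: {acceptance_rate(session):.0%}")   # 100% for the toy session above
```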

After each writing session, the writers rated their satisfaction with the collaboration and their own sense of productivity and ownership of the resulting work. Often, the writers said, the words and ideas proposed by CoAuthor were welcomed as both novel and useful. At other times, the suggestions were ignored because they took the writer in a different direction than intended. And sometimes the writers felt that the suggestions were too repetitive or vague and, as a result, did not add much value to their stories and essays.

Lee found that the degree of collaboration between GPT-3 and the writers seems to have little effect on their satisfaction with the writing process, but it could negatively affect their sense of ownership of the resulting text. On the other hand, many participants enjoyed taking new ideas from the model suggestions and then using them in writing.

“I found the names especially helpful,” one of CoAuthor’s participants wrote in a post-survey. “I was actually trying to come up with a stereotypical rich jock name and the AI gave me… [one]. Perfect!”

The creators of CoAuthor also found that using large language models increased writers’ productivity, as measured by the number of words produced and the amount of time spent writing. On a purely practical but intriguing level, sentences written jointly by a human writer and the model seem to contain fewer spelling and grammatical errors, as well as a greater variety of vocabulary, than writing produced by the human alone.
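
The article doesn’t specify how vocabulary variety was measured; a simple proxy one might compute over the logged text is the type-token ratio (distinct words divided by total words):

```python
# A rough proxy for "variety of vocabulary": the type-token ratio. This is an
# illustrative metric, not necessarily the one the CoAuthor study used.
import re

def type_token_ratio(text: str) -> float:
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

solo = "The dog ran. The dog ran fast. The dog ran very fast."
cowritten = "The dog bolted, a copper streak against the fading orchard light."
print(type_token_ratio(solo), type_token_ratio(cowritten))  # the second scores higher
```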

“The best human-model collaboration seems to be when the writer uses his or her own creative sensibilities to evaluate the suggestions and decide what to keep and what to leave out,” explains Lee. “Overall, they felt that CoAuthor brings new ideas to the table and improves their productivity and their artistry.”

Cause for concern?

In the short term, there are some technical hurdles to overcome. It is well documented that large language models are prone to generating biased and toxic language. Currently, CoAuthor filters out potentially problematic suggestions based on a list of banned words. However, there is an inherent tension between applying more extensive filtering and faithfully evaluating what the language models can do.
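
As a sketch of what word-list filtering looks like in practice (the banned-word list and matching rule below are placeholders, not CoAuthor’s actual configuration):

```python
# Minimal word-list filter of the kind described above; the list entries and
# the matching rule are placeholders, not CoAuthor's actual configuration.
import re

BANNED_WORDS = {"badword1", "badword2"}   # placeholder entries

def is_acceptable(suggestion: str) -> bool:
    tokens = set(re.findall(r"[a-z']+", suggestion.lower()))
    return tokens.isdisjoint(BANNED_WORDS)

def filter_suggestions(suggestions):
    return [s for s in suggestions if is_acceptable(s)]
```

The more words such a list blocks, the less of the model’s raw behavior researchers get to observe, which is exactly the tension described above.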

In the end, maybe the AI that helps produce masterpieces is not the kind that excels at polished prose or provocative poetry, but rather the kind that offers suggestions to complement a human’s writing. That is already starting to happen, as CoAuthor deftly shows. But however much the wordsmith leans on technology for help, artificial intelligence that writes well on its own is still a long way off.

Andrew Myers is a contributing writer for the Stanford Institute for Human-Centered AI.

This story originally appeared on hai.stanford.edu. Copyright 2022
