Join top executives in San Francisco on July 11-12 to hear how leaders are integrating and optimizing AI investments for success. Learn more
Much has been written in recent months about the dangers of generative AI, and yet all I’ve seen boils down to three simple arguments, none of which reflect the greatest risk I see facing us. Before delving into this hidden danger of generative AI, it’s helpful to summarize the general warnings that have been circulating lately:
- The risk to jobs: Generative AI can now produce human-level work products ranging from illustrations and essays to scientific reports. This will be huge influence the job market, but I believe it’s a manageable risk as job definitions adapt to the power of AI. It will be painful for a while, but no different from how past generations adapted to other work-saving efficiencies.
- Risk of fake content: Generative AI can now create human-grade artifacts widely, including fake and misleading articles, essays, papers and video. Misinformation is not a new problem, but generative AI allows it to be mass-produced at levels never seen before. This is a big risk, but manageable. That’s because fake content can be made identifiable by either (a) mandating watermarking technologies that identify AI content at creation, or (b) deploying AI-based countermeasures trained to identify AI content after the fact.
- Risk from conscious machines: Many researchers worry that AI systems will scale up to a level where they will develop a “will of their own” and will take actions that are against human interests, or even threaten human existence. I think this is a real long-term risk. In fact, I wrote a titled “picture book for adults”. Arrival Spirit a few years ago that explores this danger in simple terms. Still, I don’t believe current AI systems will spontaneously become conscious without major structural improvements to the technology. So while this is a real danger for the industry to focus on, it’s not the most pressing risk I see for us.
What worries me most about the rise of generative AI?
From my perspective, the place where most security experts go wrong, including policymakers, is that they see generative AI primarily as a tool to create traditional content at scale. While the technology is quite adept at producing articles, images, and videos, more importantly generative AI will unleash an entirely new form of media that is highly personalized, fully interactive, and potentially much more manipulative than any form of targeted content. experienced so far.
Welcome to the age of interactive generative media
The most dangerous feature of generative AI is not that it can produce fake articles and videos at scale, but that it can produce interactive and adaptive content that is customized for individual users to maximize persuasive impact. In this context, interactive generative media can be defined as targeted promotional materials that are created or modified in real time to maximize influence objectives based on personal data about the receiving user.
This will transform “targeted influence campaigns” from hail aimed at broad demographics to heat-seeking missiles that can target individuals for optimal effect. And as described below, this new form of media likely comes in two powerful flavors: “targeted generative advertising” and “targeted conversational influence.”
Targeted generative advertising is the use of images, videos and other forms of informational content that look and feel like traditional advertisements, but are personalized in real time for individual users. These ads are created on-the-fly by generative AI systems based on influence objectives from third-party sponsors combined with personal data accessed for the specific user being targeted. The personal data may include the user’s age, gender and educational level, combined with their interests, values, aesthetic sensitivities, purchasing tendencies, political affiliations and cultural biases.
In response to the influence objectives and targeting data, the generative AI will adjust the layout, signature images and promotional messages to maximize effectiveness for that user. Everything down to the colors, fonts, and punctuation can be personalized, along with the age, race, and dress styles of all the people shown in the images. Do you see video clips of urban scenes or rural scenes? Will it be set in the fall or spring? Do you see images of sports cars or family vans? Every detail can be adjusted in real time by generative AI to maximize the subtle impact on you personally.
And because technology platforms can track user engagement, over time the system learns which tactics work best for you, discovering the hair colors and facial expressions that best capture your attention.
If this sounds like science fiction, consider this: both meta And Google recently announced plans to use generative AI in creating online ads. If these tactics get more clicks for sponsors, they become standard practice and an arms race ensues, with all major platforms competing to use generative AI to customize promotional content in the most effective way.
This brings me to focused conversation influencea generative technique in which influence objectives are conveyed interactive conversation instead of traditional documents or videos.
The conversations take place through chatbots (such as ChatGPT and Bard) or through speech-based systems powered by similar large language models (LLMs). Users will encounter this”interlocutorsmany times throughout the day as third party developers will use APIs to integrate LLMs into their websites, apps and interactive digital assistants.
For example, you can visit a website looking for the latest weather forecast, strike up a conversation with a AI agent to request the information. In the process, you may become the target of conversational influence – subtle messages woven into the dialogue with promotional targets.
As conversation computers become commonplace in our lives, the risk of conversation influence will expand massively, as paying sponsors can inject messages into the dialogue that we may not even notice. And like targeted generative ads, the message targets requested by sponsors are used in conjunction with personal information about the intended user to optimize impact.
The data may include the user’s age, gender, and education level, combined with personal interests, hobbies, values, etc., enabling real-time generative dialogue designed to best appeal to that specific person.
Why use conversational influence?
If you’ve ever worked as a salesperson, you probably know that the best way to win over a customer isn’t handing them a brochure, but engaging them in a face-to-face conversation so you can pitch them about the product, hear their concerns and adjust your arguments as necessary. It’s a cyclical process of pitching and adapting that can ‘persuade’ them into a purchase.
While this was a purely human skill in the past, generative AI can now perform these steps, but with more skill and deeper knowledge to draw from.
And while a human salesperson only has one persona, these AI agents will be digital chameleons who can take on any speaking style, from nerdy or folksy to suave or hip, and pursue any sales tactic from befriending the customer to exploiting their fear of missing out. And because these AI agents will be armed with personal details, they could name the right music artists or sports teams to help you strike up a friendly dialogue.
In addition, tech platforms can document how well previous conversations worked to convince you, and learn which tactics are most effective for you personally. Do you respond to logical appeals or emotional arguments? Are you looking for the biggest bargain or the highest quality? Are you guided by time pressure discounts or free add-ons? Platforms will soon learn pull all your strings.
Of course, the great threat to society is not the optimized ability to sell you a pair of pants. The real danger is that the same techniques will be used to foment propaganda and disinformation, coaxing you into false beliefs or extreme ideologies that you might otherwise reject. For example, an interlocutor may be instructed to convince you that a completely safe drug is a dangerous plot against society. And because AI agents have access to an internet full of information, they can select evidence in a way that would overwhelm even the most knowledgeable human being.
This creates an asymmetrical balance of power that often AI manipulation problem in which we humans are at an extreme disadvantage, talking to artificial agents who are highly skilled at addressing us, while being unable to “read” the true intentions of the entities we are talking to.
Unless regulated, targeted generative advertising and targeted conversational influence will be powerful forms of persuasion where users will be outdone by an opaque digital chameleon that offers no insight into its thought process, but is armed with extensive data about our personal preferences, wants and inclinations. and has access to unlimited information to fuel his arguments.
For these reasons, I urge regulators, policymakers and industry leaders to focus on generative AI as a new form of media that is interactive, adaptive, personalized and widely deployable. Without meaningful protection, consumers could be exposed to predatory practices ranging from subtle coercion to outright manipulation.
Louis RosenbergPhD, is an early pioneer of VR, AR and AI and the founder of Immersion Corporation (IMMR: Nasdaq), Microscribe 3D, Outland Research and Unanimous AI.
Data decision makers
Welcome to the VentureBeat community!
DataDecisionMakers is where experts, including the technical people who do data work, can share data-related insights and innovation.
To read about advanced ideas and up-to-date information, best practices and the future of data and data technology, join DataDecisionMakers.
You might even consider contributing an article yourself!
Read more from DataDecisionMakers