As the whole world knows, the field of artificial intelligence (AI) is advancing at lightning speed. Businesses large and small are racing to harness the power of generative AI in new and compelling ways.
I strongly believe in the value of AI for increasing human productivity and solving human problems, but I am also deeply concerned about the unintended consequences. As I told the San Francisco Examiner last week, I signed the controversial AI "pause letter" along with thousands of other researchers to draw attention to the risks associated with large-scale generative AI and to help the public understand that the risks are currently evolving faster than efforts to contain them.
It has been less than two weeks since that letter went public, and Meta has already announced a planned use of generative AI that I find particularly concerning. Before getting into this new risk, let me say that I am a fan of the AI work done at Meta and am impressed with their progress on many fronts.
For example, this week Meta announced a new generative AI called the Segment Anything Model (SAM), which I think is very useful and important. It can process any image or video frame in near real time and identify each of the distinct objects in the scene. We take this capability for granted because the human brain is remarkably good at segmenting what we see, but with SAM, computer applications can now perform this function in real time.
Why is SAM important? As a researcher who started working on "mixed reality" systems back in 1991, before that phrase was even coined, I can tell you that the ability to identify objects in a field of view in real time is a genuine milestone. It will enable magical user interfaces in augmented/mixed reality environments that were never feasible before.
For example, you could simply look at a real object in your field of view, blink or nod or make some other deliberate gesture, and immediately receive information about that object, or communicate with it remotely if it is electronically enabled. Such gaze-based interactions have been a goal of mixed reality systems for decades, and this new generative AI technology can make them work even when there are hundreds of objects in your field of view, and even when many of them are partially obscured. To me, this is a valuable and important use of generative AI.
Potentially dangerous: AI-generated ads
On the other hand, Meta CTO Andrew Bosworth said last week that the company plans to use generative AI technologies to create targeted ads that are customized for specific audiences. I know this sounds like a convenient and potentially harmless use of generative AI, but I must point out why this is a dangerous direction.
Generative tools are now so powerful that if companies are allowed to use them to customize ad imagery for targeted "audiences," we can expect those audiences to be narrowed down to individual users. In other words, advertisers will be able to generate custom ads (images or videos) produced on the fly by AI systems to optimize their effectiveness for you personally.
As an "audience of one," you will soon find that targeted ads are tailored to the data collected about you over time. After all, the generative AI used to produce the ads could have access to which colors and layouts are most effective at grabbing your attention, and which types of human faces you find most trustworthy and attractive.
The AI may also have data indicating which promotional tactics have worked on you in the past. With the scalable power of generative AI, advertisers can deploy custom images and videos that press your buttons with extreme precision. In addition, we must assume that similar techniques will be used by malicious parties to spread propaganda or misinformation.
Convincing impact on individual targets
Even more troubling, researchers have already discovered techniques that can make images and videos highly persuasive to individual users. For example, studies have shown that blending aspects of a user's own facial features into computer-generated faces can make that user feel more favorably disposed toward the content being conveyed.
For example, research at Stanford University showed that when a user's own facial features are blended into the face of a politician, individuals are 20% more likely to vote for the candidate as a result of the image manipulation. Other research suggests that virtual humans that subtly imitate a user's own expressions or gestures can also be more influential.
Unless regulated by policymakers, we can expect generative AI ads to be deployed using a variety of techniques that maximize their persuasive impact on individual targets.
As I said at the top, I firmly believe that AI technologies, including generative AI tools and techniques, will bring remarkable benefits that increase human productivity and solve human problems. Yet we must put up guardrails that prevent these technologies from being used in deceptive, coercive, or manipulative ways that undermine human agency.
Louis Rosenberg is a pioneering researcher in VR, AR and AI, and the founder of Immersion Corporation, Microscribe 3D, Outland Research and Unanimous AI.