In the coming years, consumers will spend a significant part of their lives in virtual and augmented worlds. This migration to the metaverse could be a magical transformation, expanding what it means to be human. Or it could take a deeply oppressive turn, giving corporations unprecedented control over humanity.
I do not make this warning lightly.
I have been a champion of virtual and augmented reality for over 30 years. I started out as a researcher at Stanford, NASA and the United States Air Force, and I have founded a number of VR and AR companies. Having survived multiple hype cycles, I believe we're finally there: the metaverse will happen and will have a significant impact on society over the next five years. Unfortunately, the lack of legal protections worries me greatly.
That’s because metaverse providers will have unprecedented power to profile and influence their users. While consumers are aware that social media platforms track where they click and who their friends are, metaverse platforms (virtual and augmented) will have far deeper capabilities, monitoring where users go, what they do, who they are with, what they look at and even how long their gaze lingers. Platforms will also be able to track a user’s posture, gait, facial expressions, vocal inflections and vital signs.
Invasive monitoring is a privacy concern in itself, but the dangers are heightened when we consider that targeted advertising in the metaverse will shift from flat media to immersive experiences that will soon be indistinguishable from authentic encounters.
For these reasons, it is important that policymakers consider the extreme power that metaverse platforms could wield over society and work toward guaranteeing a set of fundamental “immersive rights.” Many safeguards are needed, but as a starting point I propose the following three basic protections:
1. The right to experience authenticity
Promotional content is everywhere in the physical and digital worlds, but most adults can easily recognize ads. This allows individuals to view the material in context — as paid messaging — and to bring a healthy skepticism when considering the information. In the metaverse, advertisers could undermine our ability to contextualize messages by subtly altering the world around us, injecting targeted promotional experiences that are indistinguishable from authentic encounters.
For example, imagine you are walking down the street in a virtual or augmented world. You see a parked car that you have never noticed before. As you pass by, you hear the owner telling a friend how much they love the car, an exchange that subtly influences your thinking, consciously or unconsciously. What you don’t realize is that the encounter was entirely promotional, placed there so that you would see the car and overhear the conversation. It was also targeted — only you saw the exchange, which was selected based on your profile and customized for maximum impact, from the color of the car to the gender, voice and clothing of the virtual spokespeople used.
While this kind of covert advertising may seem benign and only influence opinions about a new car, the same tools and techniques can be used to spread political propaganda, misinformation and outright lies. To protect consumers, immersive tactics such as virtual product placements and virtual spokespersons must be regulated.
Regulations should at least protect the basic right to authentic immersive experiences. This can be achieved by requiring that promotional artifacts and promotional people be visually and audibly distinguishable in an overt manner, so that users can perceive them in context. This would protect consumers from mistaking promotionally altered experiences for authentic ones.
2. The right to emotional privacy
We humans have evolved the ability to express emotions on our faces and in our voices, posture and gestures. It is a basic form of communication that complements verbal language. Recently, machine learning has enabled software to identify human emotions in real time from faces, voices and posture, as well as from vital signs such as respiration rate, heart rate and blood pressure. While this allows computers to engage in nonverbal communication with humans, it can easily cross the line into predatory invasions of privacy.
That’s because computers can detect emotions from signals that are imperceptible to humans. For example, a human observer cannot easily detect heart rate, respiration rate or blood pressure, meaning those signals can reveal emotions that the observed person had no intention of conveying. Computers can also detect “micro-expressions” on faces — expressions that are too brief or too subtle for people to perceive — once again revealing emotions the observed person did not intend to express. Computers can even detect emotions from subtle blood flow patterns in human faces that humans cannot see, again revealing feelings that were never meant to be shared.
At the very least, consumers should have the right not to be emotionally assessed at levels beyond human capabilities. This means that vital signs and micro-expressions should not be used. In addition, regulators should consider banning emotional analysis for promotional purposes. Personally, I don’t want to be targeted by an AI-powered conversational agent that adjusts its promotional tactics based on emotions inferred from my blood pressure and respiration rate, both of which can now be tracked by consumer-level technologies.
3. The right to behavioral privacy
In both virtual and augmented worlds, tracking of location, posture, gait and line of sight is necessary to simulate immersive experiences. While this is comprehensive information, the data is only needed in real time; there is no need to store it for extended periods. This matters because stored behavioral data can be used to build detailed behavioral profiles that document users’ daily actions in extreme granularity.
With machine learning, such data can be used to predict how individuals will act and react in a wide variety of circumstances during their daily lives. And because platforms have the ability to alter environments for persuasive purposes, predictive algorithms could be used by paying sponsors to preemptively manipulate user behavior.
For these reasons, policymakers should consider banning the storage of immersive data over time to prevent platforms from generating behavioral profiles. In addition, metaverse platforms should not be allowed to correlate emotional data with behavioral data, as this would allow them to deliver promotionally altered experiences that not only influence what users do in immersive worlds, but skillfully manipulate how they feel while doing it.
Immersive rights are necessary and urgent
The metaverse is coming. Although many of its effects will be positive, we need to protect consumers from its dangers by guaranteeing fundamental immersive rights. At a minimum, everyone should have the right to trust the authenticity of their experiences, without worrying that third parties are promotionally altering their environment without their knowledge and consent. Without such basic regulation, the metaverse may not be a safe or trustworthy place for anyone.
Whether you look forward to the metaverse or not, it could be the most significant change in how society handles information since the invention of the internet. We can’t wait for the industry to mature before putting up guardrails. Waiting too long could make the problems impossible to undo, as they will be built into the core business models of major platforms.
For those interested in a safe metaverse, let me point you to an international community effort in December 2022 called Metaverse Safety Week. I sincerely hope this becomes an annual tradition, with people around the world focused on making our immersive future safe and magical.
Louis Rosenberg, PhD, is an early pioneer of virtual and augmented reality. His work began more than 30 years ago in labs at Stanford and NASA. In 1992, he developed the first immersive augmented reality system at the Air Force Research Laboratory. In 1993, he founded the early VR company Immersion Corporation (public on Nasdaq). In 2004, he founded the early AR company Outland Research. He received his PhD from Stanford University, has been awarded more than 300 patents for VR, AR and AI technologies, and was a professor at California State University.