There’s no doubt that the emerging technologies falling under the broad umbrella of the metaverse, such as emotion detection, affect recognition, neurotechnologies, and extended reality (XR), pose significant challenges to privacy. Each of these categories intrudes ever deeper into our personal sphere, potentially eroding the sanctity of our inner selves and personal lives.

These activities, when converted into digital data, raise both traditional and novel data protection issues, issues that existing personal data laws, which typically hinge on the identifiability of individuals, are ill-equipped to address. And while traditional legal systems usually consider privacy from an individual perspective, these advanced technologies pose threats not just to individuals but to groups and society at large.

Extended reality technologies illustrate how these privacy concerns straddle both the personal and collective domains. By integrating virtual and real-world elements, XR technologies depend on the collection and use of biometric identifiers, real-time location tracking, and constant audio and video recording. They generate detailed live maps of spaces and places and monitor ambient sounds.

From an individual’s viewpoint, XR devices collect details such as voice and vocal tone, iris and gaze movements, gait and other bodily movements, location information, device details, and more. This raises serious questions about the privacy and security of the collected data.

These practices also ignite fears about surveillance, as these technologies can be used to track and monitor individuals. Beyond the person using them, say by donning an XR headset, they pose significant risks to nonusers and others who interact with that person in both the virtual and physical realms. Always-on recording devices and cameras, for instance, are likely to capture the images, movements, voices, conversations, and other sounds of unwitting bystanders.

Coupled with sophisticated biometric identification systems, such as facial or voice recognition, these technologies could locate and identify individuals without their knowledge or consent, leaving them no chance to opt out.

Currently, there are limited laws or regulations addressing these scenarios. As the Electronic Frontier Foundation warns, we might find ourselves in a “global panopticon society of constant surveillance in public or semi-public spaces.” XR technologies underscore the contextual and interpersonal nature of our privacy issues and the need for a collective approach in a postdigital world.

However, the issues related to metaversal technologies like XR go beyond what we traditionally perceive as privacy challenges. These technologies, by their very nature, are intended to modify or augment reality, making them potent tools for manipulation and discrimination. Depending on the reality individuals are exposed to, they could be influenced, manipulated, or coerced into decisions, actions, or activities contrary to their best interests, often without their awareness.

This phenomenon already exists in the digital media landscape, as in algorithmic systems for personalization and behavioral targeting, but XR and similar technologies could intensify the so-called filter bubble effect. Furthermore, individuals sharing the same physical space might perceive different versions of “reality,” depending on their gender, race, socioeconomic status, or other sensitive attributes (or, potentially, on their ability to afford the latest or best XR technologies).

In this manner, such technologies pose a direct threat to personal autonomy, human dignity, choice, consent, and self-determination: values that underpin privacy concerns and are fundamental to democratic societies.