GOING DEEP SPATIAL: Our Technical Definition of Experiential Technology

Why define experiential technology?

Over much of the past decade, extended reality technologies, grouped under the term “XR,” have stood alone as a category enabling pervasive, display-based creation of virtual 3D experiences as they entered mass production, most visibly in the form of AR.

As you are intimately aware, XR encompasses AR, which overlays virtualized content on the real world via either a flatscreen device or a head-mounted display, and VR, which creates a fully immersive view and the experience of an alternate reality. Our event was founded on an enthusiasm for these technologies, and our commitment to them remains steadfast.

With that said…

The recent wave of generative AI technologies washing over the world, transforming our online experiences in new and unexpected ways, has caused us as organizers to reflect on the true nature of our long-standing passion for, and intention in, working in the area of XR.

What we found is that there is a broader category of technologies, including but not limited to XR and AI, that more properly captures the technological focus of what we wish to work with as organizers, hackers, and scientists - and we believe the future lies in this area.

Experiential technology is an open-ended family of technologies which incorporate: 

(a) Sensory input/output interfaces. The system’s input interface affords the user expressiveness by using sensors to capture signals. The system’s output interface engages the body directly in a way that allows “experiences” to be simulated. These may resemble other physical realities, or may constitute novel or physically impossible experiences.

(b) Content engines. These encompass computational representations of some reality or realities which are used to drive the sensory interfaces to create specific experiences.
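The two-part definition above can be sketched as a minimal architecture. Everything here - the class names, the methods, and the toy text-based example - is illustrative and hypothetical, not drawn from any real framework; it only shows how parts (a) and (b) fit together in a sense-simulate-render loop:

```python
from abc import ABC, abstractmethod


class SensoryInterface(ABC):
    """(a) Captures expressive user input and renders output to the senses."""

    @abstractmethod
    def capture(self) -> dict:
        """Read sensor signals from the user."""

    @abstractmethod
    def render(self, frame: str) -> None:
        """Deliver an output 'experience' to the user."""


class ContentEngine(ABC):
    """(b) A computational representation of some reality that drives the interface."""

    @abstractmethod
    def generate(self, user_signal: dict) -> str:
        """Turn captured signals into the next piece of experience."""


class TextDisplay(SensoryInterface):
    """A plain text console stands in (hypothetically) for a display or headset."""

    def capture(self) -> dict:
        return {"text": "hello"}  # stub: a real system would read actual sensors

    def render(self, frame: str) -> None:
        print(frame)


class EchoEngine(ContentEngine):
    """A trivial 'reality': echoes the user's words back, reversed."""

    def generate(self, user_signal: dict) -> str:
        return user_signal["text"][::-1]


def experience_step(iface: SensoryInterface, engine: ContentEngine) -> str:
    """One loop iteration: sense -> simulate -> render."""
    signal = iface.capture()
    frame = engine.generate(signal)
    iface.render(frame)
    return frame
```

Under this sketch, swapping in a headset for `TextDisplay` or a generative model for `EchoEngine` changes the components but not the shape of the system - which is the point of the definition.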

Most would agree that head-mounted displays such as VR/AR headsets count as experiential technology: the headset itself is a rich sensory interface, and the rendering engine driving it is a content engine.

Now let's consider variants of generative AI as experiential technologies. First, note that plain old digital displays, like the ones we find on our phones and on our walls, are powerful sensory interfaces - as evidenced by the strong emotional impact of movies.

Images and videos created by generative AI are novel in both quality and quantity - making such a system a genuine content engine. In light of these observations, the combination of digital displays and AI-based image generators constitutes an experiential technology. 

If we consider language-based interfaces, words themselves serve as the sensory interface. Words have the power to evoke all of the senses in the imagination - in some ways inferior to digital displays, and in other ways far beyond them. The LLM functions as the content engine, and the combination of text-based input and output with an interactive LLM therefore fits the definition of an experiential technology.

Those were two deliberately simple examples. What about processing chips supporting IoT applications? Or physiological sensors?

The same analysis arguably applies there too: a processing chip can serve as the content engine of a larger sensory system, and a physiological sensor can serve as its expressive input interface. We have now examined five classes of systems, each of which satisfies the definition of experiential technology. Therefore, VR/AR headsets, digital displays with generative AI, word-based interfaces with LLMs, chips supporting IoT applications, and physiological sensors are all valid examples of experiential technologies.




MIT Reality Hack: Promoting Inclusive Access to Tools for Creation and Making the World a Better Place Through Experiential Technologies