Digital Storytelling: Creating Stories For The Metaverse

So far we’ve looked at three events I consider ancestors of our modern AR and VR efforts, and at some of the technologies, both hardware and software, that we can use to create AR and VR experiences.

Using beacons is easy. Telling stories with beacons, or with any sort of AR and VR, gets more complicated. I’ve chosen to play with interactive storytelling as a way to tell stories in the metaverse. Interactive storytelling tools let me build both the main story and its branches… how we translate those stories into the metaverse is the real challenge.

The more I think about it, the more I come back to the tools of the storyteller and, specifically, the tools of the LARP storyteller. Whether it’s Nordic LARP or a modern set of rules like By Night Studios’ Vampire: The Masquerade and Werewolf: The Apocalypse, these systems provide a complete set of mechanics you can use as-is or adapt to a virtual environment… We’ll explore how the stories change as we move further into the VR field.

Example of costuming for a Vampire The Masquerade LARP

Working with beacons presents a straightforward challenge: how do we segment the story we want to tell while, at the same time, preserving the reader/participant’s interest through a potentially long sequence of beacons?

It’s also important to remember that beacons only provide an access point; in and of themselves they don’t deliver content to the user. This prompts the question: why use them at all?

Beacons are the perfect advertisement and storytelling system. They turn things we wouldn’t normally associate with the web or the Internet into entry points for interacting with online content… as simple as a vending machine transaction or as complex as your imagination can make them. We can also have as many beacons as we want in a building or group of buildings.
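As a rough sketch of the entry-point idea: a beacon broadcasts only an identifier, and a companion app resolves that identifier to a story fragment hosted online. The beacon IDs and story text below are hypothetical placeholders, not part of any real deployment.

```javascript
// Minimal sketch: resolving beacon identifiers to story fragments.
// Beacons broadcast only an ID; the content lives online.
// All IDs and story text here are hypothetical.
const storyFragments = new Map([
  ["beacon-lobby",  { title: "The Lobby",     text: "A note is taped to the vending machine…" }],
  ["beacon-stairs", { title: "The Stairwell", text: "Chalk marks climb the wall…" }],
]);

function resolveBeacon(beaconId) {
  // Unknown beacons fall back to a generic prompt rather than silence,
  // so stray hardware never breaks the experience.
  return storyFragments.get(beaconId) ?? { title: "Unmarked", text: "Nothing here… yet." };
}
```

Keeping the content online rather than on the beacon also means a long sequence of beacons can be re-scripted without touching the hardware.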

Telling stories in AR is a little more complicated. Wherever we choose to tell the story, we seed the space with AR objects that help us tell it. That introduces additional complexity: unless we control where a story is told, we have no way to know which objects, if any, the actor/reader will interact with, or in what order they will do so. Each element in our story should stand on its own while bringing other important elements into the reader’s view.
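One way to sketch this order-independence: each element carries its own self-contained text plus hints pointing at other elements, and we only surface hints the reader hasn’t already followed. The element names below are hypothetical.

```javascript
// Sketch of order-independent story elements: since we can't predict
// which AR object a reader finds first, each one stands alone and
// points at the others. All element names are hypothetical.
const elements = {
  fountain: { text: "Coins glitter under the water.",  hints: ["statue", "bench"] },
  statue:   { text: "The statue's hand points east.",  hints: ["fountain"] },
  bench:    { text: "Initials carved into the wood.",  hints: ["statue"] },
};

function visit(id, seen) {
  const el = elements[id];
  const next = new Set(seen).add(id);
  // Surface only hints the reader hasn't already followed.
  const unseenHints = el.hints.filter((h) => !next.has(h));
  return { text: el.text, unseenHints, seen: next };
}
```

Whatever order the reader wanders in, every visit yields a complete beat plus pointers deeper into the story.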

Microsoft provides a good set of guidelines on how to create shared holographic experiences that make a good starting point.

We also need to be aware of how the devices themselves will affect the way people react, and the ways and places in which we tell our stories.

Full VR is the most complex medium for telling stories, but it’s also the most rewarding. We can’t take advantage of existing environments as we can with AR stories; instead we have to build the rooms, buildings, and elements our story needs. We also have to create bots and other interactions for the users.

Single versus Multi User Stories

AR stories also need some consideration of individual versus community (multiplayer) styles.

Creating single-reader stories is as simple as placing content in the environment and providing ways for other devices to access the anchored elements and stories. Because we’re placing stories in physical places, we also have to account for how each place is used: a park is a different storytelling venue from a coffee shop or a bar.

When (or if) we decide to build a multi-user experience, we need to start considering how people will appear in each other’s experiences, whether they will have the option of doing so, and how interactions will change individual stories. We also need to consider whether we’ll need a server to handle player interactions, and how those interactions will affect each individual story.
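A minimal sketch of what that server-side piece might hold, assuming an event-log design (the event shapes and player IDs are hypothetical): each interaction is appended to a shared log, and every client asks for the events it hasn’t yet folded into its own copy of the story.

```javascript
// Sketch of server-side shared state for a multi-user story.
// Interactions are appended to an ordered log; clients replay events
// they haven't seen. Event shapes and IDs are hypothetical.
class SharedStory {
  constructor() {
    this.events = []; // ordered log of player interactions
  }
  record(playerId, action) {
    this.events.push({ seq: this.events.length, playerId, action });
  }
  // A client asks: what happened since I last looked?
  eventsSince(lastSeq) {
    return this.events.filter((e) => e.seq > lastSeq);
  }
}
```

The appeal of a log like this is that one player’s actions can reshape another player’s individual story without the two clients ever talking directly.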

How does the narrative change when you’re fully immersed in the story?

Writing a story for print, or even for web publication, is one thing. Putting ourselves in the story, by creating an avatar that represents the reader and interacts with the content of the world, is something else entirely. We need to consider what consequences the body will have in the virtual world.

In a digital environment we can engage more than just the brain and the hands. Depending on how the environment is configured, we may have a fully technologically mediated exercise in presence.

The International Society of Presence Research defined presence as:

… [A] psychological state or subjective perception in which even though part or all of an individual’s current experience is generated by and/or filtered through human-made technology, part or all of the individual’s perception fails to accurately acknowledge the role of the technology in the experience. Except in the most extreme cases, the individual can indicate correctly that s/he is using the technology, but at some level and to some degree, her/his perceptions overlook that knowledge and objects, events, entities, and environments are perceived as if the technology was not involved in the experience. Experience is defined as a person’s observation of and/or interaction with objects, entities, and/or events in her/his environment; perception, the result of perceiving, is defined as a meaningful interpretation of experience.

International Society for Presence Research. (2000). The Concept of Presence: Explication Statement. Retrieved April 30, 2017 from https://ispr.info/

How much do we want to push the presence that we create in our stories? How much do we want to “fool” our users into thinking the objects and interactions we provide are real, or at least, that they are real enough to warrant suspension of disbelief for as long as they are engaged with our content?

How do we represent the character’s physical interactions? It is tempting to build a full-body avatar of the user and provide full-body tracking that translates the player’s movements into the avatar’s actions and interactions with the content we create for them.

We also need to be mindful of how closely the avatar matches the user. Unless we can create avatars that directly represent the player, or provide generic enough models of the user’s presence in the world, we may be better off providing less presence, or showing only parts of the body.


How far do we want to push the technology (or technologies)? It is becoming possible to do full-body tracking and provide haptic feedback in virtual environments. Before we jump too deep, let’s define what we mean by haptic feedback, haptics, or kinesthetic communication:

Haptic or kinesthetic communication recreates the sense of touch by applying forces, vibrations, or motions to the user. This mechanical stimulation can be used to assist in the creation of virtual objects in a computer simulation, to control such virtual objects, and to enhance the remote control of machines and devices (telerobotics).

Wikipedia

Technologies like Dexta Robotics’ Dexmo haptic exoskeleton and Cloud Gate Studio’s custom full-body tracking system for VR experiences would let us use more than our hands and heads to build experiences, moving the problem back to creating a custom avatar for the user.

Scale considerations

Room-scale (sometimes written without the dash) is a design paradigm for virtual reality (VR) experiences which allows users to freely walk around a play area, with their real-life motion reflected in the VR environment. Using 360 degree tracking equipment such as infrared sensors, the VR system monitors the user’s movement in all directions, and translates this into the virtual world in real-time. This allows the player to perform tasks, such as walking across a room and picking up a key from a table, using natural movements. In contrast, a stationary VR experience might have the player navigate across the room using a joystick or other input device.

Wikipedia

One of the decisions to make is the scale at which we want to tell our story. Until fairly recently we could only create virtual environments that gave us synthetic spaces we moved through using keyboards, game controllers, or VR/AR-specific controllers and devices.

Newer technologies allow for larger room-scale setups, where the user places sensors at different locations in the space; these provide tracking and let players move their physical bodies the same way their bodies would move in the virtual world, increasing the level of presence we can provide.

The downside is that we need a physical space large enough to match the space we give our users in the virtual world. Still, it may present interesting avenues for storytelling.
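The core of a room-scale setup can be sketched as a coordinate translation: the tracking system reports a physical position, and we map it into world coordinates with an origin offset and a scale factor. The numbers and parameter names here are hypothetical; real systems report positions in metres from their calibrated origin.

```javascript
// Sketch: translating a tracked physical position into virtual-world
// coordinates. Origin and scale are hypothetical calibration values;
// a real tracking system supplies physical coordinates in metres.
function toVirtual(physical, origin = { x: 0, z: 0 }, scale = 1) {
  return {
    x: (physical.x - origin.x) * scale,
    z: (physical.z - origin.z) * scale,
  };
}
```

A scale factor of 1 gives true one-to-one movement (the strongest presence); a larger factor stretches a small room across a bigger virtual space at some cost to the illusion.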

Navigation and travel through the Metaverse

An issue related to scale is navigation. Whether the world is room-scale or fully synthetic, we need to consider how our users will move around it. The world may be no larger than a house, or it may be a large expanse of terrain where the user must run, find a means of transportation (a car or a horse), or use a teleportation device.

If the space is small, or if we don’t have large physical spaces available, then keyboard or controller navigation is the only option, particularly when working with tethered devices like the Rift.
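For worlds larger than the play area, teleportation is a common compromise, and its core logic is small: accept a requested destination but clamp it to the bounds of the world so players can never land outside the environment. The bounds and coordinates below are hypothetical.

```javascript
// Sketch of teleport navigation for a world larger than the physical
// play area: the requested destination is clamped to the world bounds.
// Bounds and coordinates are hypothetical.
function teleport(target, bounds) {
  const clamp = (v, lo, hi) => Math.min(Math.max(v, lo), hi);
  return {
    x: clamp(target.x, bounds.minX, bounds.maxX),
    z: clamp(target.z, bounds.minZ, bounds.maxZ),
  };
}
```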

Bots and other clues for the virtual tourist

Creating interaction in augmented or virtual spaces is another challenge worth considering. Do we want all interaction to be between players and objects, or do we want to provide avatars and bots with canned interactions ready to respond?

I’m not implying that using objects to drive the story forward is necessarily bad, but it does make the world a lonely place.

An idea worth exploring is whether we can use speech recognition to capture keywords and have the bot react based on a predetermined set of trigger words, producing responses that move the story forward. Would something like the Web Speech API work in situations like this, where the content is not served through a browser? Are there similar ways to work with this kind of technology in VR/AR space?
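Whatever recognizer supplies the transcript (the Web Speech API in a browser, or some native engine on a headset — the open question above stands), the keyword-matching step itself can be an ordinary function. The trigger words and canned replies below are hypothetical.

```javascript
// Sketch: once a speech recognizer hands us a transcript, keyword
// matching can drive a bot's canned responses. Trigger words and
// replies are hypothetical.
const triggers = [
  { keyword: "key",     reply: "The innkeeper eyes you. 'Lost something, have we?'" },
  { keyword: "vampire", reply: "The room goes quiet." },
];

function botRespond(transcript) {
  const text = transcript.toLowerCase();
  const hit = triggers.find((t) => text.includes(t.keyword));
  return hit ? hit.reply : null; // null: the bot stays silent and the story doesn't stall
}
```

Returning null for unmatched speech matters: a bot that tries to answer everything breaks presence faster than one that sometimes just listens.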

Keeping the world coherent

As we discussed when we talked about Habitat, we need to keep the world coherent with itself. We must keep things consistent with the story we tell and with the world we choose to tell it in.

This also means making sure that what we do throughout our story remains consistent: whatever mechanics we choose to implement, we apply them the same way every time we use them.
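One way to enforce that consistency is to route every test through a single shared resolution function, loosely in the spirit of the LARP rule sets mentioned earlier. The trait values and outcome labels below are hypothetical; a real game might add a rock-paper-scissors throw or a card draw to the comparison.

```javascript
// Sketch: one shared resolution mechanic used by every scene, so a
// "test" works identically everywhere in the story. Trait values and
// outcome labels are hypothetical; the comparison is kept
// deterministic here for clarity.
function resolveTest(actorTrait, difficulty) {
  if (actorTrait > difficulty) return "success";
  if (actorTrait === difficulty) return "tie";
  return "failure";
}
```

If every door, persuasion attempt, and combat exchange calls the same function, players learn the mechanic once and it never contradicts itself.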

Avoiding Sensory Overload

It’s easy to go overboard, providing everything for users and building an enormous world for them to interact with. We can now create large room-scale VR experiences, placing sensors at multiple locations within a space and letting the user move around the simulated space.

AR devices do something similar by mixing the real world with virtual objects, so we need to be careful about the number and function of virtual objects and their interaction with the real world.