One step into the New Frontier exhibition at the 2015 Sundance Film Festival pulls viewers into a long-awaited future.
The exhibition space is populated with curious heads, geared up with virtual reality goggles, exploring their surroundings with childlike enthusiasm. The present has finally caught up with the wearable technologies depicted in “Back to the Future II,” which eerily takes place in 2015. We bring our decades-long fantasies, speculations and training to the actual experience, yet we are newborns in this virtual reality.
My first virtual reality experience at this year’s festival was a non-fiction piece by Montreal-based Felix & Paul Studios. Influenced by the observational documentary tradition and ethnographic film, “Herders” invites the viewer to be a guest in a community of Mongolian yak herders. As the first scene fades in from black, I find myself sitting on a rock in a vast valley, in front of a man playing a jaw harp by himself (or to me). This piece serves as a warm-up to the more immersive sequences that follow — Felix & Paul Studios calls this period of adjustment “landing.”
In the next scene, I am on a different plateau, surrounded by yaks and their herders. A rustling sound draws my attention to the right, where I see a yak rubbing its hulking body against a thick-leaved bush. I keep staring at the anxious gestures of the itchy beast until the footsteps of a horse steal my gaze away. I turn my head 180 degrees and see one of the herders on horseback, heading home as the scene fades to black.
Next, I open my virtual eyes inside a yurt. A family sits around a stove, its chimney piercing the oculus of the yurt, and eats with distinct slurping sounds. I feel like a shy guest who cannot speak the language or participate in the ritual of eating, and I make myself as invisible as possible. The family comes to my aid by not making any eye contact. In the next scene, I find myself in the path of a horse herd. As the swarm runs past me, I feel a rush of adrenaline. I am surprised by how little control my mind has over my body. Finally, a sound bridge carries me to the last scene: a meditative tambourine performance that completes the circle.
Live-action virtual reality experiences such as “Herders” are produced with 360-degree 3D (stereoscopic) video technologies. They can be viewed in virtual reality headsets such as the Samsung Gear VR or Oculus Rift.
There is no standard technology or off-the-shelf gear for capturing 360-degree 3D video yet. The tools are being broken and remade as we speak, but the basic principles are right in front of us.
Capturing a full 360-degree field of view requires a camera with multiple sensors or a rig of multiple cameras. Such a layout leaves limited or no room for the production crew, since there is no “behind the camera” area to hide those who do not belong to the recorded realm. As a result, the existing dynamics between the actor or subject and the crew shift. While actors are left alone to perform in 360-degree fictional worlds, the absence of the filmmaker might make it easier to capture the vérité moments found in observational documentaries.
The footage captured from the various angles is stitched together and rendered either algorithmically or in post-production. To build a sense of depth, each angle is also recorded stereoscopically, or stereoscopy is computed algorithmically. Perhaps the most crucial component of 360-degree 3D video is the accompanying 360-degree binaural audio, which is usually recorded in the field.
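To give a sense of what stitching produces: the footage from every lens typically ends up projected onto a single spherical frame stored in an equirectangular layout, where horizontal position encodes the viewing angle around the viewer and vertical position encodes the angle above or below the horizon. A minimal sketch of that projection math (the function name and 4K frame size are illustrative, not a description of Felix & Paul’s actual pipeline):

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a 3D viewing direction to pixel coordinates in an
    equirectangular frame, the flat layout stitched 360 video
    is commonly stored in."""
    yaw = math.atan2(x, z)  # horizontal angle, -pi..pi
    pitch = math.asin(y / math.sqrt(x * x + y * y + z * z))  # -pi/2..pi/2
    u = (yaw / (2 * math.pi) + 0.5) * width   # left edge = behind, center = ahead
    v = (0.5 - pitch / math.pi) * height      # top = straight up
    return u, v

# Looking straight ahead (+z) lands at the center of the frame.
print(direction_to_equirect(0, 0, 1, 3840, 1920))  # -> (1920.0, 960.0)
```

The same mapping run in reverse is what a headset does sixty-plus times a second: for each pixel on its screens, it asks which direction that pixel faces and samples the stitched frame there.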
While the piece is experienced in a virtual reality headset, the 360-degree audio follows the position of the head, always matching the direction of a sound with the position of its source relative to the viewer. The most urgent shortcoming of the technology is the lack of positional tracking. Currently all live-action virtual reality pieces are experienced statically: it is possible to rotate 360 degrees around a single point, but if you step forward, you do not move into the film. After positional tracking come the usual suspects of technology: higher frame rates, higher resolution and higher refresh rates.
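The core of that head-following audio is simple: subtract the head’s rotation from each sound source’s direction, so a yak that stays to the north keeps sounding northward no matter where you look. A toy sketch of the idea (real binaural renderers use head-related transfer functions; this simplified constant-power pan, with names of my own invention, only illustrates the direction bookkeeping):

```python
import math

def relative_azimuth(source_azimuth_deg, head_yaw_deg):
    """Angle of a sound source relative to where the listener is
    facing, wrapped to the range -180..180 degrees."""
    return (source_azimuth_deg - head_yaw_deg + 180) % 360 - 180

def stereo_gains(rel_deg):
    """Constant-power pan: full left at -90 degrees, full right at +90."""
    pan = max(-1.0, min(1.0, rel_deg / 90.0))
    angle = (pan + 1) * math.pi / 4  # 0..pi/2
    return math.cos(angle), math.sin(angle)  # (left gain, right gain)

# A source due east (90 deg) heard while facing north (0 deg)
# sits squarely to the listener's right.
left, right = stereo_gains(relative_azimuth(90, 0))
```

Turn the head 90 degrees to the east and `relative_azimuth` drops to zero, so the same source slides back to the center of the mix, which is exactly the effect of sounds staying pinned to the scene rather than to the headphones.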
Aside from its technical specifications, what is live-action virtual reality really about? According to Felix & Paul Studios, live action in virtual reality cultivates the feeling of presence, which is consumed not cognitively but sensually. The medium attempts to immerse the viewer within a scene, making him or her a part of the virtual environment, a character in the (non)narrative. The scene is set up with the spectator in mind, and this design incorporates both visual and aural space.
Felix & Paul Studios’ contemplative pieces aim to erase the sense of visual manipulation — the main building block of cinema known as “the cut” — and leave the audience with long takes of life, similar to the actualités of the Lumière brothers’ early short films. However, successful virtual reality works employ a new editing tool: manipulating the viewer’s gaze through positional audio.
As a first-timer in virtual reality, I certainly experienced the intimate presence of the Mongolian family during their meal and felt the virtual current created by the herd passing by me. However, it is important to bear in mind something that Mark Bolas, a professor at USC whose thesis project 25 years ago set out to define virtual reality, said at a panel during this year’s festival: “Everything we are doing now is a mistake.”
As newborns in virtual reality, it might be easy to feel mesmerized by everything about this world. We have yet to see its true potential when the novelty wears off and the medium establishes its language.
Beyza Boyacioglu is a documentary filmmaker and a research assistant at MIT’s Open Documentary Lab.