The trailer for Zero Point, the very first movie produced specifically for the Oculus Rift, was revealed last week to little fanfare. Unfortunately, another AAA studio went belly up around that same time, which ate up everyone’s attention.
It also doesn’t help that, alas, the trailer above doesn’t quite do the job. There’s another version that’s a bit more interactive, giving you a 180-degree viewing angle, but even then, it hardly suffices.
Two weeks ago, I got the chance to view a portion of Zero Point in the manner it was intended for: via Oculus’ VR headset. You know how just a few scant seconds of a video game with the thing will make you go “Oh my God, the future of gaming is NOW”? I felt exactly the same way about movies. And theater as well.
The demo consisted of two portions. The first was the camera operator walking through the crowded expo hall of E3 last year, and it was neat, I guess. I’d later learn that it was shot using an early version of their setup, hence why there wasn’t a perfect 360-degree view of the action. More like 350, with black bars to the left and right, where the two viewing angles meet.
The primary issue, though, was that the cameraperson was walking, so I felt the same bit of disorientation I get when playing a game via the Rift (along with slight pangs of anxiety that I might become motion sick, which didn’t happen, thank God). But yeah, I still find movement with the VR headset to be an awkward thing.
The second portion, meanwhile, had no camera movement. Because of this, along with the more advanced setup that was employed, I had a full 360-degree view of the action. And it was simply amazing. This time I was in the middle of a Marine training exercise in Afghanistan, involving a bomb that needed to be defused, I believe.
The atmosphere was tense and confusing; soldiers and civilians all over the place, all yelling and screaming at each other. It was a challenge to comprehend who was coming and going, let alone figure out what I was witnessing. It honestly felt like I was transported into another world; this was seriously the stuff of Star Trek.
Granted, I couldn’t walk around like you can in the holodeck. So I guess it’s more like one of those sci-fi movies/shows in which you’re just a silent observer to some calamity that’s underway. Still, the effect was mesmerizing, and the mind reels just thinking about the possibilities.
Afterward I spoke with Danfung Dennis, who is both the producer/director of Zero Point and the founder/CEO of Condition One, the start-up behind the tech that makes it all happen. While Dennis wouldn’t go into exact details (it’s still a secret), I was told that a multitude of cameras are being employed, all shooting in 3D and at a minimum of 60 fps.
All those streams are then stitched together in a video engine that’s actually based on a game engine, which allows for the display of full-resolution video in full 360 degrees.
A bit about Dennis: he’s been covering the wars in Iraq and Afghanistan since 2006, initially as a photojournalist. In 2011 he produced a feature-length documentary on the war in the latter region called Hell and Back Again, which was nominated for an Academy Award.
Dennis kept explaining to me his desire to convey “experiences”, which he simply wasn’t able to do via traditional still and moving pictures. And that’s where our conversation starts…
What exactly do you mean when you say the word “experiences”?
I want to bring people into the story and let them be there, to witness things firsthand. Instead of just shooting a frame and presenting it, I want to put people inside of that frame, completely. So I had been thinking of immersion for a while. We started developing for mobile and created these interactive videos, where you’d move the device and it would change the viewing angle, but that was just a stepping stone.
What’s the biggest challenge, technically speaking? I’d imagine filming from a fixed position is much easier than when cameras are moving.
Filming from a fixed position is much easier than while moving. Whenever you move the camera, all manner of technical considerations come up. Factors that can lead to motion sickness, or imperfect seams. So we’re trying to find a stabilization setup that will allow us to shoot very smooth, stable shots while moving through space.
I must admit, when the E3 segment kicked in and I realized that things were moving, I began to worry that I’d get dizzy. But beyond that, I also know many people who are excited by the Rift yet worried about motion sickness.
Any type of movement can cause motion sickness. It’s a huge challenge and we’re really focused on making it as comfortable an experience as possible. But as displays get better, and as we understand how to shoot and produce these experiences, I believe solutions will come our way.
Though I do think the biggest hurdle, as it pertains to widespread consumer adoption, is the processing power that’s going to be required. We’re displaying high-res, stereo, 60 fps video; those are the minimums for a comfortable experience. You’re going to need so much more for a truly convincing experience.
You believe it’s a technical issue? I figured it’s something that people are simply not yet used to. Like how, during the early days of movies, it was rumored that a theater full of people freaked out at a shot of a train coming towards them, because they didn’t know how to process what they were seeing.
I think technical comes first. And that’s going to be a huge barrier that we’ll be focused upon in the very beginning. But I think even larger than that is this new visual language that filmmakers have yet to develop and perfect.
Everything that’s applied to filmmaking in the past no longer applies to VR. We’re trying to develop a new visual language, invent the syntax and grammar of how you can tell a story effectively in this new medium.
Sticking with production for just a bit: moving the camera can cause problems, understood. When it comes to keeping the camera still, are there any other advantages? Maybe creatively speaking?
There’s something about movement that’s highly compelling in the Rift. Your peripheral vision is engaged when you’re moving, and that really gives the sensation that you’re actually somewhere else. That things are happening all around, and if you’re not paying attention, you’ll miss something important.
So I believe immersion is increased when you start moving the camera. We really want to get to a point at which we have total freedom in how we’re shooting. We’ll start with more static scenes; those are the easier types of shots to capture. But to really create a dynamic environment that you believe you’re present in, you really need motion.
Well, the reason I ask is because I enjoyed the stationary camera segment the most. It was like being in the middle of a play, observing actors on a stage all around, giving uninterrupted performances. Whereas in film, it’s very staccato; a movie is ultimately driven by the director, and the editor, but not so much here? Actors seem to take greater precedence, and we are talking about a new type of movie here…
Have you seen Punchdrunk’s Sleep No More? I think it’s the same leap. Traditionally when you went to the theater, you’d sit and watch a play upon a stage. Whereas, with Sleep No More, you’re on the stage and the whole story is happening around you; you’re a character, and they’re interacting with you.
And I think it’s the same leap here. If you want the traditional silver screen, where you sit in the audience, you can still get that. But do you want to step inside the silver screen? Be inside the movie?
What do traditional filmmakers think of what you’re doing?
There’s no question that anyone who tries the Rift knows that this is the future. They get it immediately. But the more traditional filmmakers, I think they… how would I word this… they may feel slightly threatened. In that all the rules that they know of how to tell a story are gone.
We don’t know what the best way to use this technology is, at least not yet. We’re actively learning, researching, and experimenting. But there’s no frame anymore. And as a cinematographer, that’s the essence of what you’re doing. Framing information for the viewer.
Now, instead, you’re delivering all this raw information to allow the viewer to decide what the framing is, what they’re looking at. And the same thing goes for editing. The cut… the most fundamental, basic technique for telling a story… it’s gone. It’s too abrupt. We’re finding that if you just cut from one scene to another, people will all of a sudden become disoriented and ask “Where am I now?”
Is that why, in the Afghanistan segment, there were these slow transitions? Instead of, as you say, hard cuts?
Yes. But we don’t think that’s a solution either. It’s kind of a stopgap measure; the cuts were too hard, so we’re trying some blending. We actually think we’re going to have to learn a lot from gaming, how narrative is conveyed in games, and possibly merge the two mediums.
These new, immersive experiences may have to take something from both, to then build something that’s entirely new, from the ground up for VR.
So some directors might be a bit wary… Have you spoken to any who are interested in using the Rift as well?
Absolutely. There are quite a few who have expressed their excitement at the possibilities. They want to dive in and see how they can use [the Rift] to tell a story and share experiences. But they also understand that things will be different.
If you’re set on making a traditional film, this isn’t for you, in the same way that a radio broadcaster wasn’t going to move into television. This is a new medium; it’s not going to replace an existing one, but simply be an entirely new form, with new rules.
And it’s going to take a new generation of storytellers, with backgrounds in film and in gaming, who will be able to take all these traditional notions of story, of experience, and invent something totally new.
Back to traditional methods of storytelling; it would appear that editors might be the ones who are the most skeptical…
I think you’re going to have a spectrum of those who are interested in new experiences and in building them. We’ve shown this to several editors, and some have gone “Well, how are you going to edit the story?” Then there are others we’ve shown it to who’ve gone “Wow, HOW DO YOU EDIT A STORY!?” and they really want to get into it and see what they can do with it.
We’re actually talking to someone in Hollywood who edits horror movies, and he thinks he can make the most terrifying horror movie ever made with the Rift.
You mention the need for learning from games. Are there any examples that stick out?
I’ve been looking at Gone Home; it’s an excellent example of self-guided exploration and discovering the story as you move through these spaces. I think there’s a lot to learn from that game, and if that were a Rift experience, where you’re deciding which spaces to go into and explore, the story would start to emerge. It would work really well in VR. Maybe not an entirely linear narrative, but one in which you’re piecing the narrative together.
Back to the audience for just a bit: one defining aspect of moviegoing is that it’s often a shared experience. And being in a theater full of people is totally different than being isolated, with something covering your eyes and ears…
We explore in our film these… two tracks of experiences. This utopian notion of virtual reality, how we can collaborate, how we can share in new ways. But there’s also this dystopian side, where we just isolate ourselves, inhabit these worlds that are most attractive to us, and completely cut ourselves off.
So in the film we actually address this head on, by showing what that future might look like. But yeah, definitely thinking about that.
Thus far we’ve only talked about narrative; have you spoken with documentary filmmakers as well?
Yes, we’re super excited about the potential for IMAX-type experiences. Anything in which you can transport the viewer to a world they would otherwise never visit: deep-sea diving, space exploration, arctic expeditions, rain forests, etc.
I mean these are the classic IMAX documentaries that would work so well in this medium. You can bring the IMAX theater into your living room for just a couple hundred dollars. It’s going to be IMAX, but 360.
Again, in reference to the Afghanistan scene: how feasible would shooting be in a volatile scenario?
Working in Afghanistan, close to the line of fire, it was very important to have a compact system that I could carry around easily. So you’re probably not going to cover front line combat with this system.
But our camera system is a fraction of the size of an IMAX camera, so we’re going to be able to take this anywhere an IMAX camera has been: high atop Everest, deep underwater… and it’s all digital, so we’re not dealing with any film.
So it’s going to require more of a crew than what I’ve been used to as a single operator, but that just comes with the territory when you’re shooting a terabyte of data every second.
Can you share some finer technical data?
Again, the camera system is fully digital. We’re capturing onto SSDs, each of which can hold 24 minutes of footage. After that, we dump everything onto RAID arrays. And then we have a massive number of hard drives.
The tough part is data; we’re just capturing such an immense amount of data that the whole workflow needs to be built to handle that much footage coming through this camera system.
This software you’re creating: is it an all-in-one platform, or is it used in conjunction with previously existing software solutions, like Final Cut or Premiere?
We try to use as many tools that are already available as we can. But there are custom tools we’re developing in-house that allow us to do what we’re trying to achieve.
What are you looking for at this point? To get people excited, and educated?
We’re looking to get this [Zero Point] out to developers, to anyone who has a developer Rift. That’s later this spring. And we’re hoping to get feedback from that developer community so that when the consumer version of the Rift comes out, we can release something that takes all the lessons we learned and does something more significant.
Are there other filmmakers at the moment using your tech?
Well, right now we’re handling all filmmaking internally, due to the workflow, but we are talking with IMAX production companies. Anyone that is interested in creating these immersive video experiences, we’d be interested in talking with them, but because of the cost and complexity, it’s probably going to be larger studios to begin with.
And you’re going to be at GDC?
Yes. We hope that this is the bridge between the gamer community and the non-gamer community, to create a compelling experience without the need to know nine buttons on a gamepad.
And this is just the beginning. These are just early signposts of where things can go. I think there’s the potential for a fundamental shift in how we interact with our machines. Soon we may no longer be tied to a keyboard and mouse. We’re going to be interfacing directly with these environments, in the same way that we naturally interface with each other. And the implications go far beyond gaming or entertainment or even communication.
It will take a while before the technology is perfected; 5, 10, maybe 25 years. But when it happens, it’ll be indistinguishable from reality. And we have no idea what that means for society. But the hope is that we’ll be able to share one’s personal consciousness and experiences, so we can better understand and empathize with each other.
Last question: do you consider what you’re doing machinima?
You know… I’m not sure.