In 2014 I got my first dose of virtual reality courtesy of William McMaster, a documentary filmmaker based in Toronto. The footage he showed me was just a random array of clips he had captured while living in Tokyo, but the experience of watching it in the Oculus Rift DK1 was utterly exhilarating. After I took off the headset I couldn't stop thinking about what I wanted to do in the medium. My only issue was that I wasn't sure how I would be able to tell a narrative story in 360 video. In the end, I decided to take my own approach and executed it with my film, I Am You. With that film I was able to come to grips with a lot of the feelings I was having about the limitations of storytelling in the medium.

After I completed and released I Am You, I started thinking about my next project. One day I came across the Kickstarter for "Blackout", a film that was using a technology called Depthkit to capture the actors and place them in a 3D environment. I was really intrigued by this technique: seemingly easy and relatively affordable software combined with off-the-shelf hardware.

Serendipitously, I came across Ben Unsworth of Globacore, who had backed the Kickstarter campaign and therefore had access to both the Depthkit software and the custom hardware mount for the Microsoft Kinect sensor. He agreed to lend it to me in exchange for knowledge on how to use it. Happily, I jumped right into the deep end. Despite reading a lot of the documentation and the step-by-step instructions on their site, I had a lot of trouble getting through the calibration process for the Kinect, my camera, and the software. Luckily, Depthkit had a Slack channel and their team was available to help. This was a godsend, as I wouldn't have been able to get through the process without them; the learning curve was quite steep. That said, it still took quite a few tries before I got it right, and I can only attribute my eventual success to a tedious trial-and-error process that I just didn't give up on.

Fast forward a couple of weeks, and I met filmmaker Pierre Friquet, who was about to shoot his latest VR film, Patterns, and was interested in using Depthkit to capture the actors. I immediately put myself forward to help, as it was an opportunity to experience the entire workflow from start to finish, and I believe there is no better way to learn than working on a project with a deadline. Luckily, the shooting portion went off without a hitch, but post-production became an issue: we had shot everything on a green screen, and I had to figure out the workflow for processing the footage through the Depthkit software. Again, after lots of trial and error, and with the help of their Slack channel, we found the answers we were looking for.

Unfortunately, I never saw the final version of Patterns, but I did see the takes I worked on in the HTC Vive headset. What I can tell you is that it left me with the confidence that I could tell stories in virtual reality, and since August 2016 I have made two more VR projects involving Depthkit, with no plans of slowing down any time soon. I have fallen in love with the idea that I can get the exact performances of my actors into a virtual space, so quickly and easily. I also see endless possibilities for how I can make things without feeling handcuffed by the technology, which is incredibly important to the productivity of an artist.

Based on the conversations I've had with Depthkit, it feels like they are very committed to improving the product and sorting out some of the more technically challenging aspects of using it. I've also heard rumors that they are going to start supporting different sensors, as the Kinect will eventually disappear given Microsoft's lackluster support of the product. That said, the future is very bright, and I'm more than happy to go along for the ride even if it stays bumpy for a little while longer.
