VR upgrades in the Cube

While the blog has been quiet lately, ICAT is far from it.  I haven't posted because I don't have a flashy new video to show off yet, but we've made some excellent internal developments.

Foremost in my mind at the moment is our integration of the Oculus CV1 into the Cube motion capture system.  In the past, we'd used the DK2 as our go-to headset.  It was connected by HDMI and USB to a laptop that someone had to carry around behind whoever had the headset on.  The DK2 itself was OK, but the resolution wasn't fantastic, and there was a decent amount of shakiness to the perspective coming from the mocap system.

Oh, how times have changed.  With the CV1, we can use the IMU's rotational data for the VR perspective.  Not only does this eliminate the shakiness, but it's super low-latency.  In order to let users walk freely through the Cube, though, we still use the position data from the motion capture system and disregard the position data from the Oculus infrared sensor (it still has to be plugged in or the Oculus app throws a fit, but it doesn't actually serve a purpose).  The combination of these two works *brilliantly*, resulting in by far the best VR experience we've ever had in the Cube.
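For the Unity-curious, the core of the idea is tiny.  Here's a minimal sketch (the class and field names are made up for illustration, not our actual scripts): one transform carries the mocap-streamed position of the headset markers, another carries the IMU-driven rotation, and the camera rig takes its pose from both.

```csharp
using UnityEngine;

// Minimal sketch of the hybrid tracking idea (illustrative names only):
// position from the motion capture system, rotation from the headset's IMU.
public class HybridHeadTracking : MonoBehaviour
{
    // Transform driven by the streamed mocap rigid body for the headset markers.
    public Transform mocapHead;

    // Transform driven by the headset's IMU (e.g. the tracked camera anchor).
    public Transform imuHead;

    void LateUpdate()
    {
        // Position from mocap: accurate anywhere in the Cube, no drift.
        transform.position = mocapHead.position;

        // Rotation from the IMU: low latency and smooth, no mocap jitter.
        transform.rotation = imuHead.rotation;
    }
}
```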

But wait, there's more.  Right now, we have this running off a laptop with a dedicated graphics card, which is a problem.  The laptop's battery can't supply enough juice to the GPU, so when the laptop is unplugged it automatically lowers the GPU's memory clock, which dramatically hurts VR performance.  Enter the MSI VR One laptop/backpack/jetpack(!)(??)/new-kind-of-computer-form-factor-that-doesn't-really-have-a-name-yet.  This thing is designed for tetherless VR.  You put it on like a backpack, hook all the VR gear up to it, tie up the cords, and you're free to walk around the room with nary a concern about tripping on cables or having someone else hold the laptop (new meme idea: Hold My Laptop… while I do some VR).  Caveat: we haven't actually tried this thing yet.  It JUST came out, and we have one on the way.  So it's possible that it won't live up to expectations, but I'm hopeful.  If this works, we will get several more and have true, tetherless, social VR in the Cube for the first time ever.

P.S. I’m still doing a lot of work on the physics VR simulation, and it’s looking great.  I’ve built a diegetic interface.  Once we get this backpack, I’ll take some video of a user exploring the simulation, using the interface, and post that here.

Perform Studio

The Perform Studio has undergone exciting changes over the summer and into this fall semester.

First, Tanner overhauled the booth.  Instead of leaving it as a closet crammed full of a giant rack for the patch bays, he wall-mounted the patch bays, making enough room for the space to be used as an actual mixing booth.  Yay!

Perform Studio booth

Second, the OptiTrack Motive:Body software was updated from 1.6 to 1.9, with excellent new features that improve nearly every aspect of the software.  Additionally, we’ve ordered some extra capture-suit accessories.

Third, the Perform Studio is now being used as the CHCI lab.  Doug Bowman and his cohort of graduate students can use the space to run virtual reality experiments using the motion capture system there.  We're very excited to have them doing their research in ICAT's facilities.  If you'd like to use the space yourself, though, fear not!  You can still reserve it with 72 hours' notice, and it will be cleared out and ready.  To reserve the space, please contact Run Yu, who is handling the Perform Studio reservations this year: runyu at vt dot edu

Doug Bowman's 3D Interfaces grad students in the Perform Studio, 9/26/2016

Cyclorama Update

We’ve made significant progress recently figuring out how to render video for the Cyclorama, getting Unity working with it, and streaming video in.

Rather than expound on what we've learned here, I will refer you to the two new documents I posted to the ICAT Drive.  One is on the pipeline for rendering from Maya to the Cyclorama, and the other describes the details of the various types of input for the Cyclorama (e.g. video, image sequences, and streamed video).  If you're interested in using the Cyclorama, please refer to these documents for information on how to get your content ready to be played!
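Just to give a taste of one of those input types, here's a bare-bones sketch of image-sequence playback in Unity.  To be clear, this is not the pipeline the documents describe, and the folder and script names are invented; it only illustrates the idea of flipping through pre-rendered frames on a material.

```csharp
using UnityEngine;

// Illustrative only: a bare-bones image-sequence player, not the actual
// Cyclorama pipeline.  It assumes the rendered frames have been imported as
// Texture2D assets under Resources/CycFrames (a made-up folder name) and
// swaps them onto this object's material at a fixed frame rate.
public class ImageSequencePlayer : MonoBehaviour
{
    public string folder = "CycFrames";  // Resources subfolder holding the frames
    public float framesPerSecond = 30f;

    private Texture2D[] frames;
    private Renderer screenRenderer;

    void Start()
    {
        screenRenderer = GetComponent<Renderer>();
        frames = Resources.LoadAll<Texture2D>(folder);

        // Keep the frames in name order (frame_0000, frame_0001, ...).
        System.Array.Sort(frames, (a, b) => string.Compare(a.name, b.name));
    }

    void Update()
    {
        if (frames == null || frames.Length == 0) return;

        // Pick the current frame based on elapsed time, looping at the end.
        int index = (int)(Time.time * framesPerSecond) % frames.Length;
        screenRenderer.material.mainTexture = frames[index];
    }
}
```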

We’ve also been testing out a lot of content and a LOT of people have been through to look at it.  Yesterday we brought through two classes from Graphic Design (students from the classes of Katie Meaney and Patrick Finley) with the intention that they will spend some class time making projects for the Cyclorama.  They were enthusiastic about the possibilities, and we’re excited to see what they might come up with!

Graphic Design Students Visit Cyclorama 8/31/2016

Cyclorama

The Cube now has another amazing new feature: the Cyclorama!  We finished installing it last week.  The Cyclorama is a massive cylindrical projection screen for immersive experiences.  At roughly 40′ in diameter and 16′ tall, it can accommodate up to 60 people and allows the projection to fill your entire field of vision from anywhere you stand.  Check out this timelapse video of the construction, which took almost 4 days:

A huge thanks goes to the Moss Arts Center production crew, who did all the hard work of actually building the thing!

It is operational and we’re starting to get the hang of how to use it.  We’ve had to readjust the motion capture camera arrangement so that we can capture both inside and outside.  I’ll put up some more posts in the coming days with details about the new motion capture setup, how to use it, etc.

High Performance Wireless VR

We now have a working pipeline for wireless VR that is simultaneously high performance (computationally and/or graphically).  Here is a flow chart:

 

High Performance VR in Cube flow chart

Despite how many different parts there are, it works reliably.

Here are the next steps to improving it:

  1. Replace Garfield with the render cluster in the Andrews Information Systems Building.
  2. Replace NVIDIA GameStream and the Moonlight app with our in-house encoder / streamer / decoder solution that is currently in the works.
  3. Allow for simulation software besides Unity.
  4. Replace the current VR headset with something nicer (already purchased, not yet tried).

ICAT Drive

ICAT Drive is now live –

https://drive.google.com/folderview?id=0B789Y1umpu51SjBkY0xBSlRldkU&usp=sharing

ICAT Drive is a central repository for all things ICAT.  Manuals, assets, useful code, etc.  The above link is open to anyone.  Most of the documents and assets in that folder are open to the public.  There are a few things, like the passwords document, that require special permissions which you can acquire by contacting ICAT.

We would really like the ICAT Drive to be an important resource for all projects that use the ICAT facilities.  The guides, APIs, and assets within should allow both faculty and students to jump start their projects without having to start from square one.

So far, I have published a series of manuals on how to use the motion capture systems in the Cube and the Perform Studio, and how to interface those motion capture systems with Unity.  These come with the necessary assets I created to enable that interface.  There is also a guide on how to import point clouds into Unity (along with another unitypackage containing the scripts that enable this).  And tying it all together is a guide with an example project that shows how to combine everything from the other guides.  The guides are organized modularly, making it easy to refer to and edit any given operation.

There are also assets.  Right now there is a model of the Cube and some neutral character models for use in any project.  This will be greatly expanded on over the next few months.

Motion Capture Streaming Receiver

I just finished integrating the Unity interfaces for our two motion capture systems!  Now all you have to do is add the prefab to your project and set the drop-down to whichever system you want to use!

MotionCaptureStreamingReceiver

In our primary facility, the Cube, we use a Qualisys motion capture system.  Qualisys Track Manager streams rigid body position and rotation data over UDP (which means it can travel over WiFi) to clients, which control virtual reality headsets and whatnot.  The clients that I take care of run Unity.  When I started working here, we had an interface script that received this data and let the Unity user plug in which game object should be moved based on it.  It worked great; it was a well-written script.
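Stripped way down, a receiver like that has roughly this shape.  The packet layout here (seven floats: position plus quaternion) is invented purely for illustration, the real QTM protocol is more involved, and a production version would synchronize access to the pose properly rather than relying on a volatile flag.

```csharp
using System.Net;
using System.Net.Sockets;
using UnityEngine;

// Heavily simplified sketch: listen for UDP packets, parse out a rigid body
// pose, and apply it to a target object.  The 7-float packet layout is made
// up for illustration and is NOT the real QTM protocol.
public class RigidBodyReceiver : MonoBehaviour
{
    public int port = 22225;      // illustrative port number
    public Transform target;      // the object to move (e.g. the VR camera rig)

    private UdpClient client;
    private volatile bool hasPose;
    private Vector3 latestPosition;
    private Quaternion latestRotation;

    void Start()
    {
        client = new UdpClient(port);
        client.BeginReceive(OnReceive, null);
    }

    void OnReceive(System.IAsyncResult result)
    {
        byte[] data;
        IPEndPoint source = new IPEndPoint(IPAddress.Any, 0);
        try { data = client.EndReceive(result, ref source); }
        catch (System.ObjectDisposedException) { return; }  // socket closed on shutdown

        if (data.Length >= 7 * sizeof(float))
        {
            // Unpack x, y, z position followed by x, y, z, w quaternion.
            float[] f = new float[7];
            System.Buffer.BlockCopy(data, 0, f, 0, 7 * sizeof(float));
            latestPosition = new Vector3(f[0], f[1], f[2]);
            latestRotation = new Quaternion(f[3], f[4], f[5], f[6]);
            hasPose = true;
        }

        client.BeginReceive(OnReceive, null);  // keep listening
    }

    void Update()
    {
        // Apply the most recent pose on Unity's main thread.
        if (hasPose && target != null)
        {
            target.position = latestPosition;
            target.rotation = latestRotation;
        }
    }

    void OnDestroy()
    {
        if (client != null) client.Close();
    }
}
```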

Then, over in the Perform Studio, which is our "mini-Cube" prototyping space, we have an OptiTrack motion capture system.  OptiTrack advertises its NatNet SDK, which acts as a middleman: it receives the OptiTrack data, reformats it, and sends it out to multiple clients.  That adds latency and means an extra program (the NatNet SDK) has to be running.

Piggy-backing on the work of another OptiTrack user, I wrote a script that receives the data directly from the OptiTrack stream.  I then wrote a wrapper that instantiates both the Qualisys Track Manager interface and the OptiTrack Motive:Body interface, and provides a handy context-sensitive interface for setting the parameters of each system.
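Very roughly, the wrapper idea looks like this (illustrative names and values only, not the actual ICAT Drive script): one component with a drop-down enum that decides which interface gets configured and used.

```csharp
using UnityEngine;

// Rough sketch of the wrapper idea: a single component whose drop-down
// selects which underlying streaming interface to set up.  Names, addresses,
// and the internals are illustrative, not the real ICAT Drive scripts.
public class MotionCaptureStreamingReceiver : MonoBehaviour
{
    public enum MocapSystem { QualisysTrackManager, OptiTrackMotiveBody }

    [Tooltip("Which motion capture system is streaming data to this client.")]
    public MocapSystem system = MocapSystem.QualisysTrackManager;

    // Settings used only by the Qualisys interface (placeholder address).
    public string qtmServerAddress = "192.168.1.2";

    // Settings used only by the OptiTrack interface (NatNet's default data port).
    public int optiTrackDataPort = 1511;

    public Transform trackedObject;   // the object driven by the streamed rigid body

    void Start()
    {
        // Configure only the interface that matches the selected system.
        switch (system)
        {
            case MocapSystem.QualisysTrackManager:
                // e.g. add and configure the QTM receiver component here
                Debug.Log("Starting Qualisys interface for " + qtmServerAddress);
                break;
            case MocapSystem.OptiTrackMotiveBody:
                // e.g. add and configure the OptiTrack receiver component here
                Debug.Log("Starting OptiTrack interface on port " + optiTrackDataPort);
                break;
        }
    }
}
```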

I had to learn to create a custom object inspector using the UnityEditor library.  I didn’t do anything too fancy, but it was fun to learn!
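For anyone curious what that looks like, a context-sensitive inspector for a wrapper like the sketch above boils down to something like this (again, illustrative names; it only shows the fields relevant to the selected system, and editor scripts like it must live in a folder named "Editor"):

```csharp
using UnityEditor;

// Sketch of a context-sensitive inspector for the wrapper sketched above:
// it draws only the settings that apply to the chosen motion capture system.
[CustomEditor(typeof(MotionCaptureStreamingReceiver))]
public class MotionCaptureStreamingReceiverEditor : Editor
{
    public override void OnInspectorGUI()
    {
        serializedObject.Update();

        EditorGUILayout.PropertyField(serializedObject.FindProperty("system"));
        EditorGUILayout.PropertyField(serializedObject.FindProperty("trackedObject"));

        var receiver = (MotionCaptureStreamingReceiver)target;

        // Show only the fields relevant to the selected system.
        if (receiver.system == MotionCaptureStreamingReceiver.MocapSystem.QualisysTrackManager)
        {
            EditorGUILayout.PropertyField(serializedObject.FindProperty("qtmServerAddress"));
        }
        else
        {
            EditorGUILayout.PropertyField(serializedObject.FindProperty("optiTrackDataPort"));
        }

        serializedObject.ApplyModifiedProperties();
    }
}
```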