The Cube now has another amazing new feature: the Cyclorama!  We finished installing it last week.  The Cyclorama is a massive cylindrical projection screen for immersive experiences.  At roughly 40′ in diameter and 16′ tall, it can accommodate up to 60 people and lets the projection fill your entire field of vision from anywhere you stand.  Check out this timelapse video of the construction, which took almost 4 days:

A huge thanks goes to the Moss Arts Center production crew, who did all the hard work of actually building the thing!

It's operational, and we're starting to get the hang of how to use it.  We've had to readjust the motion capture camera arrangement so that we can capture both inside and outside the Cyclorama.  I'll put up some more posts in the coming days with details about the new motion capture setup, how to use it, etc.

Roanoke Science Museum Planetarium

Yesterday, we paid our second visit to the Science Museum of Western Virginia's planetarium in Roanoke.  We (ICAT, Advanced Research Computing, Architecture, Music) are working with museum staff and community volunteers to build a vision for the future use of this amazing space, which is currently underused because of outdated technology.

On this visit, in addition to our minds and some measuring tapes, we brought four projectors (from ARC's old VisCube setup), a computer, and a Max patch I rigged up the day before to test stitching and blending a single video across all four projectors.  It worked great!  There are still some issues to work out in terms of perfecting the blending and getting 4K video playing smoothly, but the four projectors were able to cover almost the entire screen!  Here's an idea of what it looked like (an image doesn't really do it justice; the projection is so big that the camera lens can't capture it all).  With chickens.  Don't ask.
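The core of edge blending is simple: in the overlap zone between two adjacent projectors, each projector's brightness is ramped down so the summed light stays uniform.  Here's a minimal Python sketch of that idea; the function name, overlap bounds, and gamma value are my assumptions for illustration, not what's in the actual Max patch.

```python
# Edge-blending sketch: each projector's weight ramps across the overlap
# zone, and the ramp is gamma-corrected so the *perceived* sum of the two
# projectors stays uniform.  All names and values here are assumptions.

def blend_weight(x, overlap_start, overlap_end, rising=True, gamma=2.2):
    """Blend weight for one projector at normalized screen position x."""
    if x <= overlap_start:
        t = 0.0
    elif x >= overlap_end:
        t = 1.0
    else:
        t = (x - overlap_start) / (overlap_end - overlap_start)
    if not rising:           # the neighboring projector ramps the other way
        t = 1.0 - t
    return t ** (1.0 / gamma)  # compensate for the display's gamma curve

# Outside the overlap, one projector is fully on and the other fully off:
print(blend_weight(0.0, 0.4, 0.6))   # 0.0
print(blend_weight(1.0, 0.4, 0.6))   # 1.0
```

With `gamma=1.0` the rising and falling ramps sum to exactly 1 everywhere in the overlap; the gamma term adjusts that linear sum for how projectors actually map pixel values to light.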

Better with chicken.

Belle II particle physics simulation update

We have some exciting updates from the particle physics simulation we’ve been working on for the past month!

First, we now have Unity scripts that read the particle data from a CSV file, sort it correctly, and use it to spawn and move particles.  It's a very important first step – in fact, much of what comes from this point on is frills.  (Although there are lots of frills.)  But this is the *meat* of the project: taking particle data and visualizing it.  There's a (slightly confusing to watch, but proof-of-concept) video below:
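The read-and-sort step looks roughly like this.  The real scripts are C# in Unity, and the column names here (`t`, `id`, `x`, `y`, `z`) are assumptions about the data layout, but the shape of the logic is the same: parse each row, then sort chronologically so particles can be spawned and moved in order.

```python
# Hypothetical sketch of the CSV-reading step.  Column names are assumed;
# each row is one particle sample: time, particle id, position.

import csv
from io import StringIO

def load_particles(csv_text):
    rows = list(csv.DictReader(StringIO(csv_text)))
    for r in rows:
        r["t"] = float(r["t"])
        r["x"], r["y"], r["z"] = float(r["x"]), float(r["y"]), float(r["z"])
    rows.sort(key=lambda r: (r["t"], r["id"]))   # chronological order
    return rows

data = "t,id,x,y,z\n0.2,p2,0,0,1\n0.1,p1,1,0,0\n"
print([r["id"] for r in load_particles(data)])   # ['p1', 'p2']
```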

ALSO, Jesse Barber (Physics sophomore) and Kaelum Hasler (high school student worker) have been working with me to get the VRML model of the Belle II organized.  We had to combine a lot of objects to get the object count down to something manageable (it was at something like 300k to begin with), but we also had to CUT a lot of the cylinders into four parts to solve a transparency depth-sorting issue in Unity.  They spent almost all of last week on that, and we now have a model in Unity!  The model is still incomplete because the VRML export we're working with doesn't quite have the full model, but we're almost there, and we can really see what it's going to look like when it's all done!  See below for another video of that.

High Performance Wireless VR

We now have a working pipeline for wireless VR that is also high performance (computationally and/or graphically).  Here is a flow chart:


High Performance VR in Cube flow chart

Despite how many different parts there are, it works reliably.

Here are the next steps to improving it:

  1. Replace Garfield with the render cluster in the Andrews Information Systems Building.
  2. Replace NVIDIA Gamestream and the Moonlight app with our in-house encoder / streamer / decoder solution that is currently in the works.
  3. Allow for simulation software besides Unity.
  4. Replace the current VR headset with something nicer (already purchased, not yet tried).

MAC 218 Android Build

One of our projects is to have a complete model of the Moss Arts Center, both for Mirror Worlds and as part of the assets we can share with ICAT affiliates.

A recent alum, Lucas Freeman, was assigned to work with me for an independent study in his last semester at VT.  I had him model room 218 in the MAC, load the model into Unity, and light it appropriately.

Moss Arts Center Room 218, modeled by Lucas Freeman

I then built a little demo of that for Android.  You can download it here:

(Android only)

To install, you will need to go to your phone's Settings menu, then Security, and enable Unknown Sources.  This allows your phone to install an app from a source other than the Play Store.  You can turn this back off as soon as you're done installing the app.
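If you're comfortable with the Android developer tools, sideloading over USB is an alternative to the Unknown Sources route (the APK filename below is a placeholder, not the actual download name):

```shell
# Requires USB debugging enabled on the phone and adb on your computer.
adb install MAC218Demo.apk
```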

In the near future, I will also build this demo for iOS, and then publish it to both the Play Store and the Apple App Store.

Disclaimer: This material is based upon work supported by the National Science Foundation under Grant No. 1305231.  Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

ICAT Drive

ICAT Drive is now live –

ICAT Drive is a central repository for all things ICAT: manuals, assets, useful code, etc.  The above link is open to anyone.  Most of the documents and assets in that folder are open to the public.  A few things, like the passwords document, require special permissions, which you can acquire by contacting ICAT.

We would really like the ICAT Drive to be an important resource for all projects that use the ICAT facilities.  The guides, APIs, and assets within should allow both faculty and students to jump start their projects without having to start from square one.

So far, I have published a series of manuals on how to use the motion capture systems in the Cube and the Perform Studio, and how to interface those motion capture systems with Unity.  They come with the necessary assets I created to enable the interface.  There is also a guide on how to import point clouds into Unity (along with another unitypackage with scripts to enable this utility).  And holding it all together is a guide, with an example project, that shows how to put all the other guides together.  The guides are organized modularly, making it easy to refer to and edit any given operation.

There are also assets.  Right now there is a model of the Cube and some neutral character models for use in any project.  This will be greatly expanded over the next few months.

Particle Physics Education – Belle II model

One of the new ICAT SEAD grants for 2016-2017 is a pedagogical particle physics simulation in the Cube.  One of the first steps is to get the Belle II model working in Unity.  This is a bit of a task.

Transparency view of a small portion of the Belle II model

One of the PIs, Leo Piilonen from the VT Physics department, first has to export the model from an arcane format that the researchers (at KEK?) use to a more readable format, like VRML.  VRML itself is an old open format that had some hype in the 90s but never really took off.  Thankfully, 3DS Max can import it, so I can organize the meshes and make any necessary edits.  That's also a task.  The meshes are all separated out into the smallest possible units.  I'm not exactly sure at this point, but I think there might be somewhere around 500k of them.  3D modeling software can handle high polygon counts, but it's not used to dealing with so many separate objects.  So, I have to do some manual combining, rename the materials so that they're unique, etc.
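The organizing pass boils down to a grouping problem.  The real work happens interactively in 3DS Max, but here's a hedged Python sketch of the idea (all names are made up for illustration): group the thousands of tiny meshes by material, combine each group into one object, and give each resulting material a unique name.

```python
# Hypothetical sketch of the mesh-organizing pass: combine meshes that
# share a material into one object, with a unique material name per group.

from collections import defaultdict

def combine_by_material(meshes):
    """meshes: list of (mesh_name, material_name) pairs."""
    groups = defaultdict(list)
    for name, material in meshes:
        groups[material].append(name)
    combined = {}
    for i, (material, members) in enumerate(sorted(groups.items())):
        unique_material = f"{material}_{i:03d}"   # unique name per combined object
        combined[unique_material] = members
    return combined

parts = [("tube_a", "steel"), ("tube_b", "steel"), ("cap_a", "scint")]
print(combine_by_material(parts))
# {'scint_000': ['cap_a'], 'steel_001': ['tube_a', 'tube_b']}
```

Going from ~500k objects to one object per material is what gets the scene down to something Unity (and 3DS Max) can handle.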

Cutaway view of a small portion of the Belle II model

Anyway, after some headache, we can import the meshes group by group into Unity.  I've only imported a small amount so far, just to make sure my pipeline works and to adjust as needed.  The next two issues will be:

  1. Decide how to deal with transparency and depth-sorting (every object has a transparent material, so I have to make some decisions here).
  2. Write a script that allows the user to zoom in on the object by scaling up the model of the detector while seemingly keeping the user at the same position.
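The zoom in step 2 above amounts to scaling the model about the user's position instead of the model's origin: any point at the user's location stays fixed, so the user seems to stand still while the detector grows around them.  A minimal sketch of that transform (function names are mine):

```python
# Zoom-about-the-user sketch: scale a point by s about a pivot.  A point
# sitting exactly at the pivot (the user) is unmoved, so the user appears
# to stay in place while everything else grows away from them.

def scale_about_pivot(point, pivot, s):
    return tuple(p + (x - p) * s for x, p in zip(point, pivot))

user = (2.0, 0.0, 1.0)
print(scale_about_pivot(user, user, 5.0))             # (2.0, 0.0, 1.0) - unmoved
print(scale_about_pivot((3.0, 0.0, 1.0), user, 5.0))  # (7.0, 0.0, 1.0)
```

In Unity terms, this would mean adjusting the detector's transform position along with its scale each frame so the user's world position maps to the same model point before and after the zoom.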

More to come soon!


Motion Capture Streaming Receiver

I just finished integrating the Unity interfaces for our two motion capture systems!  Now all you have to do is add the prefab to your project and set the drop-down to whichever system you want to use!


In our primary facility, the Cube, we use a Qualisys motion capture system.  Qualisys Track Manager streams rigid body position and rotation data over UDP (which means it works over WiFi) to clients, which control virtual reality headsets and whatnot.  The clients that I take care of run Unity.  When I started working here, we had an interface script that received this data and let the Unity user plug in which game object should be moved based on it.  It worked great – it was a well-written script.
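For a feel of what such a receiver does, here's a generic Python sketch.  The packet layout (a body id plus position and quaternion as little-endian floats) and the port number are assumptions for illustration, not QTM's actual protocol, which Qualisys documents separately.

```python
# Generic UDP rigid-body receiver sketch.  The packet format here is an
# assumed layout for illustration only: int32 body id, then x, y, z
# position and a qx, qy, qz, qw quaternion as little-endian float32s.

import socket
import struct

PACKET = struct.Struct("<i7f")   # body id + position + quaternion

def parse_packet(data):
    body_id, x, y, z, qx, qy, qz, qw = PACKET.unpack(data)
    return {"id": body_id, "pos": (x, y, z), "rot": (qx, qy, qz, qw)}

def listen(port=22223):          # port number is a placeholder
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, _ = sock.recvfrom(PACKET.size)
        yield parse_packet(data)

# Round-trip the parser without touching the network:
pkt = PACKET.pack(7, 1.0, 2.0, 3.0, 0.0, 0.0, 0.0, 1.0)
print(parse_packet(pkt)["pos"])   # (1.0, 2.0, 3.0)
```

On the Unity side, each parsed packet would update the position and rotation of whichever game object the user plugged into the interface.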

Then, over in the Perform Studio, our “mini-Cube” prototyping space, we have an OptiTrack motion capture system.  OptiTrack advertises its NatNet SDK, a middleman that receives the OptiTrack data and reformats it to send out to multiple other clients.  That increases latency and means that an extra program needs to run.

Piggy-backing on the work of another OptiTrack user, I wrote a script that receives the data directly from the OptiTrack stream.  I then wrote a wrapper that instantiates both the Qualisys Track Manager interface as well as the OptiTrack Motive Body interface, and provides a handy context-sensitive interface for setting the parameters for each of the systems.
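The wrapper pattern is simple to sketch.  The real version is a Unity prefab with a C# custom inspector; the class and method names below are my assumptions, but they show the idea: one component holds both backends and the drop-down selects which one actually runs.

```python
# Sketch of the wrapper idea: a single receiver component that picks one
# of two motion capture backends at setup time.  All names are assumed.

class QualisysInterface:
    def connect(self, host):
        return f"QTM stream from {host}"

class OptiTrackInterface:
    def connect(self, host):
        return f"OptiTrack stream from {host}"

class MocapReceiver:
    SYSTEMS = {"Qualisys": QualisysInterface, "OptiTrack": OptiTrackInterface}

    def __init__(self, system="Qualisys"):
        # In Unity, this choice comes from the inspector drop-down, and the
        # custom inspector shows only the selected system's parameters.
        self.backend = self.SYSTEMS[system]()

    def connect(self, host):
        return self.backend.connect(host)

print(MocapReceiver("OptiTrack").connect("192.168.1.5"))
```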

I had to learn to create a custom object inspector using the UnityEditor library.  I didn’t do anything too fancy, but it was fun to learn!

Wind Field

(Above video, severe weather visualization project, screen capture of Goshen wind field simulation, May 25, 2016)

First post on our new blogs!

Recently, I’ve been spending a lot of time working on visualizing severe weather, working closely with Trevor White, a grad student in Virginia Tech’s meteorology department, as well as the PI of the project, Bill Carstensen, from the Geography department.  The Weather Channel was just here last week, taping some spots on the El Reno tornado of 2013 and Hurricane Charley of 2004.  I will post that footage once it airs.

The video at the top of this post shows a wind field – a simulation of wind inside a tornado!  The colors are coded as follows:

Bright red: moving up fast
Bright blue: moving down fast
Bright white: moving horizontally fast
Dim red: moving up slowly
Dim blue: moving down slowly
Dim gray: moving horizontally slowly
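That color coding can be sketched as a small function: the vertical component of the wind picks the hue, and the overall speed picks the brightness.  The thresholds and exact colors below are my assumptions, not the values in the actual shader.

```python
# Sketch of the color coding above (y is up; thresholds are assumptions):
# mostly-vertical wind is red (up) or blue (down), mostly-horizontal wind
# is white/gray, and overall speed scales the brightness.

def wind_color(vx, vy, vz, fast=10.0):
    speed = (vx*vx + vy*vy + vz*vz) ** 0.5
    bright = min(speed / fast, 1.0)          # dim = slow, bright = fast
    if vy > abs(vx) + abs(vz):               # mostly upward
        r, g, b = 1.0, 0.2, 0.2              # red
    elif -vy > abs(vx) + abs(vz):            # mostly downward
        r, g, b = 0.2, 0.2, 1.0              # blue
    else:                                    # mostly horizontal
        r, g, b = 1.0, 1.0, 1.0              # white/gray
    return (r * bright, g * bright, b * bright)

print(wind_color(0.0, 12.0, 0.0))   # (1.0, 0.2, 0.2): a fast updraft, bright red
```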

Most of my time in the severe weather visualization has been devoted to the wind field particle simulation.  I’m not a meteorologist so I can’t explain this 100% effectively, but here’s the basic idea:

We have some data sets that show wind velocities (that is, how fast the wind is moving and in what direction) inside a tornado.  This data is derived from a dual-Doppler setup, where two Doppler radars view a storm from angles roughly 90 degrees apart, allowing the wind velocities to be triangulated.  After Trevor has analyzed and cleaned the data, he sends me a 3D grid of voxels (that is, a 3D grid of evenly spaced points), each of which has a wind velocity.

I bring that data into Unity, where I use it to create a particle simulation.  I generate particles throughout the volume (the region where we have wind information), then figure out which voxel each one is in.  I use that voxel’s wind velocity to move the particle.  The next frame, I find the particle’s new position, figure out which voxel it’s now in, and use that voxel’s wind velocity to move it again.
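The per-frame loop above can be sketched in a few lines.  This is a simple forward-Euler step; the grid origin, spacing, and timestep here are assumptions for illustration.

```python
# Advection sketch: find the particle's voxel, look up that voxel's wind
# velocity, and step the particle forward by one frame.

def voxel_index(pos, origin, spacing):
    return tuple(int((p - o) // spacing) for p, o in zip(pos, origin))

def advect(pos, wind_field, origin=(0, 0, 0), spacing=1.0, dt=1/60):
    idx = voxel_index(pos, origin, spacing)
    v = wind_field.get(idx, (0.0, 0.0, 0.0))   # still air outside the volume
    return tuple(p + vi * dt for p, vi in zip(pos, v))

field = {(0, 0, 0): (60.0, 0.0, 0.0)}          # one voxel blowing +x at 60 m/s
print(advect((0.5, 0.5, 0.5), field))          # ~ (1.5, 0.5, 0.5) after one frame
```

Repeating this every frame for every particle traces out streamlines of the frozen wind field, which is what the motion trails in the video show.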

Essentially, this gives us a highly accurate 3D visualization of how the wind is flowing at a single moment of the storm.  It looks sweet, and I’ve been building a lot of features to aid the visualization, like motion trails that show where a particle has been, arrows that show the wind direction in each voxel, etc.

Because the simulation requires a lot of calculation, it originally slowed Unity to a crawl at 60,000 or more particles.  So I’ve moved the calculations to a console application that does them all in CUDA, letting me use the parallelism of the graphics card to easily push out the necessary calculations.  I then transfer that data from the console application to Unity over UDP.  Right now, UDP is actually the biggest bottleneck, because I have to transfer the position and color of each particle AND each point of its motion trail to Unity 60 times per second.  That’s a lot of data.  Hopefully Unity will move out of the stone age soon and upgrade to a more recent .NET framework (it’s stuck on 2.0/3.5 right now because of the way it uses Mono), at which point I can use the Cudafy library inside Unity itself, or more likely in a plugin I write.
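To put a rough number on "a lot of data": assuming a 12-byte position and a 4-byte color per point, and (as an assumption) a 10-point motion trail per particle, the back-of-the-envelope math lands in the multiple-gigabits-per-second range.

```python
# Back-of-the-envelope for the UDP bottleneck.  The byte sizes and trail
# length are assumptions: float3 position (12 B) + RGBA32 color (4 B) per
# point, 10 trail points per particle, 60,000 particles at 60 Hz.

particles = 60_000
bytes_per_point = 12 + 4          # position + color
trail_points = 10                 # assumed trail length
frames_per_sec = 60

bytes_per_frame = particles * bytes_per_point * (1 + trail_points)
mbits_per_sec = bytes_per_frame * frames_per_sec * 8 / 1e6
print(round(mbits_per_sec))       # ~5069 Mbit/s under these assumptions
```

Even generous assumptions put this well past what a single UDP stream comfortably sustains, which is why compressing or moving the trail computation into Unity (or a native plugin) matters.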