K-12 Visiting the Cube

Editor’s Note – This post is from Phyllis Newbill, the Outreach and Engagement Coordinator here at ICAT.  Enjoy!


I had the pleasure of hosting school groups today from Mountain Vista Governor’s School and Hardin Reynolds Memorial School in Patrick County.  Thanks to Tanner and Zach, the students saw and heard demos of basketball games, elephants, anechoic chambers, cathedrals, dog skeletons, and fantasy landscapes.  The groups also visited the Experience and Create Studios to see 3D printers, the laser cutter, and a project involving lots of pencils that I haven’t quite figured out yet.  I know that graphite is conductive, though, so I think that’s a clue.

I was glad to see Mountain Vista again.  Their school has done a great job of sending projects and students to the Maker Conference each year in the spring.  Today’s group was made up of tenth graders, so I’m looking forward to seeing them again later this school year or in future years.

The Hardin Reynolds students happened to tour the Create Studio at the moment that Panagiotis, a CHCI grad student, was finishing a 3D print.  He showed them how the 3D printer works, and also gave them a full demo of his 3D printed olive oil factory parts.

My favorite part of the day was when the students got to see a 3D fantasy landscape in the Cube, and they reached out to see if they could touch the objects they were seeing.

VR upgrades in the Cube

While the blog has been quiet recently, ICAT is far from it.  I haven’t posted recently because I don’t have a new flashy video to show off yet, but we’ve made some excellent internal developments.

Foremost in my mind at the moment is our integration of the Oculus CV1 into the Cube motion capture system.  In the past, we’d used the DK2 as our go-to headset.  It was connected by HDMI and USB to a laptop that someone would have to carry around behind whoever had the headset on.  The DK2 itself was OK, but the resolution wasn’t fantastic.  There was also a decent amount of shakiness to the perspective coming from the mocap system.

Oh, how times have changed.  With the CV1, we can use the IMU’s rotational data for the VR perspective.  Not only does this eliminate the shakiness, but it’s super low-latency.  But in order to enable users to walk freely through the Cube, we’re still using the position data from the motion capture system, and disregarding position data from the Oculus infrared sensor (it still has to be plugged in or the Oculus app throws a fit, but it doesn’t actually serve a purpose).  The combination of these two works *brilliantly*, resulting in by far the best VR experience we’ve ever had in the Cube.
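The combination described above can be sketched in a few lines of Python.  To be clear, `mocap_position` and `imu_rotation` here are hypothetical stand-ins for the real mocap and headset APIs, not anything we actually run; the point is just the split of responsibilities.

```python
# Sketch: each frame, build the headset pose from two sources.
# The mocap system gives drift-free position; the headset IMU gives
# low-latency rotation.  The headset's own positional tracking is ignored.

def mocap_position():
    # Hypothetical stand-in for a motion-capture query (x, y, z in meters).
    return (1.2, 1.7, 0.5)

def imu_rotation():
    # Hypothetical stand-in for the headset IMU quaternion (w, x, y, z).
    return (1.0, 0.0, 0.0, 0.0)

def fused_pose():
    """Combine mocap position with IMU rotation into one camera pose."""
    return {"position": mocap_position(), "rotation": imu_rotation()}

pose = fused_pose()
print(pose["position"], pose["rotation"])
```

The design choice is the whole trick: each sensor contributes only the channel it is best at, so the mocap shakiness never touches rotation and the IMU’s positional drift never touches position.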

But wait, there’s more.  Right now, we have this running off a laptop with a dedicated graphics card, which is a problem.  The laptop’s battery can’t supply enough juice to the GPU, so when the laptop is unplugged it automatically lowers the GPU’s memory clock which dramatically hurts VR performance.  Enter the MSI VR One laptop/backpack/jetpack(!)(??)/new-kind-of-computer-form-factor-that-doesn’t-really-have-a-name-yet.  This thing is designed for tetherless VR.  You put it on like a backpack, you hook all the VR stuff up to it, tie up all the cords, and you’re free to walk around the room with nary a concern of tripping on cords or having someone else hold the laptop (new meme idea: Hold My Laptop… while I do some VR).  Caveat: we haven’t actually tried this thing yet.  It JUST released, and we have one on the way.  So, it’s possible that it won’t live up to expectations, but I’m hopeful.  If this works, we will get several more, and have true, tetherless, social VR in the Cube for the first time ever.

P.S. I’m still doing a lot of work on the physics VR simulation, and it’s looking great.  I’ve built a diegetic interface.  Once we get this backpack, I’ll take some video of a user exploring the simulation, using the interface, and post that here.

Subatomic particle physics update

This project just keeps getting cooler every week.

Come see it in person on the Cyclorama in the Cube, *this Saturday!*

Virginia Tech Science Festival, October 8, 2016, 10am-3pm.

Since the last update, we have:

  • Made a few new events (datasets that represent a particle collision)
  • Made sprites for each of the particles present in those events
  • Colored the particles’ pathlines and trails to match the colors of their sprites
  • Optimized, optimized, optimized!  Runs at 60FPS!
    • I simplified the mesh down to about 100 objects instead of many, many thousands.  We still get some depth sorting issues with transparency in certain situations, but it’s way better than it was originally, and there are no frame rate issues when doing it this way.
    • I cleaned up the particle controller code to eliminate generic lists, remove unnecessary GetComponent calls, and do better OOP.  That was a dramatic FPS improvement.
    • The purchase order for the FastLineRenderer package from the Unity Store finally went through, and so now I’m rendering the 200k+ lines of the paths with that.  When I do that, the lines are baked into a mesh, which renders in a single, super fast draw call.  That was also a dramatic FPS improvement.
    • Probably some other stuff too I’ve forgotten about…
  • Also!  Tanner is working on the sound element now, and is getting the basic framework in place.  It’s super exciting hearing that come together.
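The line-rendering trick above, baking huge numbers of line segments into a handful of meshes so each batch costs one draw call, boils down to fixed-size chunking.  A rough illustrative sketch (the per-mesh limit is the approximate figure mentioned elsewhere on this blog; everything else is made up for illustration):

```python
# Sketch: batch a huge list of line segments into fixed-size chunks,
# one mesh per chunk, so each chunk renders in a single draw call.

MAX_LINES_PER_MESH = 65536  # roughly the per-mesh line limit

def batch_lines(segments, max_per_mesh=MAX_LINES_PER_MESH):
    """Split a list of line segments into mesh-sized batches."""
    return [segments[i:i + max_per_mesh]
            for i in range(0, len(segments), max_per_mesh)]

# 200,000 dummy segments collapse into just 4 batches (4 draw calls)
segments = [((0, 0, 0), (1, 1, 1))] * 200_000
batches = batch_lines(segments)
print(len(batches))
```

Two hundred thousand individual draw calls become four, which is why the framerate improvement is so dramatic.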

Perform Studio

The Perform Studio has undergone exciting changes over the summer and into this fall semester.

First, Tanner overhauled the booth.  Instead of leaving it as a closet crammed full with a giant rack for the patch bays, he wall-mounted the bays, making enough room for the space to be used as an actual mixing booth.  Yay!

Perform Studio booth

Second, the OptiTrack Motive:Body software was updated from 1.6 to 1.9, with excellent new features that improve nearly every aspect of the software.  Additionally, we’ve ordered some extra capture-suit accessories.

Third, the Perform Studio is now being used as the CHCI lab.  Doug Bowman and his cohort of graduate students can use the space to run experiments in virtual reality, using the motion capture system present.  We’re very excited to have them doing their research in ICAT’s facilities.  If you’d like to use the space yourself, fear not!  You can still reserve it with 72 hours’ notice, and it will be cleared out and ready.  If you would like to reserve the space, please contact Run Yu, who is handling the reservations for Perform Studio this year: runyu at vt dot edu

Doug Bowman's 3D Interfaces grad students in the Perform Studio, 9/26/2016

Particle Physics Project Update

We’ve been working extremely hard on the particle physics project.  A work-in-progress version of the simulation will be at the Science Festival!  Saturday, October 8th, 10am-3pm.  This simulation will be in the Cube.  Come check it out!

Before I dig into what we’ve been working on, insert obligatory pretty video:

The main to-do list we’d like to accomplish before the Science Festival:
1. Me – Get all the particle sprites for the different kinds of particles
2. Me – Color code the path lines in coordination with the particles
3. Me – Get the full Belle II detector model in, albeit combined and completely opaque.
4. Tanner – implement a draft of the sound (OSC communication of particle data is complete, so he can pick it up and make it sound awesome)
5. Topher, Jesse, Sam – Make a flyer for audience to pick up on their way in
6. Topher, Jesse, Sam – Make all the images for the particle sprites, so I have them to import (there’s around 120 different kinds of particles!)

We’ve had three team meetings now, which have been extremely helpful now that we have a full team on board.  The team includes:

Christopher (Topher) Dobson – senior, physics education
Jesse Barber – junior, physics
Samantha (Sam) Spytek – senior, physics education
Tanner Upthegrove – ICAT staff, sound
Zach Duer (me) – ICAT staff, visualization / organization
Dane Webster – faculty, school of visual arts
George Glasson – faculty, school of education
Leo Piilonen – faculty, physics
Nicholas Polys – faculty, computer science
Todd Ogle – master of all things, TLOS

Just in the past few weeks, Sam and Topher have picked up on the game very quickly and set out with Jesse to do a lot of work developing the educational aspect of the simulation.  In the next update I will go into more detail on that.

In the meantime, I’ve been working my butt off optimizing the simulation to keep a decent frame rate.  There were three major bottlenecks.  I’ll detail them here, along with the solutions:

  1. The model, with all its 400,000+ individual objects, slowed the frame rate down to about 10FPS.  It’s a CPU problem – the GPU breezes through it, since it’s only about 2 million polys.  The CPU is struggling due to the number of draw calls.  A lot of the objects are duplicates (the same mesh with the same material, but a different transform).  That lets Unity reduce the number of draw calls by batch rendering, but just the process of doing that batching slows it way down.  To maintain 60FPS, the full draw cycle has to happen in 16 milliseconds or less, so if there’s one thing that takes 0.01 milliseconds but it happens a few thousand times, it’s a huge problem.  I’m still working on this, but in the meantime, I’m combining each major layer of the Belle II detector into a single object and setting it to an opaque material.  Although this process takes some time, once I’m done the frame rate should be back up to a steady 60FPS.
  2. The lines rendered to show the paths that the particles will take over the course of the simulation (the “pathlines,” as I’m calling them) take too long to render individually.  For the complex events, we’re talking about something like 4 million lines.  Even in one draw call, it’s still just too slow to do every frame.  Rendering just these lines dropped the framerate to about 3FPS.  So instead, I purchased a package from the Unity Store called FastLineRenderer.  Since all the lines are known when the simulation starts, I can send them all to FastLineRenderer, and it converts them into a series of meshes (something like 64k lines per mesh).  This shoots the framerate back up to almost 60FPS.
  3. I needed to optimize the script controlling particle movement.  Yesterday and today, I completely rewrote the thing to eliminate the need for certain loops that were bottlenecking.  I won’t go into any more detail now, but the resulting code is, in my opinion, much cleaner and easier to read, AND it’s a lot faster.
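To put numbers on the 16-millisecond point in item 1: even a tiny per-object cost, multiplied by thousands of objects, blows the frame budget.  A back-of-the-envelope sketch (the per-object cost is the illustrative 0.01 ms figure from above, not a measurement):

```python
# Sketch: why per-object overhead destroys the frame budget at 60FPS.

FRAME_BUDGET_MS = 1000 / 60  # ~16.67 ms available per frame at 60FPS

def total_overhead_ms(num_objects, cost_per_object_ms):
    """Total CPU time spent on a fixed per-object cost each frame."""
    return num_objects * cost_per_object_ms

# A mere 0.01 ms each, a few thousand times over:
cost = total_overhead_ms(5_000, 0.01)
print(f"{cost:.0f} ms of CPU work vs a {FRAME_BUDGET_MS:.2f} ms budget")

# Combined down to ~100 objects, the same per-object cost is negligible:
print(f"{total_overhead_ms(100, 0.01):.1f} ms")
```

Five thousand cheap operations already cost three frames’ worth of time, which is exactly why combining meshes into ~100 objects matters more than how fast each individual operation is.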

All that put together, and you can see how smooth the above video is.  Not all the meshes from the Belle II detector have finished combining, so I just threw in the Electromagnetic Calorimeter for that video.  The framerate never dropped below 50FPS.  I’m pretty proud of that.

Oh, and I put Lucas Freeman’s (mostly complete) Cube model in the project too, so that vastly improves the aesthetic quality.


Cyclorama Update

We’ve made significant progress recently figuring out how to render video for the Cyclorama, getting Unity working with it, and streaming video in.

Rather than expound on what we’ve learned here, I will refer you to the two new documents I posted to the ICATDrive.  One is on the pipeline for rendering from Maya to the Cyclorama, and the other describes the details of the various types of input for the Cyclorama (e.g. video, image sequences, and streamed video).  If you’re interested in using the Cyclorama, please refer to these documents for information on how to get your content ready to be played!

We’ve also been testing out a lot of content and a LOT of people have been through to look at it.  Yesterday we brought through two classes from Graphic Design (students from the classes of Katie Meaney and Patrick Finley) with the intention that they will spend some class time making projects for the Cyclorama.  They were enthusiastic about the possibilities, and we’re excited to see what they might come up with!

Graphic Design Students Visit Cyclorama 8/31/2016

Particle Physics Simulation Update

The particle physics simulation project is coming along quite nicely!  We have the vast majority of the Belle II model imported into Unity and looking good now.  There’s still plenty to work on, but check out how it looks as of right now:

Dr. Leo Piilonen has been extremely helpful in working to produce FBX files for the Belle II model rather than VRML files, and Jesse Barber, a physics undergraduate, has been working closely with me to get the model looking great.

Some of the things on my immediate to-do list:

  • Finish importing Belle II (passive EKLM)
  • Make another scale slider for scaling the Belle II without also repositioning it relative to the viewer
  • Fix the opacity slider bug that I’m getting sometimes
  • Fix the particle trails so that when the Belle II is moved they don’t generate trails for that movement itself and stay consistent within the Belle II model
  • Try to improve the framerate when the entire Belle II is displayed

Stepping into the past through visualization: Exploring America’s Forgotten War

Our history visualization team is on the ground in France.

This project has been made possible with generous support from an ICAT SEAD grant.

Our team from TLOS, the School of Education, the Department of History, the School of Visual Arts, and Mining and Minerals Engineering, together with our partners at Arkemine and the Friends of Vauquois Association, has begun our comprehensive survey of the Butte de Vauquois, near Verdun, France.


Follow us on Twitter at: https://twitter.com/historyviz


The Cube now has another amazing new feature: the Cyclorama!  We finished installing it last week.  The Cyclorama is a massive cylindrical projection screen for immersive experiences.  At roughly 40′ in diameter and 16′ tall, it can accommodate a maximum of 60 people and allows the projection to fill your entire field of vision from anywhere you stand.  Check out this timelapse video of the construction, which took almost 4 days:

A huge thanks goes to the Moss Arts Center production crew, who did all the hard work of actually building the thing!

It is operational and we’re starting to get the hang of how to use it.  We’ve had to readjust the motion capture camera arrangement so that we can capture both inside and outside.  I’ll put up some more posts in the coming days with details about the new motion capture setup, how to use it, etc.