Particle Physics Simulation Update

The particle physics simulation project is coming along nicely!  We now have the vast majority of the Belle II model imported into Unity and looking good.  There’s still plenty to work on, but check out how it looks right now:

Dr. Leo Piilonen has been extremely helpful in working to produce FBX files for the Belle II model rather than VRML files, and Jesse Barber, a physics undergraduate, has been working closely with me to get the model looking great.

Some of the things on my immediate to-do list:

  • Finish importing the Belle II model (passive EKLM)
  • Add a second scale slider that scales the Belle II model without also repositioning it relative to the viewer
  • Fix the intermittent opacity slider bug
  • Fix the particle trails so that moving the Belle II model doesn’t generate trails from the movement itself, and so the trails stay consistent relative to the model (see the sketch after this list)
  • Try to improve the framerate when the entire Belle II model is displayed
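
For the trail issue, the most promising fix is to clear each trail whenever the detector itself moves, so the movement doesn’t leave streaks behind.  Here’s a minimal sketch of the idea, assuming Unity’s TrailRenderer (whose Clear() method exists in recent Unity versions); it’s one possible approach, not necessarily the fix we’ll end up shipping:

```csharp
using UnityEngine;

// Sketch: wipe particle trails whenever the detector model's transform
// changes, so repositioning the model doesn't leave streaks behind.
// Assumes Unity's TrailRenderer, whose Clear() exists in recent versions.
public class TrailResetOnMove : MonoBehaviour
{
    void LateUpdate()
    {
        // Unity sets hasChanged whenever this transform is modified.
        if (transform.hasChanged)
        {
            foreach (var trail in GetComponentsInChildren<TrailRenderer>())
                trail.Clear();
            transform.hasChanged = false;
        }
    }
}
```

Attached to the root of the Belle II model, any slider or script that moves the model would then automatically reset the trails on the same frame.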

Stepping into the past through visualization: Exploring America’s Forgotten War

Our history visualization team is on the ground in France.

This project has been made possible with generous support from an ICAT SEAD grant.

 

Our team from TLOS, the School of Education, the Department of History, the School of Visual Arts, and Mining and Minerals Engineering, together with our partners at Arkemine and the Friends of Vauquois Association, has begun a comprehensive survey of the Butte de Vauquois, near Verdun, France.


Follow us on Twitter at: https://twitter.com/historyviz

Cyclorama

The Cube now has another amazing new feature: the Cyclorama!  We finished installing it last week.  The Cyclorama is a massive cylindrical projection screen for immersive experiences.  At roughly 40′ in diameter and 16′ tall, it can accommodate up to 60 people and allows the projection to fill your entire field of vision from anywhere you stand.  Check out this timelapse video of the construction, which took almost 4 days:

A huge thanks goes to the Moss Arts Center production crew, who did all the hard work of actually building the thing!

It is operational and we’re starting to get the hang of how to use it.  We’ve had to readjust the motion capture camera arrangement so that we can capture both inside and outside the Cyclorama.  I’ll put up some more posts in the coming days with details about the new motion capture setup, how to use it, and so on.

Roanoke Science Museum Planetarium

Yesterday, we paid our second visit to the Science Museum of Western Virginia’s planetarium in Roanoke.  We (ICAT, Advanced Research Computing, Architecture, Music) are working with museum staff and community volunteers to build a vision for the future use of this amazing space, which is currently underused because of outdated technology.

On this visit, in addition to our minds and some measuring tapes, we brought four projectors (from ARC’s old VisCube setup), a computer, and a Max patch that I rigged up the day before to test stitching and blending a single video across all four projectors.  It worked great!  There are still some issues to work out in perfecting the blending and getting 4K video running smoothly, but the four projectors were able to cover almost the entire screen!  Here’s an idea of what it looked like (an image doesn’t really do it justice: the projection is so big that the camera lens can’t capture it all).  With chickens.  Don’t ask.
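
The core of the blending is straightforward: in the band where two adjacent projectors overlap, each image is faded with a gamma-corrected ramp so that the summed light stays uniform.  The real work happens inside the Max patch, but here’s a sketch of that weight function in code form (the overlap fraction and gamma value are placeholders, not measurements from the actual setup):

```csharp
using UnityEngine;

// Sketch of projector edge blending: in the overlap band between two
// adjacent projectors, fade each image with a gamma-corrected ramp so
// the summed light stays uniform.  The overlap fraction and gamma are
// placeholder values, not measurements from the planetarium setup.
public static class EdgeBlend
{
    const float Overlap = 0.15f; // fraction of frame width shared with a neighbor
    const float Gamma = 2.2f;    // display gamma; ramps must sum to 1 in light

    // u: horizontal position in this projector's frame, 0..1.
    public static float Weight(float u, bool blendLeft, bool blendRight)
    {
        float w = 1f;
        if (blendLeft && u < Overlap)
            w = u / Overlap;             // ramp up at the left edge
        if (blendRight && u > 1f - Overlap)
            w = (1f - u) / Overlap;      // ramp down at the right edge

        // Compensate for display gamma: (w^(1/gamma))^gamma = w, so the
        // two linear ramps add to exactly 1 in emitted light.
        return Mathf.Pow(w, 1f / Gamma);
    }
}
```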

Better with chicken.

Belle II particle physics simulation update

We have some exciting updates from the particle physics simulation we’ve been working on for the past month!

First, we now have Unity scripts that read the particle data from a CSV file, sort it correctly, and use it to spawn and move particles.  It’s a very important first step – in fact, most of what comes after this point is frills (although there are lots of frills).  This is the *meat* of the project: taking particle data and visualizing it.  There’s a video below (slightly confusing to watch, but proof that it works):
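
To give a flavor of how it works, here’s a minimal sketch of a CSV-driven particle player.  The column layout, class names, and playback scheme here are illustrative assumptions, not our actual scripts:

```csharp
using System.Collections.Generic;
using System.Globalization;
using UnityEngine;

// Minimal sketch of a CSV-driven particle player.  Assumes rows of
// "id,time,x,y,z"; the layout and names are illustrative assumptions,
// not the project's actual scripts.
public class ParticleCsvPlayer : MonoBehaviour
{
    public TextAsset csvFile;         // particle data exported as CSV
    public GameObject particlePrefab; // what to spawn for each particle

    struct Sample { public int id; public float t; public Vector3 pos; }

    List<Sample> samples = new List<Sample>();
    Dictionary<int, Transform> particles = new Dictionary<int, Transform>();
    float clock;
    int next;

    void Start()
    {
        foreach (string line in csvFile.text.Split('\n'))
        {
            string[] f = line.Split(',');
            int id;
            if (f.Length < 5 || !int.TryParse(f[0], out id))
                continue; // skip headers and blank lines

            samples.Add(new Sample {
                id  = id,
                t   = float.Parse(f[1], CultureInfo.InvariantCulture),
                pos = new Vector3(
                    float.Parse(f[2], CultureInfo.InvariantCulture),
                    float.Parse(f[3], CultureInfo.InvariantCulture),
                    float.Parse(f[4], CultureInfo.InvariantCulture))
            });
        }
        // Sort by timestamp so playback can walk the list linearly.
        samples.Sort((a, b) => a.t.CompareTo(b.t));
    }

    void Update()
    {
        clock += Time.deltaTime;
        // Apply every sample whose timestamp has passed.
        while (next < samples.Count && samples[next].t <= clock)
        {
            Sample s = samples[next++];
            Transform tr;
            if (!particles.TryGetValue(s.id, out tr))
            {
                tr = Instantiate(particlePrefab, transform).transform;
                particles[s.id] = tr;
            }
            tr.localPosition = s.pos;
        }
    }
}
```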

Also, Jesse Barber (Physics sophomore) and Kaelum Hasler (high school student worker) have been working with me to get the VRML model of the Belle II organized.  We had to combine a lot of objects to get the object count down to something manageable (it was at something like 300k to begin with), but we also had to cut a lot of the cylinders into four parts in order to solve a transparency depth-sorting issue in Unity.  They spent almost all of last week working on that, and we now have a model in Unity!  This model is still incomplete because the VRML export we’re working with doesn’t quite have the full model, but we’re almost there, and we can really see what it’s going to look like when it’s all done!  See below for another video of that.
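
For the curious, the combining step boils down to merging meshes that share a material.  We did ours in the modeling software, but here’s a rough sketch of the same idea using Unity’s Mesh.CombineMeshes (names and details are illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Rough sketch: merge all child meshes into one to cut the object
// count.  Our actual combining was done in the modeling software; this
// just shows the equivalent idea with Unity's API.  Assumes the
// children share one material (assign it to the new MeshRenderer).
public class MeshCombiner : MonoBehaviour
{
    void Start()
    {
        MeshFilter[] filters = GetComponentsInChildren<MeshFilter>();
        CombineInstance[] combine = new CombineInstance[filters.Length];

        for (int i = 0; i < filters.Length; i++)
        {
            combine[i].mesh = filters[i].sharedMesh;
            combine[i].transform = filters[i].transform.localToWorldMatrix;
            filters[i].gameObject.SetActive(false); // hide the originals
        }

        Mesh merged = new Mesh();
        // 32-bit indices (recent Unity versions), since the combined
        // geometry easily exceeds the 65k-vertex limit of 16-bit buffers.
        merged.indexFormat = IndexFormat.UInt32;
        merged.CombineMeshes(combine);

        gameObject.AddComponent<MeshFilter>().sharedMesh = merged;
        gameObject.AddComponent<MeshRenderer>();
    }
}
```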

High Performance Wireless VR

We now have a working pipeline for VR that is simultaneously wireless and high performance (computationally and/or graphically).  Here is a flow chart:

 

High Performance VR in Cube flow chart

Despite the number of moving parts, it works reliably.

Here are the next steps to improving it:

  1. Replace Garfield with the render cluster in the Andrews Information Systems Building.
  2. Replace NVIDIA Gamestream and the Moonlight app with our in-house encoder / streamer / decoder solution that is currently in the works.
  3. Allow for simulation software besides Unity.
  4. Replace the current VR headset with something nicer (already purchased, not yet tried).

MAC 218 Android Build

One of our projects is to have a complete model of the Moss Arts Center, both for Mirror Worlds and as part of the assets we can share with ICAT affiliates.

A recent alum, Lucas Freeman, was assigned to work with me for an independent study in his last semester at VT.  I had him model room 218 in the MAC, load the model into Unity, and light it appropriately.

Moss Arts Center Room 218, modeled by Lucas Freeman

I then built a little demo of that for Android.  You can download it here:

(Android only)
https://drive.google.com/file/d/0B789Y1umpu51OExUX0ViOHc2aWc/view?usp=sharing

To install it, you will need to go to your phone’s Settings menu, then to Security, and enable Unknown Sources.  This allows your phone to install an app from a source other than the Play Store.  You can turn this back off as soon as you’re done installing the app.

In the near future, I will get this demo built for iOS as well, and then published to both the Play Store and the Apple App Store.

Disclaimer: This material is based upon work supported by the National Science Foundation under Grant No. 1305231.  Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

ICAT Drive

ICAT Drive is now live –

https://drive.google.com/folderview?id=0B789Y1umpu51SjBkY0xBSlRldkU&usp=sharing

ICAT Drive is a central repository for all things ICAT.  Manuals, assets, useful code, etc.  The above link is open to anyone.  Most of the documents and assets in that folder are open to the public.  There are a few things, like the passwords document, that require special permissions which you can acquire by contacting ICAT.

We would really like the ICAT Drive to be an important resource for all projects that use the ICAT facilities.  The guides, APIs, and assets within should allow both faculty and students to jump start their projects without having to start from square one.

So far, I have published a series of manuals on how to use the motion capture systems in the Cube and the Perform Studio, and how to interface those motion capture systems with Unity.  That comes with the necessary assets I created to enable the interface.  There is also a guide on how to import point clouds into Unity (along with another unitypackage with scripts to enable this utility).  Tying it all together is a guide with an example project that shows how to combine everything from the other guides.  The guides are organized modularly, making it easy to refer to and edit any given operation.

There are also assets.  Right now there is a model of the Cube and some neutral character models for use in any project.  This collection will expand considerably over the next few months.

Particle Physics Education – Belle II model

One of the new ICAT SEAD grants for 2016-2017 is a pedagogical particle physics simulation in the Cube.  One of the first steps is to get the Belle II model working in Unity.  This is a bit of a task.

Transparency view of a small portion of the Belle II model

One of the PIs, Leo Piilonen from the VT Physics department, has to first export the model from the arcane format that the researchers (at KEK?) use to a more readable format, like VRML.  VRML itself is an old open format that had some hype in the 90s but never really took off.  Thankfully, 3DS Max can import it, so I can organize the meshes and make any necessary edits.  That’s also a task.  The meshes are all separated out into the smallest possible units.  I’m not exactly sure at this point, but I think there might be somewhere around 500k separate objects.  3D modeling software can handle high polygon counts, but it’s not used to dealing with so many different objects.  So I have to do some manual combining, rename the materials so that they’re unique, and so on.

Cutaway view of a small portion of the Belle II model

Anyway, after some headache, we can import the meshes group by group into Unity.  I’ve only imported a small amount so far, just to make sure my pipeline works and to adjust as needed.  The next two issues will be:

  1. Decide how to deal with transparency and depth-sorting (every object has a transparent material, so I have to make some decisions here).
  2. Write a script that lets the user zoom in on the object by scaling up the detector model while seemingly keeping the user in the same position (see the sketch after this list).
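
For the second item, the trick is to scale the model about the viewer’s position rather than about the model’s own pivot.  Here’s a minimal sketch, assuming the viewer’s head position comes from a tracked transform (all names are illustrative):

```csharp
using UnityEngine;

// Sketch: scale the detector model about the viewer's position so
// zooming doesn't appear to move the viewer.  "viewer" would be the
// tracked head transform; the names are illustrative assumptions.
public class ScaleAboutViewer : MonoBehaviour
{
    public Transform viewer;

    // Hook this up to a UI slider's onValueChanged event.
    public void SetScale(float newScale)
    {
        float ratio = newScale / transform.localScale.x;

        // Re-anchor the model so the point at the viewer stays put:
        // every point then scales away from the viewer, not the pivot.
        transform.position = viewer.position
                           + (transform.position - viewer.position) * ratio;
        transform.localScale = Vector3.one * newScale;
    }
}
```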

More to come soon!

 

Motion Capture Streaming Receiver

I just finished integrating the Unity interfaces for our two motion capture systems!  Now all you have to do is add the prefab to your project and set the drop-down to whichever system you want to use!

MotionCaptureStreamingReceiver

In our primary facility, the Cube, we use a Qualisys motion capture system.  Qualisys Track Manager streams rigid body position and rotation data over UDP (and thus over WiFi) to clients, which control virtual reality headsets and whatnot.  The clients that I take care of run Unity.  When I started working here, we had an interface script which received this data and allowed the Unity user to plug in which game object should be moved based on it.  It worked great; it was a well-written script.
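
Conceptually, the receiving side looks something like this.  It’s a bare-bones sketch, not the actual interface script: the real QTM protocol is more involved, and the packet layout here (seven floats: position plus a rotation quaternion) is an invented placeholder:

```csharp
using System.Net;
using System.Net.Sockets;
using UnityEngine;

// Bare-bones sketch of a UDP rigid body receiver.  The real Qualisys
// Track Manager protocol is more involved; the packet layout assumed
// here (7 floats: position xyz + rotation quaternion) is a placeholder.
public class RigidBodyReceiver : MonoBehaviour
{
    public GameObject target;  // the object driven by the mocap data
    public int port = 22222;   // placeholder port number

    UdpClient udp;
    Vector3 latestPos;
    Quaternion latestRot = Quaternion.identity;
    readonly object dataLock = new object();

    void Start()
    {
        udp = new UdpClient(port);
        udp.BeginReceive(OnReceive, null); // listen on a background thread
    }

    void OnReceive(System.IAsyncResult ar)
    {
        IPEndPoint remote = null;
        byte[] data = udp.EndReceive(ar, ref remote);
        if (data.Length >= 28) // 7 floats * 4 bytes
        {
            lock (dataLock)
            {
                latestPos = new Vector3(
                    System.BitConverter.ToSingle(data, 0),
                    System.BitConverter.ToSingle(data, 4),
                    System.BitConverter.ToSingle(data, 8));
                latestRot = new Quaternion(
                    System.BitConverter.ToSingle(data, 12),
                    System.BitConverter.ToSingle(data, 16),
                    System.BitConverter.ToSingle(data, 20),
                    System.BitConverter.ToSingle(data, 24));
            }
        }
        udp.BeginReceive(OnReceive, null); // keep listening
    }

    void Update()
    {
        // Apply the most recent sample on Unity's main thread.
        lock (dataLock)
        {
            target.transform.position = latestPos;
            target.transform.rotation = latestRot;
        }
    }

    void OnDestroy() { udp.Close(); }
}
```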

Then, over in the Perform Studio, which is our “mini-Cube” prototyping space, we have an OptiTrack motion capture system.  OptiTrack advertises its NatNet SDK, a middleman that receives the OptiTrack data and then reformats it to send out to multiple other clients.  This increases latency and means that an extra program needs to run.

Piggybacking on the work of another OptiTrack user, I wrote a script that receives the data directly from the OptiTrack stream.  I then wrote a wrapper that instantiates both the Qualisys Track Manager interface and the OptiTrack Motive Body interface, and provides a handy context-sensitive interface for setting the parameters of each system.
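
The wrapper idea is simple; here’s a sketch of it (class and field names are illustrative stand-ins, not the actual ICAT assets):

```csharp
using UnityEngine;

// Stand-in stubs for the two system-specific interfaces; the real
// scripts contain the actual streaming logic.
public class QualisysInterface : MonoBehaviour { public GameObject target; }
public class OptiTrackInterface : MonoBehaviour { public GameObject target; }

// Sketch of the wrapper: one component, one drop-down, and it adds
// whichever system-specific receiver was selected.  Names here are
// illustrative assumptions, not the actual ICAT assets.
public class MotionCaptureStreamingReceiver : MonoBehaviour
{
    public enum MocapSystem { Qualisys, OptiTrack }

    public MocapSystem system;  // shown as a drop-down in the inspector
    public GameObject target;   // the object to drive with tracking data

    void Start()
    {
        switch (system)
        {
            case MocapSystem.Qualisys:
                gameObject.AddComponent<QualisysInterface>().target = target;
                break;
            case MocapSystem.OptiTrack:
                gameObject.AddComponent<OptiTrackInterface>().target = target;
                break;
        }
    }
}
```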

I had to learn to create a custom object inspector using the UnityEditor library.  I didn’t do anything too fancy, but it was fun to learn!
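
For reference, a context-sensitive inspector boils down to something like this (a minimal sketch against the wrapper stub above; the real inspector exposes each system’s actual parameters):

```csharp
using UnityEditor;

// Minimal custom-inspector sketch: show only the fields relevant to
// the selected system.  Field names match the wrapper sketch above
// and are illustrative; this script must live in an Editor folder.
[CustomEditor(typeof(MotionCaptureStreamingReceiver))]
public class MotionCaptureStreamingReceiverEditor : Editor
{
    public override void OnInspectorGUI()
    {
        serializedObject.Update();

        SerializedProperty system = serializedObject.FindProperty("system");
        EditorGUILayout.PropertyField(system);
        EditorGUILayout.PropertyField(serializedObject.FindProperty("target"));

        // The context-sensitive part: per-system settings would go here.
        if (system.enumValueIndex == 0)
            EditorGUILayout.LabelField("Qualisys-specific settings...");
        else
            EditorGUILayout.LabelField("OptiTrack-specific settings...");

        serializedObject.ApplyModifiedProperties();
    }
}
```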