Particle Physics Education – Belle II model

One of the new ICAT SEAD grants for 2016-2017 is a pedagogical particle physics simulation in the Cube.  One of the first steps is to get the Belle II model working in Unity.  This is a bit of a task.

Transparency view of a small portion of the Belle II model

One of the PIs, Leo Piilonen from the VT Physics department, first has to export the model from an arcane format that the researchers (at KEK?) use into a more readable format, like VRML.  VRML itself is an old open format that had some hype in the 90s but never really took off.  Thankfully, 3DS Max can import it, so I can organize and do any necessary edits to the meshes.  That’s also a task.  The meshes are all separated out into the smallest possible units.  I’m not exactly sure at this point, but I think there might be somewhere around 500k of them.  3D modeling software can handle high polygon counts, but it’s not used to dealing with so many separate objects.  So I have to do some manual combining of meshes, renaming materials so that they’re unique, and so on.

Cutaway view of a small portion of the Belle II model

Anyway, after some headache, we can import the meshes group by group into Unity.  I’ve only imported a small portion so far, just to make sure my pipeline works and to adjust as needed.  The next two issues will be:

  1. Decide how to handle transparency and depth sorting (every object has a transparent material, so draw order becomes a real issue).
  2. Write a script that lets the user zoom in on the detector by scaling up the model while seemingly keeping the user in the same position (see the sketch after this list).
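
For the second issue, the trick I have in mind is to scale the detector around the viewer’s current position, so the model grows while the viewer appears to stay put.  Here’s a minimal sketch of that idea; the class and field names are just illustrative, not the actual script:

```csharp
using UnityEngine;

// Hypothetical sketch: scale the detector around the viewer so the viewer
// appears to stay in place while the model grows or shrinks.
public class DetectorZoom : MonoBehaviour
{
    public Transform detector;  // root of the Belle II model
    public Transform viewer;    // tracked head or camera rig

    public void SetScale(float newScale)
    {
        // Where does the viewer sit relative to the detector before scaling?
        Vector3 viewerInDetector = detector.InverseTransformPoint(viewer.position);

        detector.localScale = Vector3.one * newScale;

        // After scaling, that same local point maps to a new world position;
        // shift the detector so the point lands back on the viewer.
        Vector3 shifted = detector.TransformPoint(viewerInDetector);
        detector.position += viewer.position - shifted;
    }
}
```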

More to come soon!

 

Motion Capture Streaming Receiver

I just finished integrating the Unity interfaces for our two motion capture systems!  Now all you have to do is add the prefab to your project and set the drop-down to the system you want to use!

MotionCaptureStreamingReceiver

In our primary facility, the Cube, we use a Qualisys motion capture system.  Qualisys Track Manager streams rigid body position and rotation data over UDP (in our setup, over WiFi) to clients, which drive virtual reality headsets and whatnot.  The clients that I take care of run Unity.  When I started working here, we had an interface script which received this data and let the Unity user plug in which game object should be moved by it.  It worked great; it was a well-written script.
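
The core of that interface is simple: each streamed rigid body carries a position and rotation, and the script copies them onto whichever game object the user assigned.  A rough sketch of that “apply the latest pose” step, with stand-in names (this is not the QTM API, just the shape of the idea):

```csharp
using UnityEngine;

// Hypothetical sketch; RigidBodyPose and OnPoseReceived are stand-ins for
// however the streaming interface actually delivers parsed packets.
public struct RigidBodyPose
{
    public Vector3 position;
    public Quaternion rotation;
}

public class RigidBodyFollower : MonoBehaviour
{
    public Transform target;           // the game object the user wants driven
    private RigidBodyPose latestPose;  // updated whenever a new packet is parsed

    public void OnPoseReceived(RigidBodyPose pose)
    {
        latestPose = pose;
    }

    void Update()
    {
        // Copy the most recent motion capture pose onto the target each frame.
        target.position = latestPose.position;
        target.rotation = latestPose.rotation;
    }
}
```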

Then, over in the Perform studio, which is our “mini-Cube” prototyping space, we have an OptiTrack motion capture system.  OptiTrack advertises its NatNet SDK, a middleman that receives the OptiTrack data and reformats it to send out to multiple other clients.  That adds latency and means an extra program (the NatNet SDK) has to be running.

Piggy-backing on the work of another OptiTrack user, I wrote a script that receives the data directly from the OptiTrack stream.  I then wrote a wrapper that instantiates both the Qualisys Track Manager interface as well as the OptiTrack Motive Body interface, and provides a handy context-sensitive interface for setting the parameters for each of the systems.

I had to learn to create a custom object inspector using the UnityEditor library.  I didn’t do anything too fancy, but it was fun to learn!
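
The context-sensitive part boils down to a custom Editor that reads the drop-down and only draws the fields that apply to the selected system.  Here’s a simplified sketch under assumed field names (the real receiver has more settings than this):

```csharp
using UnityEngine;
#if UNITY_EDITOR
using UnityEditor;
#endif

// Hypothetical, stripped-down version of the receiver; field names are illustrative.
public class MocapReceiverSketch : MonoBehaviour
{
    public enum MocapSystem { Qualisys, OptiTrack }
    public MocapSystem system;

    public string serverAddress = "127.0.0.1";
    public int qtmPort = 22222;        // only relevant for Qualisys
    public int natNetDataPort = 1511;  // only relevant for OptiTrack
}

#if UNITY_EDITOR
[CustomEditor(typeof(MocapReceiverSketch))]
public class MocapReceiverSketchEditor : Editor
{
    public override void OnInspectorGUI()
    {
        serializedObject.Update();

        EditorGUILayout.PropertyField(serializedObject.FindProperty("system"));
        EditorGUILayout.PropertyField(serializedObject.FindProperty("serverAddress"));

        // Context-sensitive part: only draw the fields for the selected system.
        var system = (MocapReceiverSketch.MocapSystem)
            serializedObject.FindProperty("system").enumValueIndex;

        if (system == MocapReceiverSketch.MocapSystem.Qualisys)
            EditorGUILayout.PropertyField(serializedObject.FindProperty("qtmPort"));
        else
            EditorGUILayout.PropertyField(serializedObject.FindProperty("natNetDataPort"));

        serializedObject.ApplyModifiedProperties();
    }
}
#endif
```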

Wind Field

(Above video, severe weather visualization project, screen capture of Goshen wind field simulation, May 25, 2016)

First post on our new blogs!

Recently, I’ve been spending a lot of time working on visualizing severe weather, working closely with Trevor White, a grad student in Virginia Tech’s meteorology department, as well as the project’s PI, Bill Carstensen, from the Geography department.  The Weather Channel was just here last week, taping some spots on the El Reno tornado of 2013 and Hurricane Charley from 2004.  I will post that footage once it airs.

The video at the top of this post shows a wind field – a simulation of wind inside a tornado!  The colors are coded as follows:

Bright red: moving upward quickly
Bright blue: moving downward quickly
Bright white: moving horizontally quickly
Dim red: moving upward slowly
Dim blue: moving downward slowly
Dim gray: moving horizontally slowly
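
In code, that coloring amounts to picking a base color from the dominant direction of motion and scaling its brightness by speed.  A rough sketch (the normalization speed and the vertical-versus-horizontal test here are my own assumptions, not the exact scheme in the video):

```csharp
using UnityEngine;

// Hedged sketch of the color coding above; maxSpeed and the
// vertical-vs-horizontal test are assumptions, not the exact scheme used.
public static class WindColorSketch
{
    public static Color FromVelocity(Vector3 v, float maxSpeed)
    {
        // Pick the base color from the dominant direction of motion.
        float horizontalSpeed = new Vector2(v.x, v.z).magnitude;
        Color baseColor;
        if (Mathf.Abs(v.y) > horizontalSpeed)
            baseColor = v.y > 0f ? Color.red : Color.blue;  // up / down
        else
            baseColor = Color.white;                        // horizontal

        // Dim slow air, brighten fast air.
        float brightness = Mathf.Lerp(0.25f, 1f, Mathf.Clamp01(v.magnitude / maxSpeed));
        Color c = baseColor * brightness;
        c.a = 1f;
        return c;
    }
}
```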

Most of my time on the severe weather visualization has been devoted to the wind field particle simulation.  I’m not a meteorologist, so I can’t explain this 100% effectively, but here’s the basic idea:

We have some data sets that show wind velocities (that is, how fast the wind is going and in what direction) inside a tornado.  This data is derived from a dual-Doppler setup, where two Doppler radars are positioned at 90 degrees to a storm, allowing triangulation to recover the aforementioned wind velocities.  After Trevor has analyzed and cleaned the data, he sends it to me as a 3D grid of voxels (that is, a 3D grid of evenly spaced points), each of which has a wind velocity.

I bring that data into Unity, where I use it to create a particle simulation.  I generate particles throughout the volume (the area where we have wind information), and then I figure out which voxel they are in.  I use that voxel’s wind velocity data to move the particle.  Then the next frame, I find the new position of the particle, figure out what voxel it’s now in, and use that wind velocity data to move it again.
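
A stripped-down sketch of that per-frame loop looks something like this; the grid layout, field names, and time step are assumptions, and the real version runs far more particles:

```csharp
using UnityEngine;

// Minimal sketch of the voxel-lookup advection step described above.
public class WindFieldAdvection : MonoBehaviour
{
    public Vector3[] particles;      // particle positions in world space
    public Vector3[] voxelVelocity;  // one wind velocity per voxel, flattened (length nx*ny*nz)
    public Vector3 gridOrigin;       // world position of the grid's corner
    public float voxelSize = 1f;
    public int nx, ny, nz;           // grid dimensions

    void Update()
    {
        for (int i = 0; i < particles.Length; i++)
        {
            // Figure out which voxel the particle currently sits in.
            Vector3 local = (particles[i] - gridOrigin) / voxelSize;
            int x = Mathf.Clamp((int)local.x, 0, nx - 1);
            int y = Mathf.Clamp((int)local.y, 0, ny - 1);
            int z = Mathf.Clamp((int)local.z, 0, nz - 1);

            // Move the particle by that voxel's wind velocity, then repeat next frame.
            Vector3 v = voxelVelocity[(x * ny + y) * nz + z];
            particles[i] += v * Time.deltaTime;
        }
    }
}
```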

Essentially, this gives us a highly accurate 3D visualization of how the wind is flowing at a single moment of the storm.  It looks sweet, and I’ve been building a lot of features to aid the visualization, like motion trails that show where a particle has been, arrows that show the wind direction in each voxel, etc.

Because the simulation requires a lot of calculation, it originally slowed Unity to a crawl at 60,000 or more particles.  So I’ve moved the calculations over to a console application that does all the math in CUDA, letting me use the parallelization of the graphics card to push out the necessary calculations easily.  I then transfer that data from the console application to Unity over UDP.  Right now, UDP is actually the biggest bottleneck, because I have to transfer the position and color information of each particle AND each point of the motion trail to Unity 60 times per second.  That’s a lot of data.  Hopefully Unity will move out of the stone age soon and upgrade to a more recent .NET framework (it’s stuck at .NET 2.0/3.5 right now because of the way it uses Mono), at which point I can use the Cudafy library inside Unity itself, or more likely in a plugin I write.
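
To put a rough number on “a lot of data” (assuming 12 bytes of position and 4 bytes of color per point, which is my back-of-envelope guess, and ignoring the trail points entirely): 60,000 particles × 16 bytes × 60 updates per second is about 57.6 MB/s, or roughly 460 Mbit/s, before the motion trails multiply it further.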