Cryo-EM has received global recognition for its capacity to resolve high-resolution structures of macromolecules and, deservedly, was the topic of the 2017 Nobel Prize in Chemistry. This recent spike in attention to cryo-EM has created a major need for training tools for incoming scientists. To this end, funding institutions such as the NIH have set aside funds to develop resources such as websites, online courses, and videos for cryo-EM newcomers. In May 2018, the Purdue cryo-EM facility was awarded an NIH grant to develop VR training tools for cryo-EM. The PIs on this grant include Dr. Wen Jiang, Dr. Victor Chen, Dr. William Watson, and Dr. Tommy Sors; the two graduate students funded by this research are Brenda Gonzalez and me.

Fig 1: Physical CP3 Device

The goal of Purdue’s NIH Cryo-EM VR Training grant is to develop Virtual Reality training tools for familiarizing new users with equipment, such as the microscope and plunge-freezing instruments, through a safe, easily accessible, virtual environment. By introducing new users to equipment through VR, there is no risk of harming expensive materials. Also, training through VR does not take time out of regular instrument usage—a major hurdle in current training regimens.

Currently, we are working on the Gatan CP3 module, a key component of the whole training procedure. The Gatan CP3 is specifically designed for preparing cryogenic biological specimens; once prepared, these specimens can be observed in a cryogenic electron microscope. Fig 1 shows the physical Gatan CP3 device.

The concept of plunge freezing:

  1. Add sample to a grid.
  2. Blot with filter paper using appropriate timing to achieve optimum ice thickness.
  3. Snap freeze in liquid ethane to preserve the sample for imaging on the EM.
Fig 2: Different components related to CP3 training. Screenshot from Unreal Engine 4

Tasks related to CP3 training procedure:

  1. Turn on the instrument with the CP3 switch on the side
  2. Put in the wet sponge (this will cause the humidity gauge to display 90%)
  3. Fill liquid nitrogen in the workstation separately
  4. Cool the workstation to –180.0 °C
  5. Load the clipped CP3 tweezers onto the plunge rod (Blue button on the plunge rod should be facing front)
  6. Pick up the pipette (p2)
  7. Pick up the sample from the tube in the yellow tube rack holder (the pipette changes color)
  8. Open the sample port on the CP3 chamber
  9. Load the sample into the hanging tweezers with the pipette (the pipette changes color to indicate successful sample loading)
  10. Close sample port
  11. Twist the plunge rod so the blue dot now faces the side (this detail is critical because in real life, this will ruin your sample if you forget this step)
  12. Press START – the blotters should touch, and the display should count down. The blotters should retract when the timer reaches 0 seconds
  13. The plunge rod should lower, and the tweezers should contact the small ethane cup inside the LN2 workstation.
  14. Users should be able to reset the whole tutorial through the UI to reset everything in the VR environment
  15. There should also be an option for PRACTICE or TEST mode in the UI. The practice mode can have hints available and a guide of steps. Test mode will not have these features.
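The tasks above form a strict sequence, and some steps (like twisting the plunge rod) must not be skipped. A minimal sketch of how such an ordered checklist could be enforced is shown below; the class and step names are my own illustrative assumptions, not taken from the actual Unreal Engine project.

```cpp
#include <cassert>

// Hypothetical encoding of the CP3 training tasks as an ordered checklist.
// Enum values follow the numbered list above (step names are assumptions).
enum class Cp3Step {
    PowerOn, InsertWetSponge, FillLiquidNitrogen, CoolWorkstation,
    LoadTweezers, PickUpPipette, AspirateSample, OpenSamplePort,
    LoadSample, CloseSamplePort, TwistPlungeRod, PressStart, Plunge
};

class Cp3Checklist {
public:
    // A step is accepted only if it is the next one in the sequence;
    // out-of-order attempts (e.g. plunging before twisting the rod) fail.
    bool TryStep(Cp3Step step) {
        if (static_cast<int>(step) != next_) return false;
        ++next_;
        return true;
    }
    bool Completed() const { return next_ > static_cast<int>(Cp3Step::Plunge); }
private:
    int next_ = 0;  // index of the next expected step
};
```

In practice a simulator might allow some steps in any order, but a linear check like this is enough to catch the critical mistake called out in step 11.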

Currently, almost all these tasks are implemented and polished.

My Duties:

I work as the research assistant (main developer) on this project:

  • Manage and document project progress: status reports, change logs, and technical articles are all well maintained.
  • Design the overall structure of the training simulator. Currently, the demo consists of three levels: a tutorial level that provides detailed guidance for novice users; a free-trial level that lets trainees practice without any hints; and a test level that provides detailed feedback on the trainee's performance.
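The three levels differ mainly in which assistance and feedback features are enabled. A minimal sketch of that idea, with feature names that are my own assumptions rather than the project's actual code, might look like:

```cpp
// Hypothetical per-level feature flags for the three training levels.
enum class TrainingLevel { Tutorial, FreeTrial, Test };

struct LevelFeatures {
    bool hints;        // step hints available to the trainee
    bool stepGuide;    // on-screen guide listing the full procedure
    bool scoreReport;  // detailed performance feedback at the end
};

LevelFeatures FeaturesFor(TrainingLevel level) {
    switch (level) {
        case TrainingLevel::Tutorial:  return {true,  true,  false};
        case TrainingLevel::FreeTrial: return {false, false, false};
        case TrainingLevel::Test:      return {false, false, true};
    }
    return {false, false, false};  // unreachable, silences compiler warnings
}
```

Keeping the levels as one scene with different flags, rather than three separate scenes, would avoid duplicating the CP3 interaction logic.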
  • Design the whole training procedure in Unreal Engine 4, converting real-world requirements into visual and code implementations. For example, when trainees need to pick up an object in our virtual environment, we design distinct hand gestures for grabbing different objects in order to provide natural visual feedback for the grabbing behavior.
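The per-object gesture idea can be sketched as a simple lookup from grabbable object to hand pose. Both the object names and gesture names below are illustrative assumptions, not the project's actual asset names.

```cpp
#include <map>
#include <string>

// Hypothetical mapping from a grabbable object to the hand pose used
// when the trainee picks it up (names are assumptions for illustration).
enum class GrabGesture { PinchThumbIndex, TwoFingerHold, FullFistGrip };

GrabGesture GestureFor(const std::string& objectName) {
    static const std::map<std::string, GrabGesture> kGestures = {
        {"Tweezers", GrabGesture::PinchThumbIndex},  // fine pinch grip
        {"Grid",     GrabGesture::TwoFingerHold},    // delicate hold
        {"Pipette",  GrabGesture::FullFistGrip},     // whole-hand grip
    };
    auto it = kGestures.find(objectName);
    // Fall back to a generic grip for objects without a custom gesture.
    return it != kGestures.end() ? it->second : GrabGesture::FullFistGrip;
}
```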
  • Refine all the 3D models and textures in Maya in order to better replicate the physical CP3 device.
  • Optimize the shaders, lighting and rendering effects in Unreal Engine 4 in order to create beautiful and vivid graphics.

CP3 Training Module Versions:

March 2018 (V0.1):

  1. Only a few components modeled; non-interactive


May 2018 (V0.2):

  1. Almost all components and auxiliary equipment modeled (still need air and ethane tanks)
  2. Interactive: virtual devices can communicate with each other to perform specific tasks

October 2018 (V0.25):

  1. All components and auxiliary equipment modeled and revamped
  2. Interactive and improved environment

November 2018 (V0.3):

  1. Self-guided training environment incorporated (the tutorial level). In this level, the trainee receives detailed guidance on how to perform the whole procedure.
  2. Implemented the step-by-step guidance logic in this scene. Trainees hear narration explaining the purpose of each step, and target objects are highlighted and indicated by a red arrow.

CP3 Training Video (V0.3):

For more details, please visit:

Thanks for reading…
