Large Scale Visualization with ParaView

Date: September 25, 2017

Presented at the Argonne Training Program on Extreme-Scale Computing 2017. Slides for this presentation are available here: http://extremecomputingtraining.anl.g…

Introduction to ParaView

Date: April 18, 2012
Time: 2:00 – 3:30 p.m. EST

This course provides a hands-on overview of the ParaView visualization application. It demonstrates the basic interactive visual exploration process, including loading data, processing it, adjusting parameters, and interacting with the data. Key concepts such as cutting, clipping, contouring, probing, and glyphing are discussed, and the course provides examples of generating output in the form of processed data, rendered images, and animations. Prerequisites: None.
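
As a taste of that workflow, the following is a minimal sketch using ParaView's Python scripting interface (run with pvpython). It assumes the disk_out_ref.ex2 example dataset that ships with ParaView and a point scalar field named Temp; substitute your own file, field, and isovalue.

    from paraview.simple import *

    # Load data (disk_out_ref.ex2 is one of ParaView's example datasets).
    data = OpenDataFile("disk_out_ref.ex2")

    # Contouring: extract an isosurface of a scalar field.
    contour = Contour(Input=data)
    contour.ContourBy = ["POINTS", "Temp"]
    contour.Isosurfaces = [400.0]

    # Clipping: cut away half of the domain with a plane.
    clip = Clip(Input=data)
    clip.ClipType.Normal = [1.0, 0.0, 0.0]

    # Render the contour and save an image.
    Show(contour)
    Render()
    SaveScreenshot("contour.png")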

ParaView 4.0.1 Release

Date: June 13, 2013
Time: 2:00 – 2:15 p.m. EST

This webinar highlights the updates in the ParaView 4.0.1 release.

Big Data Analysis: How to leverage ParaView on HPC Resources

Date: March 26, 2013
Time: 1:30 – 2:30 p.m. EST

This webinar covers how to run ParaView on high-performance computing (HPC) systems to visualize massive data sets. Topics covered include: the theory of distributed-memory parallel visualization; compiling ParaView on HPC systems; setting up, submitting, and running batch visualization sessions; and tunneling for interactive visualization on remote supercomputers.
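
For batch visualization, the same Python scripting runs under pvbatch across MPI ranks. The sketch below is illustrative only: the file name data.pvtu and the field name pressure are placeholders, and the launch line will vary with your MPI installation and scheduler.

    # Run in parallel, e.g.: mpiexec -n 64 pvbatch batch_slice.py
    from paraview.simple import *

    # Each MPI rank reads its share of the partitioned dataset.
    reader = XMLPartitionedUnstructuredGridReader(FileName=["data.pvtu"])

    # Slice through the volume; the filter executes in a distributed fashion.
    slc = Slice(Input=reader)
    slc.SliceType.Normal = [0.0, 0.0, 1.0]

    # Color by a point field and save a composited image.
    view = CreateRenderView()
    display = Show(slc, view)
    ColorBy(display, ("POINTS", "pressure"))
    view.ResetCamera()
    SaveScreenshot("slice.png", view, ImageResolution=[1920, 1080])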

ParaView Catalyst: Leverage In situ Analysis with VTK and ParaView

Date: September 26, 2013
Time: 2:00 – 3:00 p.m. EST

This webinar introduces ParaView Catalyst, a flexible in situ analysis library that leverages VTK and ParaView, and highlights its benefits and how to use it with your own simulations. Catalyst enables simulation codes to link to ParaView and VTK and perform, in situ (i.e., while the simulation is running), the same type of work that is typically done when post-processing the results of a simulation run. Because the desired quantities are computed with the same compute power the simulation itself uses (for MPI-enabled simulation codes, post-processing is typically done on machines with much less compute power than the simulation ran on), Catalyst reduces the overall time to glean the desired information from a run. It has the added benefit of reducing file I/O while maintaining high fidelity in the computed quantities.
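
To give a flavor of what this looks like in practice, here is a minimal sketch of a Catalyst Python co-processing script in the classic (ParaView 4.x era) style. The channel name "input" and the isovalue are assumptions; the simulation's adapter supplies the actual grid each time step, and a complete script would also register writers or image outputs.

    from paraview.simple import *
    from paraview import coprocessing

    def CreateCoProcessor():
        def _CreatePipeline(coprocessor, datadescription):
            class Pipeline:
                # Producer wrapping the grid the simulation adapter
                # publishes on the channel named "input" (an assumption).
                grid = coprocessor.CreateProducer(datadescription, "input")
                # Compute an isosurface in situ rather than in
                # post-processing.
                contour = Contour(Input=grid)
                contour.Isosurfaces = [0.5]
            return Pipeline()

        class CoProcessor(coprocessing.CoProcessor):
            def CreatePipeline(self, datadescription):
                self.Pipeline = _CreatePipeline(self, datadescription)

        return CoProcessor()

    coprocessor = CreateCoProcessor()

    def RequestDataDescription(datadescription):
        # Called by the simulation adapter each time step to declare
        # which meshes and fields the pipeline needs.
        coprocessor.LoadRequestedData(datadescription)

    def DoCoProcessing(datadescription):
        # Called when output is due: push fresh simulation data
        # through the pipeline and write any registered outputs.
        coprocessor.UpdateProducers(datadescription)
        coprocessor.WriteData(datadescription)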