These are some notable success stories from simulation codes that have been Catalyst-enabled.

PyFR

PyFR is an open-source Python-based framework for solving the Euler and Navier-Stokes equations. It operates on a range of hardware platforms, including CPU clusters, NVIDIA GPU clusters and AMD GPU clusters.

PyFR was run on Titan at Oak Ridge National Laboratory, taking advantage of Titan’s NVIDIA Tesla K20X GPUs to simulate jet engine behavior. PyFR computed the temperature, pressure and velocity on the GPUs and passed this information, along with the grid, to Catalyst directly on the GPU. Catalyst utilized a VTK-m pipeline so that the data stayed on the GPU while contours and slices were computed.
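
For a flavor of such a pipeline, here is a minimal sketch of a Catalyst Python script that computes contours and slices with ParaView’s paraview.simple module. The field name ('Temperature') and the contour and slice values are assumptions, and the sketch omits the VTK-m GPU-resident execution used in the Titan runs.

    from paraview.simple import *

    def create_pipeline(source):
        # Isocontour the (assumed) temperature field; in the PyFR runs the
        # equivalent filters executed on the GPU via VTK-m.
        contour = Contour(Input=source)
        contour.ContourBy = ['POINTS', 'Temperature']
        contour.Isosurfaces = [300.0, 350.0, 400.0]

        # An axis-aligned slice through the middle of the domain.
        slice_ = Slice(Input=source)
        slice_.SliceType = 'Plane'
        slice_.SliceType.Origin = [0.0, 0.0, 0.0]
        slice_.SliceType.Normal = [0.0, 0.0, 1.0]

        return contour, slice_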

The main PyFR page is at http://www.pyfr.org/, which has user, theory and developer information. The latest version of the code can be checked out from https://github.com/vincentlab/PyFR.


HPCMP CREATE™ HELIOS

The HELicopter Overset Simulations (HELIOS) software is a multi-disciplinary computational suite focused on simulating rotorcraft. It uses a variety of solvers to accurately compute the flow field, mechanical deformation and fluid-structure interaction. The fluid is computed over a hybrid mesh: adaptive Cartesian grids are used in regions away from the rotorcraft geometry for fast and accurate computation of the wake, while unstructured grids near the rotorcraft capture the viscous turbulent flow near the body and accurately represent the complex geometry.

ParaView Catalyst was started through a U.S. Army SBIR to support the in situ analysis and visualization needs of HELIOS. Particle path tracking is an important analysis tool for understanding dynamic rotorcraft flow regimes, and it is supported through a tight Catalyst integration. Through Catalyst, HELIOS users can specify the particle injection location, the injection frequency, when to begin injecting and the output format, as illustrated below. Additionally, the in situ particle paths support continuing particle path integration even for restarted simulations.
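
The following block is purely illustrative, with hypothetical names rather than HELIOS’s actual input syntax; it simply shows the kind of particle injection parameters a user can specify.

    # Hypothetical parameter names, for illustration only; this is not
    # HELIOS's actual input format for Catalyst particle tracking.
    particle_tracking = {
        "injection_location": [0.0, 0.0, 1.5],  # seed point in grid coordinates
        "injection_frequency": 10,              # inject a batch every 10 steps
        "injection_start_step": 100,            # begin injecting at step 100
        "output_format": "vtp",                 # e.g. VTK PolyData particle paths
    }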

HELIOS is developed by the DoD’s HPCMP CREATE-Air Vehicles (CREATE-AV) program and the U.S. Army.


PHASTA

The Parallel Hierarchic Adaptive Stabilized Transient Analysis (PHASTA) code is a CFD solver that computes numerical solutions to the compressible and incompressible Navier-Stokes equations. It is a highly parallelized code that has been shown to scale to 3 million MPI ranks.

PHASTA has had a major impact on the computational performance of Catalyst by pushing the envelope of in situ analysis and visualization to ever higher process counts. Lessons learned include the following (a sketch illustrating the last two appears after the list):

  • Discovering the importance of freezing Python modules into a static executable to avoid all processes hitting the disk trying to load Catalyst’s Python code.
  • Loading a Catalyst Python script on a single MPI process and then broadcasting it out to avoid IO.
  • Not generating Python byte-compiled code files (.pyc files) to avoid IO.
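
Here is a minimal sketch of those last two points, written with mpi4py as an assumption; PHASTA itself implements the equivalent logic in its own adaptor code.

    import sys
    from mpi4py import MPI

    # Avoid generating .pyc files: no byte-compiled code is written to disk.
    sys.dont_write_bytecode = True

    comm = MPI.COMM_WORLD

    script_source = None
    if comm.Get_rank() == 0:
        # Only rank 0 touches the filesystem to read the Catalyst script.
        with open("catalyst_pipeline.py") as f:
            script_source = f.read()

    # All other ranks receive the script text over the network, avoiding
    # hundreds of thousands of redundant reads against the same file.
    script_source = comm.bcast(script_source, root=0)

    # Each rank can now hand script_source to Catalyst, e.g. by exec'ing
    # it into a module object (details depend on the Catalyst version).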

PHASTA still holds the high-water mark for running Catalyst, at 256K MPI ranks on the IBM Blue Gene/Q Mira at Argonne National Laboratory in 2014. The run generated contours and output images of the results. Beyond this, PHASTA was used to investigate different in situ and in transit strategies on a combination of the Intrepid and Eureka HPC machines at Argonne National Laboratory, using up to 160K cores for the simulation run. See http://dl.acm.org/citation.cfm?id=2148653 for more details.


PHASTA is led by Dr. Kenneth Jansen at the University of Colorado Boulder. It is available for download at https://github.com/PHASTA/phasta.

MPAS Ocean


MPAS Ocean, the ocean modeling component of the Model for Prediction Across Scales (MPAS) suite of tools, is a climate modeling code that simulates the ocean system at time scales from months to millennia and spatial scales from sub-1 km to global circulation. MPAS Ocean has demonstrated the ability to accurately reproduce mesoscale ocean activity with a local mesh refinement strategy. It uses an unstructured Voronoi mesh representation with smoothly varying element size to efficiently represent both low- and high-resolution regions. Additionally, it uses a C-grid discretization which is especially well suited for higher-resolution ocean simulations.

An experimental version of MPAS Ocean has been instrumented to utilize Catalyst for in situ analysis and visualization. Based on user requirements, the adaptor that converts MPAS Ocean data structures to VTK data structures provides the grid in ten different configurations. These include a spherical representation, a latitude-longitude projection representation and a local simulation representation. In addition to these layouts, both primal and dual grids are available, as is the choice between a fully 3D grid and a collapsed level/2D representation. Examples of three output configurations are shown below. Having multiple grid configurations ensures that MPAS Ocean results can be examined in whichever manner is most convenient for the data scientists. Additionally, by using Catalyst the data scientists can examine results during the run to understand the behavior of the simulation.
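
To give a sense of what offering multiple layouts involves, here is a minimal sketch of the coordinate transforms an adaptor can apply to produce the spherical and latitude-longitude layouts; the argument conventions, the Earth radius and the vertical exaggeration factor are assumptions.

    import numpy as np

    EARTH_RADIUS = 6.371e6  # meters; assumed value for illustration

    def to_spherical(lon, lat, depth):
        # Map (longitude, latitude, depth) columns, in radians and meters,
        # to Cartesian points on the globe for the spherical layout.
        r = EARTH_RADIUS - depth
        x = r * np.cos(lat) * np.cos(lon)
        y = r * np.cos(lat) * np.sin(lon)
        z = r * np.sin(lat)
        return np.column_stack([x, y, z])

    def to_latlon(lon, lat, depth, vertical_exaggeration=100.0):
        # The latitude-longitude projection layout uses the angles as
        # planar coordinates and the (exaggerated) depth as height.
        return np.column_stack([lon, lat, -vertical_exaggeration * depth])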

[Figures: three example MPAS Ocean grid configurations]

The MPAS-O User’s Guide is available at http://oceans11.lanl.gov/mpas_data/mpas_ocean/users_guide/release_2.0/mpas_ocean_users_guide_2.0.pdf. The code can be checked out from ????. The main citation for MPAS-O is http://www.sciencedirect.com/science/article/pii/S1463500313000760.

VPIC

VPIC, short for Vector Particle-In-Cell, is a code that simulates plasma behavior. VPIC enables researchers to study plasmas in ways that exceed conventional theory- and experiment-driven approaches. To simulate plasma behavior, VPIC follows the motions of simulated particles as simulated electric and magnetic force fields push the particles. Each particle represents thousands of plasma ions and electrons. VPIC tracks particle motion in three dimensions, accounting for increases in particle masses as their speeds approach that of light, according to Einstein’s theory of special relativity.
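
As a minimal sketch of the relativistic correction mentioned above, the block below computes the standard special-relativity Lorentz factor by which a particle’s effective mass grows as its speed approaches c; this illustrates the physics, not VPIC’s actual implementation.

    import numpy as np

    C = 299792458.0  # speed of light in m/s

    def lorentz_factor(velocity):
        # gamma = 1 / sqrt(1 - |v|^2 / c^2); grows without bound as
        # |v| -> c, which is how particle masses effectively increase.
        speed_sq = np.sum(np.square(velocity), axis=-1)
        return 1.0 / np.sqrt(1.0 - speed_sq / C**2)

    # Example: a particle moving at half the speed of light.
    print(lorentz_factor(np.array([0.5 * C, 0.0, 0.0])))  # ~1.1547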

VPIC has been run on up to 180,224 MPI ranks, and examining raw simulation results from even a moderately sized run can be a daunting task. By using Catalyst, scientists can apply filters such as slices and contours to easily examine the results and gain insight into the physics of interest.


VPIC is being actively developed at Los Alamos National Laboratory. It is an open-source code available at https://github.com/losalamos/vpic. For more information on VPIC, contact William Daughton at daughton@lanl.gov, or see the following paper:

K. J. Bowers, B. J. Albright, L. Yin, B. Bergen, and T. J. T. Kwan, “Ultrahigh performance three-dimensional electromagnetic relativistic kinetic plasma simulation,” Physics of Plasmas 15, 055703 (2008); doi: 10.1063/1.2840133.

RAGE

RAGE, short for Radiation Adaptive Grid Eulerian, is a radiation-hydrodynamics code. It is a multidimensional, multi-material, massively parallel, Eulerian hydrodynamics code that solves the Euler equations coupled with a radiation diffusion equation, and it can be used in a variety of high-deformation flow problems.

RAGE has utilized Catalyst to successfully produce a variety of interesting and non-trivial in situ visualizations. In addition, it has been used by simulation scientists to prototype in situ workflows for analyzing their data.

For more information on RAGE, see the paper M. Gittings, R. Weaver, M. Clover, T. Betlach, N. Byrne, R. Coker, E. Dendy, R. Hueckstaedt, K. New, W. R. Oakes, D. Ranta, and R. Stefan, “The RAGE radiation-hydrodynamic code,” Comput. Sci. Disc. 1, 015005 (2008).

UH3D

UH3D is a hybrid plasma simulation code that treats ions kinetically and electrons as a fluid. It has been used to perform global simulations of the interaction of the solar wind with Earth’s magnetosphere in unprecedented detail, revealing new ion kinetic effects that are entirely absent from fluid models. UH3D is being used at LANL to study these fine-scale kinetic plasma processes and how they feed back into the global dynamics of the magnetospheres of Earth and of Jupiter’s moon Ganymede. An experimental build of UH3D coupled with Catalyst is being used to automatically generate a variety of visualizations of this simulation, to speed the investigation of results.

More information about UH3D can be found in “The link between shocks, turbulence, and magnetic reconnection in collisionless plasmas” by Karimabadi et al., Physics of Plasmas 21 (2014). Information about UH3D with Catalyst is in “In-situ Visualization for Global Hybrid Simulations” by Karimabadi et al. in XSEDE ’13.

CAM

The Community Atmosphere Model (CAM) is a global atmosphere model developed by the Atmosphere Model Working Group. CAM is used both as a standalone model and as the atmosphere component of the Community Earth System Model (CESM). CAM can be configured to use different dynamical cores; we provide an adaptor for two of them: the Finite Volume and Spectral Element dynamical cores. A Catalyst Adaptor for CAM version 5.3 is provided with ParaView 5.0 in CoProcessing/Adaptors/CamAdaptor. See the ParaView online documentation (or README.md) for information on how to build CAM with the Catalyst Adaptor and for a simple processing script that saves data from the simulation in VTK files; a trimmed sketch of such a script appears below. See the Catalyst documentation for how to perform more complex processing of the simulation data.
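
Here is a trimmed sketch of a legacy (ParaView 5.x era) coprocessing script of that kind, which writes the adaptor-provided grid to VTK files; the producer name "input", the writer type and the output frequency are assumptions.

    from paraview.simple import *
    from paraview import coprocessing

    def CreateCoProcessor():
        def _CreatePipeline(coprocessor, datadescription):
            class Pipeline:
                # "input" is the name under which the adaptor is assumed
                # to register the simulation grid with Catalyst.
                grid = coprocessor.CreateProducer(datadescription, "input")
                # Write the grid to a VTK XML file every 10 time steps.
                writer = coprocessor.CreateWriter(
                    XMLUnstructuredGridWriter, "cam_grid_%t.vtu", 10)
            return Pipeline()

        class CoProcessor(coprocessing.CoProcessor):
            def CreatePipeline(self, datadescription):
                self.Pipeline = _CreatePipeline(self, datadescription)

        coprocessor = CoProcessor()
        coprocessor.SetUpdateFrequencies({"input": [10]})
        return coprocessor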