[Paraview] vtkNetCDFCFReader parallel performance

Andy Bauer andy.bauer at kitware.com
Wed Feb 6 10:38:46 EST 2013


Hi Ken,

I'm having some performance issues with a fairly large NetCDF file using
the vtkNetCDFCFReader. Its dimensions are 768 (lat) by 1152 (lon) with
9855 time steps (no elevation dimension), and it has a single float
variable with these dimensions, pr(time, lat, lon), which makes the file
around 33 GB. I'm running on Hopper with small numbers of processes (at
most 24, which is the number of cores per node), and the run time seems
to increase dramatically as I add more processes. Each test read in the
first 2 time steps and did nothing else. The results are below, though
they weren't gathered too rigorously:

numprocs -- time
 1 -- 1:22
 2 -- 1:52
 4 -- 7:52
 8 -- 5:34
16 -- 10:46
22 -- 10:37
24 -- didn't complete on one of Hopper's "regular" nodes with 32 GB of
memory, but it did run in a reasonable amount of time on one of Hopper's
big-memory nodes with 64 GB.
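
For reference, the test is essentially just a pvbatch script along these
lines (a rough sketch -- the file path is a placeholder, and the reader
proxy name and its properties may differ a bit between ParaView
versions):

    # Sketch of the timing test: open the CF NetCDF file and update the
    # pipeline for only the first two time steps. Launched in parallel
    # with something like: aprun -n <numprocs> pvbatch read_two_steps.py
    # (script name and file path are placeholders).
    from paraview.simple import *

    reader = NetCDFReader(FileName='/path/to/pr_768x1152.nc')

    # TimestepValues is populated from the file's time dimension; in
    # parallel each rank reads its own piece of the requested time step.
    for t in reader.TimestepValues[:2]:
        reader.UpdatePipeline(t)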

I have the data in a reasonable place on Hopper. I'm still playing around
with settings (things get a bit better if I set DVS_MAXNODES --
http://www.nersc.gov/users/computational-systems/hopper/performance-and-optimization/hopperdvs/),
but this seems a bit odd, since I'm not seeing problems like this with a
data set that has spatial dimensions of 17*768*1152 and 324 time steps.

Any quick thoughts on this? I'm still investigating, but I was hoping you
could point out if I'm doing anything stupid.

Thanks,
Andy