<div dir="ltr">Thank you very much, David,<br>I have converted all the datasets on all the machines to pvd. Now it works much better. I think there will be no more "Error: insufficient memory" problems.<br><br> Salman<br>
<br><div class="gmail_quote">2010/3/4 David E DeMarle <span dir="ltr"><<a href="mailto:dave.demarle@kitware.com">dave.demarle@kitware.com</a>></span><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div class="im">Yep, the reader for the legacy vtk format is not parallel-compliant.<br>
<br>
In that case what _should_ happen is that each of the n nodes in your<br>
ad-hoc cluster will read the whole file, and then the pipeline will<br>
crop out on each node the other (n-1)/n'th of the cells. So<br>
temporarily the memory consumption will be very high, but most of that<br>
will be freed right away.<br>
<br>
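The peak-then-drop behavior described above can be sketched with a toy calculation (a hypothetical helper for illustration only, assuming the file size dominates memory use and ignoring pipeline overhead):<br>

```python
def legacy_vtk_memory_mb(file_mb, n_nodes):
    """Sketch of per-node memory for a non-parallel reader.

    Each of the n nodes first reads the whole file (the peak),
    then crops away the other (n-1)/n of the cells, keeping 1/n.
    """
    peak = file_mb              # every node temporarily loads the full file
    steady = file_mb / n_nodes  # roughly what remains after cropping
    return peak, steady

# Example: a 4000 MB file on an 8-node ad-hoc cluster
peak, steady = legacy_vtk_memory_mb(4000, 8)
```

So each node briefly needs memory for the whole file, even though it keeps only its own 1/n share afterward.<br>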
If you can read the file, you could convert it to a parallel compliant<br>
format with paraview by reading it in, then saving the data in either<br>
paraview (pvd), ensight, or exodus format. All of those are better<br>
parallelized and won't have the temporary memory consumption behavior<br>
of the legacy format.<br>
<br>
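For reference, a .pvd file is just a small XML index that points at per-piece data files, which is what lets each pvserver node read only its own piece. A minimal sketch of writing such a collection file follows (the piece filenames are hypothetical; in practice you would simply save from the ParaView GUI as described above):<br>

```python
import xml.etree.ElementTree as ET

def write_pvd(pvd_path, piece_files):
    """Write a minimal ParaView .pvd collection file.

    A .pvd file is an XML index; each <DataSet> entry points at one
    piece (e.g. a .vtu file) that a single node can read on its own.
    """
    root = ET.Element("VTKFile", type="Collection", version="0.1")
    coll = ET.SubElement(root, "Collection")
    for part, fname in enumerate(piece_files):
        ET.SubElement(coll, "DataSet",
                      timestep="0", part=str(part), file=fname)
    ET.ElementTree(root).write(pvd_path)

# Hypothetical piece files, as if saved from 8 pvserver nodes
write_pvd("MyTest129.pvd",
          ["MyTest129_%d.vtu" % i for i in range(8)])
```

Because the index is cheap to read and each piece is a separate file, the readers for this format avoid the every-node-loads-everything spike of the legacy format.<br>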
</div><div class="im">David E DeMarle<br>
Kitware, Inc.<br>
R&D Engineer<br>
28 Corporate Drive<br>
Clifton Park, NY 12065-8662<br>
Phone: 518-371-3971 x109<br>
<br>
<br>
<br>
</div><div><div></div><div class="h5">On Thu, Mar 4, 2010 at 12:24 PM, Salman SHAHIDI <<a href="mailto:salshahidi@gmail.com">salshahidi@gmail.com</a>> wrote:<br>
> The file format is VTK (filename is MyTest129.vtk).<br>
><br>
> 2010/3/4 David E DeMarle <<a href="mailto:dave.demarle@kitware.com">dave.demarle@kitware.com</a>><br>
>><br>
>> ParaView tries hard NOT to ship data over the network, so every<br>
>> machine potentially has to read the whole file.<br>
>><br>
>> So, if the file format itself isn't partitioned into multiple files<br>
>> (in which case you could possibly get by with putting the right sub<br>
>> file on each disk), and unless you have a shared filesystem,<br>
>> replication onto each disk is your only option.<br>
>><br>
>> Again, which file format are you reading? What is the filename,<br>
>> including the extension?<br>
>><br>
>> David E DeMarle<br>
>> Kitware, Inc.<br>
>> R&D Engineer<br>
>> 28 Corporate Drive<br>
>> Clifton Park, NY 12065-8662<br>
>> Phone: 518-371-3971 x109<br>
>><br>
>><br>
>><br>
>> On Thu, Mar 4, 2010 at 11:33 AM, Salman SHAHIDI <<a href="mailto:salshahidi@gmail.com">salshahidi@gmail.com</a>><br>
>> wrote:<br>
>> > Each time I have to copy the same data to all the workstations in<br>
>> > order to use them all. In your opinion, is this a good way to do it?<br>
>> ><br>
>> > 2010/3/4 David E DeMarle <<a href="mailto:dave.demarle@kitware.com">dave.demarle@kitware.com</a>><br>
>> >><br>
>> >> An ad-hoc cluster like you've got is fine, as long as you have MPI set<br>
>> >> up on the machines and are running a copy of paraview's pvserver on it<br>
>> >> that has been compiled to use MPI. (Our binaries do not.)<br>
>> >><br>
>> >> The data type (Unstructured Grid) doesn't matter, I think all VTK data<br>
>> >> structure types can be split up (aka streamed). It is the file format<br>
>> >> (*.vtk, *.vt?, *.xdmf, *.exo, *.case etc) that determines what reader<br>
>> >> is invoked, and thus whether the data will be read in parallel or<br>
>> >> not.<br>
>> >><br>
>> >> David E DeMarle<br>
>> >> Kitware, Inc.<br>
>> >> R&D Engineer<br>
>> >> 28 Corporate Drive<br>
>> >> Clifton Park, NY 12065-8662<br>
>> >> Phone: 518-371-3971 x109<br>
>> >><br>
>> >><br>
>> >><br>
>> >> On Thu, Mar 4, 2010 at 11:00 AM, Salman SHAHIDI <<a href="mailto:salshahidi@gmail.com">salshahidi@gmail.com</a>><br>
>> >> wrote:<br>
>> >> > Thank you David,<br>
>> >> ><br>
>> >> > My dataset is of type: "Unstructured Grid"<br>
>> >> > Also, I have 2 other questions:<br>
>> >> > 1) Which datasets are the readers able to break up?<br>
>> >> > 2) I do not have a cluster, so I copied the same dataset to all the<br>
>> >> > WS. Is this the correct way to get parallel computation?<br>
>> >> ><br>
>> >> > Faithfully yours,<br>
>> >> ><br>
>> >> > Salman<br>
>> >> ><br>
>> >> > 2010/3/4 David E DeMarle <<a href="mailto:dave.demarle@kitware.com">dave.demarle@kitware.com</a>><br>
>> >> >><br>
>> >> >> Which data file format?<br>
>> >> >><br>
>> >> >> Not all readers are able to break up the data well, in which case<br>
>> >> >> paraview handles it in one of several ways, none of which is ideal.<br>
>> >> >><br>
>> >> >> David E DeMarle<br>
>> >> >> Kitware, Inc.<br>
>> >> >> R&D Engineer<br>
>> >> >> 28 Corporate Drive<br>
>> >> >> Clifton Park, NY 12065-8662<br>
>> >> >> Phone: 518-371-3971 x109<br>
>> >> >><br>
>> >> >><br>
>> >> >><br>
>> >> >> On Thu, Mar 4, 2010 at 9:36 AM, Salman SHAHIDI<br>
>> >> >> <<a href="mailto:salshahidi@gmail.com">salshahidi@gmail.com</a>><br>
>> >> >> wrote:<br>
>> >> >> > Hi All,<br>
>> >> >> ><br>
>> >> >> > I have 8 Debian workstations (WS) A, B, C, D, E, F, G, and H,<br>
>> >> >> > each with ParaView 3.6.1. I have configured them all with<br>
>> >> >> > passwordless ssh. Each WS has 2 cores, so 16 processors are<br>
>> >> >> > available. On WS A I have a machine file listing all the machine<br>
>> >> >> > names. I have also copied the same dataset to all the machines (I<br>
>> >> >> > am not sure whether this is correct). The problem is ParaView's<br>
>> >> >> > memory consumption. With 8 WS I do not get 8 times the memory<br>
>> >> >> > availability. I hoped that when I run on 8 machines, the memory<br>
>> >> >> > consumption on each machine would be 1/8 of what it is when I use<br>
>> >> >> > only one machine. What is the reason for this? Do I need a special<br>
>> >> >> > configuration to minimize memory consumption?<br>
>> >> >> ><br>
>> >> >> > Thank you all,<br>
>> >> >> ><br>
>> >> >> > Salman<br>
>> >> >> ><br>
>> >> >> > ----------------------------------------<br>
>> >> >> ><br>
>> >> >> > Note:<br>
>> >> >> ><br>
>> >> >> > Command line in the first workstation A:<br>
>> >> >> ><br>
>> >> >> > mpirun --mca btl_tcp_if_include eth0 -machinefile mymachinefile.txt -np 16 /usr/local/bin/pvserver --use-offscreen-rendering<br>
>> >> >> > Listen on port: 11111<br>
>> >> >> > Waiting for client...<br>
>> >> >> > Client connected.<br>
>> >> >> ><br>
>> >> >> > Then, in ParaView (also run on WS A), I add a localhost server<br>
>> >> >> > that refers to all 8 servers.<br>
>> >> >> ><br>
>> >> >> > _______________________________________________<br>
>> >> >> > Powered by <a href="http://www.kitware.com" target="_blank">www.kitware.com</a><br>
>> >> >> ><br>
>> >> >> > Visit other Kitware open-source projects at<br>
>> >> >> > <a href="http://www.kitware.com/opensource/opensource.html" target="_blank">http://www.kitware.com/opensource/opensource.html</a><br>
>> >> >> ><br>
>> >> >> > Please keep messages on-topic and check the ParaView Wiki at:<br>
>> >> >> > <a href="http://paraview.org/Wiki/ParaView" target="_blank">http://paraview.org/Wiki/ParaView</a><br>
>> >> >> ><br>
>> >> >> > Follow this link to subscribe/unsubscribe:<br>
>> >> >> > <a href="http://www.paraview.org/mailman/listinfo/paraview" target="_blank">http://www.paraview.org/mailman/listinfo/paraview</a><br>
>> >> >> ><br>
>> >> >> ><br>
>> >> ><br>
>> >> ><br>
>> ><br>
>> ><br>
><br>
><br>
</div></div></blockquote></div><br></div>