<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#ffffff">
Hello Helmut,<br>
<br>
8 GB should be more than enough to solve this 200,000-DOF problem.<br>
The GetDP binaries are compiled with PETSc's interface to the UMFPACK
direct solver. Have you tried it? Or are you using GMRES?<br>
getdp myfile.pro -solve myresolution -ksp_type preonly -pc_type lu
-pc_factor_mat_solver_package umfpack
<br>
<br>
But the best approach when tackling big problems is probably to:<br>
1- recompile your own PETSc with OpenMPI support and MUMPS<br>
2- compile GetDP against your PETSc<br>
...which will give you a parallel 'solve'. The preprocessing will
remain serial.<br>
<br>
see:<br>
<a class="moz-txt-link-freetext" href="https://geuz.org/trac/getdp/wiki/PETScCompile">https://geuz.org/trac/getdp/wiki/PETScCompile</a> <br>
./configure --CC=/opt/local/bin/openmpicc
--CXX=/opt/local/bin/openmpicxx --FC=/opt/local/bin/openmpif90
--with-debugging=0 --with-clanguage=cxx --with-shared=0 --with-x=0
--download-mumps=1 --download-parmetis=1 --download-scalapack=1
--download-blacs=1 --with-scalar-type=complex<br>
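Once PETSc and GetDP have been rebuilt this way, the parallel direct
solve could then be launched along these lines (a sketch only; the
process count, file name, and resolution name are placeholders to
adapt to your problem):<br>
<br>

```shell
# Hypothetical invocation: run GetDP on 4 MPI processes and select the
# MUMPS parallel direct solver through PETSc's runtime options.
mpirun -np 4 getdp myfile.pro -solve myresolution \
    -ksp_type preonly -pc_type lu \
    -pc_factor_mat_solver_package mumps
```

<br>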
<br>
Good luck!<br>
<br>
Guillaume<br>
<br>
<br>
<br>
On 14/12/2010 06:28, Helmut Müller wrote:
<blockquote
cite="mid:25315A0F-1401-4985-BBB6-C61B591BA648@ingenieurbuero-mbi.de"
type="cite"><span class="Apple-style-span" style="font-family:
Times;">
<pre>Hi all,</pre>
<pre>
</pre>
<pre>first I'd like to thank you for this impressive software!</pre>
<pre>
</pre>
<pre>I use it for (quite simple) simulations in building physics; I just solve heat equations. I therefore have quite large models (2 m by 2 m by 2 m) with some small parts or thin layers (10 mm).</pre>
<pre>Unfortunately I have to spend a lot of time adjusting the mesh and/or simplifying the geometry, because I didn't manage to solve problems with more than approx. 220,000 DOF on my Mac (8 GB RAM, quad-core). Those problems are solved within seconds or a few minutes.</pre>
<pre>
</pre>
<pre>From working with other FEM simulations I know that it is really important to have a "good" mesh, but I'd like to spend less time optimizing the geometry and/or the mesh, at the price of longer calculation times on larger models. A longer calculation time</pre>
<pre>would cost me far less than optimization.</pre>
<pre>
</pre>
<pre>In this context I have read a mail on the list:</pre>
<pre>> This has been successfully tested with both iterative (GMRES+SOR) and </pre>
<pre>> direct (MUMPS) parallel solvers on up to 128 CPUs, for test-cases up to </pre>
<pre>> 10 millions dofs.</pre>
<pre>
</pre>
<pre>With which trick or procedure has this been done? On which platform? How can I at least use the available memory to perform such calculations? My getdp 2.1 on MacOS (binary download) uses only a small part, ca. 1 GB, of the available memory; PETSc fails with</pre>
<pre>a malloc error message; and the new release of getdp uses all cores, but with no benefit for the maximum possible model size in terms of DOF. So I assume that with 8 GB it should be possible to do calculations of at least 500,000 DOF.</pre>
<pre>
</pre>
<pre>So, what am I missing? Could partitioning the mesh and doing separate, iterative calculations be a solution?</pre>
<pre>
</pre>
<pre>Thanks in advance for any suggestion. I assume that other people are interested in this too.</pre>
<pre>
</pre>
<pre>Helmut Müller</pre>
</span>
<div><br>
</div>
<pre wrap="">
<fieldset class="mimeAttachmentHeader"></fieldset>
_______________________________________________
getdp mailing list
<a class="moz-txt-link-abbreviated" href="mailto:getdp@geuz.org">getdp@geuz.org</a>
<a class="moz-txt-link-freetext" href="http://www.geuz.org/mailman/listinfo/getdp">http://www.geuz.org/mailman/listinfo/getdp</a>
</pre>
</blockquote>
<br>
</body>
</html>