[Getdp] large problems with getdp
Helmut Müller
helmut.mueller at ingenieurbuero-mbi.de
Tue Dec 14 12:28:59 CET 2010
Hi all,
First, I'd like to thank you for this impressive software!
I use it for (quite simple) simulations in building physics; I just solve heat equations. As a result I have quite large models (2 m by 2 m by 2 m) containing some small parts or thin layers (10 mm).
Unfortunately I have to spend a lot of time adjusting the mesh and/or simplifying the geometry, because I didn't manage to solve problems with more than approx. 220,000 DOF on my Mac (8 GB RAM, quad-core). Those problems are solved within seconds or a few minutes.
From working with other FEM simulations I know that it is really important to have a "good" mesh, but I'd like to spend less time optimizing the geometry and/or the mesh, at the price of longer calculation times on larger models. A longer calculation time would cost me far less than the optimization effort.
In this context I have read a mail on the list:
> This has been successfully tested with both iterative (GMRES+SOR) and
> direct (MUMPS) parallel solvers on up to 128 CPUs, for test-cases up to
> 10 millions dofs.
With which trick or procedure has this been done? On which platform? How can I at least use the available memory to perform such calculations? My GetDP 2.1 on Mac OS (binary download) uses only a small part, ca. 1 GB, of the available memory; PETSc fails with a malloc error message; and the new release of GetDP uses all cores, but with no benefit for the maximum possible model size in terms of DOF. So I assume that with 8 GB it should be possible to do calculations with at least 500,000 DOF.
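In case it helps to make the question concrete, this is roughly how I imagine such a run would look. I am only guessing here: as far as I understand, GetDP forwards unrecognized command-line options to PETSc, so an iterative GMRES+SOR solve (which should need far less memory than a direct factorization) might be selected like this. The file name "thermal.pro" and the resolution/post-operation names are just placeholders on my side, and I do not know whether the binary build accepts these PETSc options:

    # placeholder names; -ksp_*/-pc_* are standard PETSc options
    getdp thermal.pro -solve ThermalResolution -pos ThermalMap \
          -ksp_type gmres -pc_type sor -ksp_rtol 1e-8 -ksp_monitor

Is that the intended way, or is there a different mechanism (e.g. settings in the .pro file)?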
So, what am I missing? Could partitioning the mesh and doing separate, iterative calculations be a solution?
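If partitioning is indeed a sensible direction, my guess at the first step would be to let Gmsh split the mesh, e.g. (the geometry file name and the number of partitions are arbitrary placeholders, and I do not know how GetDP would then have to be run on the individual parts):

    # placeholder file name; -part splits the generated mesh into 4 partitions
    gmsh -3 -part 4 model.geo -o model_partitioned.msh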
Thanks in advance for any suggestions. I assume other people are interested in this too.
Helmut Müller