run getdp backstage
Christophe Geuzaine
Christophe.Geuzaine at ulg.ac.be
Wed Jun 13 16:33:23 CEST 2001
Lin Ji wrote:
>
> Hi, Christophe,
> Thank you for the advice. May I offer a small suggestion about the
> post-processing data format, based on my experience using 'getdp'? When
> we output results on a regular mesh (e.g. rectangular in 2D), we don't
> really need the time step and the position of each point: they are
> redundant. We only need the solution value at each point at each time
> step. As long as you tell me the order in which the solution is written
> (e.g. (t,y,x) or (x,y,t) in 2D), we can reconstruct it. This requires
> only 1/5 of the disk space used by 'TimeTable', and it saves time when
> reading the data into, for example, 'matlab'.
Yes, good point. Note, though, that if you use the Table format you are
already almost optimal, since the geometrical information is stored only
once per point. Anyway, we may provide a user-defined file format in the
future.
> Also, can you give me an idea of how much the problem size increases when
> I change the interpolation order from 1 to 2 on the same mesh? If I halve
> my mesh size, how much more memory does 'getdp' need for 1st and 2nd
> order respectively?
> My current computer (Sun Ultra 10) has 128 MB of memory, and it took 4
> hours to solve a problem with 150x300 nodes and 500 time steps using the
> 2nd resolution (wave_t_ex) and 2nd order interpolation. The size of
> 'test.res' is 1.5 GB.
That's big. Do you really need all the information contained in this
'.res' file? (i.e. do you need to save the solution at each time step?).
> However, it took 4 hours for the post-processing (TimeTable data
> format) to finish only 1/3 of the output on 75x150 nodes. I shut it
> down before it finished. I guess you might want to investigate this a
> little. The post-processing should not take longer than solving the
> problem.
Could you monitor the time spent reading the '.res' file? My guess is
that most of the 4 hours are spent reading the 1.5 GB file, and swapping
after that, since everything resides in memory! What you should do is use
the '-separate' flag together with '-solve': this way, each solution
is saved in a separate '.res' file. You can then apply the
post-processing to only a subset of these '.res' files (e.g. 1 out of
100) by using the '-res' command line flag to load '.res' files
selectively. Another point: if getdp spends too much time reading files,
you should use a binary file format (only available for Gmsh output at
the moment...).
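As a sketch, the workflow could look like the following (the '.pro' file
name, the per-step '.res' file names and the post-operation name are
assumptions for illustration; check the names that '-separate' actually
produces on your setup):

```shell
# 1. Solve, writing each time step to its own '.res' file instead of
#    one monolithic 1.5 GB 'test.res' (not run here):
#
#      getdp wave.pro -solve wave_t_ex -separate
#
# 2. Post-process only a subset, e.g. every 100th step, loading the
#    matching '.res' file selectively with '-res'. We just generate
#    the command lines into a file for inspection:
for step in 0 100 200 300 400 500; do
  echo "getdp wave.pro -res wave_${step}.res -pos map_wave"
done > commands.txt
cat commands.txt    # inspect, then run with: sh commands.txt
```

This only reads 6 of the 500 solutions, so both the I/O and the memory
footprint of the post-processing shrink accordingly.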
--
Christophe Geuzaine
Tel: 32 (0) 4 366 37 10 http://geuz.org
Fax: 32 (0) 4 366 29 10 mailto:Christophe.Geuzaine at ulg.ac.be