
5.1.7 Handling large datasets

One of NEMO's weaknesses is also its strong point: programs must generally be able to fit all of their data in (virtual) memory. Although programs usually free memory associated with data that is no longer needed, there is a clear maximum to the number of particles a program can handle in a single snapshot. By default a particle takes up about 100 bytes, which limits the size of a snapshot to about 50,000 particles (roughly 5 Mb of data) on most current workstations.
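To check whether a snapshot is getting close to this limit, the number of bodies can be read off from a listing of the file with tsf (a quick sketch; in the standard snapshot format the Nobj entry in the Parameters set holds the number of bodies):

    % tsf in=run1.out | grep Nobj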

It may happen that your data were generated on a machine with a lot more memory than the machine on which you want to analyze them. As long as you have the disk space, and as long as you don't need programs that cannot operate on the data in serial mode, there is a solution to this problem. Instead of keeping all particles in one snapshot, they are stored in several snapshots with an equal (smaller) number of bodies each; as long as all snapshots have the same time and are stored back to back, most programs that can operate serially will recognize them as one dataset and process them properly. Of course it is best to split the snapshots on the machine with more memory:

    % snapsplit in=run1.out out=run1s.out nbody=10000
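Whether the split worked as intended can again be checked with tsf; each sub-snapshot appears as its own SnapShot set in the listing, so counting those sets (a sketch, assuming the SnapShot tag is printed once per set) gives the number of pieces:

    % tsf in=run1s.out | grep -c SnapShot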

If it is just one particular program (e.g. snapgrid) that needs a lot of extra memory, the following may work:

    % snapsplit in=run1.out out=- nbody=1000 times=3.5 |\
        snapgrid in=- out=run1.ccd nx=1000 ny=1000 stack=t
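The image that snapgrid produces can be handled further with the image tools described in the next section; for example, ccdfits (keywords assumed to be the usual in= and out=) writes it out in FITS format, so it can also be inspected with standard FITS viewers:

    % ccdfits in=run1.ccd out=run1.fits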


(c) Peter Teuben