a fully-parallel μ-FE code


Configuration and Installation

ParFE is configured and installed using GNU Autotools. Ideally, you should just type something like

$ cd parfe
$ ./configure
$ make
$ make install
In practice, the current version of ParFE requires a few more parameters. Although configure tries to detect the compilers and libraries available on your machine and set things up accordingly, it needs to know something about your MPI, HDF5 and Trilinos installations; see below.
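As a sketch, a fuller invocation might look like the following. The --with-* option names and the install prefixes shown here are hypothetical; run ./configure --help to see the options your version of ParFE actually supports.

```shell
# Hypothetical example -- option names and paths must be adapted to your system.
$ cd parfe
$ ./configure \
    --with-hdf5=/opt/hdf5-parallel \
    --with-trilinos=/opt/trilinos
$ make
$ make install
```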

MPI

ParFE requires MPI to compile. MPI is the de-facto standard for distributed-memory computations, and it is available for almost any parallel machine. We have successfully tested MPICH, Open MPI and LAM/MPI, but other MPI implementations should work as well.
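A common way to point configure at your MPI installation is to pass the MPI compiler wrappers as the C and C++ compilers. The wrapper names below (mpicc, mpicxx) are the usual defaults for the implementations listed above, but they may differ on your system.

```shell
# Use the MPI compiler wrappers so configure picks up MPI headers and libraries.
$ ./configure CC=mpicc CXX=mpicxx
```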

HDF5

The parallel I/O library HDF5 must be installed on your system with support for MPI; this requires passing --enable-parallel to HDF5's configure script. HDF5 is a portable, high-performance I/O library developed at the National Center for Supercomputing Applications. It supports a simple but very flexible file format consisting of two fundamental object types: datasets, which contain n-dimensional arrays of arbitrary data, and groups, which may be used to arrange related datasets in a hierarchical structure similar to a Unix file system. HDF5 performs parallel I/O through so-called hyperslabs. For our applications, a hyperslab is simply a linear distribution of an array of data, whose chunks can be read in a distributed and concurrent manner by the processors involved in the computation. Once a distributed object, such as a multivector, has been read from file in a linear distribution, it can be redistributed to match the desired data layout using the Import and Export classes of Epetra. If a parallel filesystem (e.g. RAID) is available, the HDF5 library should be installed with the MPI-IO functions enabled; the application can then take advantage of parallel disk I/O.
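A typical parallel HDF5 build might look as follows. The version number and install prefix are illustrative; --enable-parallel is the flag mentioned above, and using the MPI C compiler wrapper is standard practice for parallel HDF5 builds.

```shell
# Build HDF5 with MPI support (version and prefix are examples only).
$ cd hdf5-x.y.z
$ ./configure CC=mpicc --enable-parallel --prefix=/opt/hdf5-parallel
$ make
$ make install
```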

ParMETIS

METIS and ParMETIS, a pair of graph-partitioning libraries, must be available. Please use METIS 4.x and ParMETIS 3.x with ParFE. The newest versions of both packages (i.e. METIS 5.x and ParMETIS 4.x) are currently not compatible with Trilinos; do not install them yet. Note that installing ParMETIS 3.x is normally enough to run ParFE, since the METIS-related files are also created when running make to build ParMETIS.
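Building ParMETIS 3.x is a plain make (there is no configure step in that series; compiler settings are edited directly in the package's Makefile.in). The directory name below is illustrative:

```shell
# Build ParMETIS 3.x; as noted above, this also produces the METIS files ParFE needs.
$ cd ParMetis-3.x
$ make
```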

Trilinos

ParFE uses several Trilinos packages. You need version 7.0 or higher; we have tested ParFE with version 7.0.3. The Trilinos framework must be installed with support for MPI and with the following packages enabled: AztecOO, Epetra, EpetraExt, IFPACK, ML, Amesos and Teuchos. ML should also be configured to support the METIS library described above (--with-ml_metis). We suggest installing Trilinos by calling make install.
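For the autotools-based Trilinos 7.x series, a configure call along the following lines enables the required packages. Apart from --with-ml_metis (mentioned above), the exact option spellings and the prefix are assumptions to be checked against Trilinos' own documentation:

```shell
# Hypothetical Trilinos 7.x configuration -- verify option names with ./configure --help.
$ ./configure --enable-mpi \
    --enable-aztecoo --enable-epetra --enable-epetraext \
    --enable-ifpack --enable-ml --enable-amesos --enable-teuchos \
    --with-ml_metis \
    --prefix=/opt/trilinos
$ make
$ make install
```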
Note: before compiling ParFE, you have to add the following lines to Epetra_SerialDenseMatrix.h:
inline bool operator == (const Epetra_SerialDenseMatrix& Source) const
{
    return(false);
}
You can add these lines at line 280, for example. Make sure you modify the copy of the file that is actually included by ParFE.

Sample Configure Scripts

ParFE is presently expected to work on most Linux PCs and clusters, and on the CRAY XT3. You may tune the configuration by specifying appropriate environment variables and/or command-line options. The Trilinos configure script for CSCS's CRAY XT3 is here.

Postprocessing Tools

ParaView can be used to visualize the solution. A Python script, createxmf.py, is provided to create an *.xmf file for opening the solution in ParaView. To run the converter, h5py must also be installed. Optionally, one can use NumPy to access the data sets and analyze the solution further.
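The converter's Python dependencies can be installed with pip (the package names h5py and numpy are standard); the import check below simply verifies they are available. The exact invocation of createxmf.py is not shown here, since its arguments depend on your ParFE version.

```shell
# Install and verify the converter's dependencies.
$ pip install h5py numpy
$ python -c "import h5py, numpy"
```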